From e7096c131e5161fa3b8e52a650d7719d2857adfd Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Mon, 9 Dec 2019 00:27:34 +0100 Subject: net: WireGuard secure network tunnel WireGuard is a layer 3 secure networking tunnel made specifically for the kernel, which aims to be much simpler and easier to audit than IPsec. Extensive documentation and description of the protocol and considerations, along with formal proofs of the cryptography, are available at: * https://www.wireguard.com/ * https://www.wireguard.com/papers/wireguard.pdf This commit implements WireGuard as a simple network device driver, accessible in the usual RTNL way used by virtual network drivers. It makes use of the udp_tunnel APIs, GRO, GSO, NAPI, and the usual set of networking subsystem APIs. It has a somewhat novel multicore queueing system designed for maximum throughput and minimal latency of encryption operations, but it is implemented modestly using workqueues and NAPI. Configuration is done via generic Netlink, and following a review from the Netlink maintainer a year ago, several high-profile userspace tools have already implemented the API. This commit also comes with several different tests, both in-kernel tests and out-of-kernel tests based on network namespaces, taking advantage of the fact that sockets used by WireGuard intentionally stay in the namespace in which the WireGuard interface was originally created, exactly like the semantics of userspace tun devices. See wireguard.com/netns/ for pictures and examples. The source code is fairly short, but rather than combining everything into a single file, WireGuard is developed as cleanly separable files, making auditing and comprehension easier. Things are laid out as follows: * noise.[ch], cookie.[ch], messages.h: These implement the bulk of the cryptographic aspects of the protocol, and are mostly data-only in nature, taking in buffers of bytes and spitting out buffers of bytes. They also handle reference counting for their various shared pieces of data, like keys and key lists. * ratelimiter.[ch]: Used as an integral part of cookie.[ch] for ratelimiting certain types of cryptographic operations in accordance with particular WireGuard semantics. * allowedips.[ch], peerlookup.[ch]: The main lookup structures of WireGuard, the former being trie-like with particular semantics, an integral part of the design of the protocol, and the latter just being nice helper functions around the various hashtables we use. * device.[ch]: Implementation of functions for the netdevice and for rtnl, responsible for maintaining the life of a given interface and wiring it up to the rest of WireGuard. * peer.[ch]: Each interface has a list of peers, with helper functions available here for creation, destruction, and reference counting. * socket.[ch]: Implementation of functions related to udp_socket and the general set of kernel socket APIs, for sending and receiving ciphertext UDP packets, and taking care of WireGuard-specific sticky socket routing semantics for the automatic roaming. * netlink.[ch]: Userspace API entry point for configuring WireGuard peers and devices. The API has been implemented by several userspace tools and network management utilities, and the WireGuard project distributes the basic wg(8) tool. * queueing.[ch]: Shared functions on the rx and tx paths for handling the various queues used in the multicore algorithms. * send.c: Handles encrypting outgoing packets in parallel on multiple cores, before sending them in order on a single core, via workqueues and ring buffers.
Also handles sending handshake and cookie messages as part of the protocol, in parallel. * receive.c: Handles decrypting incoming packets in parallel on multiple cores, before passing them off in order to be ingested by the rest of the networking subsystem with GRO via the typical NAPI poll function. Also handles receiving handshake and cookie messages as part of the protocol, in parallel. * timers.[ch]: Uses the timer wheel to implement protocol-specific event timeouts, and gives a set of very simple event-driven entry point functions for callers. * main.c, version.h: Initialization and deinitialization of the module. * selftest/*.h: Runtime unit tests for some of the most security-sensitive functions. * tools/testing/selftests/wireguard/netns.sh: Aforementioned testing script using network namespaces. This commit aims to be as self-contained as possible, implementing WireGuard as a standalone module not needing much special handling or coordination from the network subsystem. I expect future optimizations to the network stack to improve WireGuard, and vice versa, but for the time being, this exists as intentionally standalone. We introduce a menu option for CONFIG_WIREGUARD, as well as a verbose debug log and self-tests via CONFIG_WIREGUARD_DEBUG. Signed-off-by: Jason A. Donenfeld Cc: David Miller Cc: Greg KH Cc: Linus Torvalds Cc: Herbert Xu Cc: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: netdev@vger.kernel.org Signed-off-by: David S. Miller --- include/uapi/linux/wireguard.h | 196 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 196 insertions(+) create mode 100644 include/uapi/linux/wireguard.h (limited to 'include') diff --git a/include/uapi/linux/wireguard.h b/include/uapi/linux/wireguard.h new file mode 100644 index 000000000000..dd8a47c4ad11 --- /dev/null +++ b/include/uapi/linux/wireguard.h @@ -0,0 +1,196 @@ +/* SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) OR MIT */ +/* + * Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. + * + * Documentation + * ============= + * + * The below enums and macros are for interfacing with WireGuard, using generic + * netlink, with family WG_GENL_NAME and version WG_GENL_VERSION. It defines two + * methods: get and set. Note that while they share many common attributes, + * these two methods actually accept a slightly different set of inputs and + * outputs. + * + * WG_CMD_GET_DEVICE + * ----------------- + * + * May only be called via NLM_F_REQUEST | NLM_F_DUMP.
The command should contain + * one but not both of: + * + * WGDEVICE_A_IFINDEX: NLA_U32 + * WGDEVICE_A_IFNAME: NLA_NUL_STRING, maxlen IFNAMSIZ - 1 + * + * The kernel will then return several messages (NLM_F_MULTI) containing the + * following tree of nested items: + * + * WGDEVICE_A_IFINDEX: NLA_U32 + * WGDEVICE_A_IFNAME: NLA_NUL_STRING, maxlen IFNAMSIZ - 1 + * WGDEVICE_A_PRIVATE_KEY: NLA_EXACT_LEN, len WG_KEY_LEN + * WGDEVICE_A_PUBLIC_KEY: NLA_EXACT_LEN, len WG_KEY_LEN + * WGDEVICE_A_LISTEN_PORT: NLA_U16 + * WGDEVICE_A_FWMARK: NLA_U32 + * WGDEVICE_A_PEERS: NLA_NESTED + * 0: NLA_NESTED + * WGPEER_A_PUBLIC_KEY: NLA_EXACT_LEN, len WG_KEY_LEN + * WGPEER_A_PRESHARED_KEY: NLA_EXACT_LEN, len WG_KEY_LEN + * WGPEER_A_ENDPOINT: NLA_MIN_LEN(struct sockaddr), struct sockaddr_in or struct sockaddr_in6 + * WGPEER_A_PERSISTENT_KEEPALIVE_INTERVAL: NLA_U16 + * WGPEER_A_LAST_HANDSHAKE_TIME: NLA_EXACT_LEN, struct __kernel_timespec + * WGPEER_A_RX_BYTES: NLA_U64 + * WGPEER_A_TX_BYTES: NLA_U64 + * WGPEER_A_ALLOWEDIPS: NLA_NESTED + * 0: NLA_NESTED + * WGALLOWEDIP_A_FAMILY: NLA_U16 + * WGALLOWEDIP_A_IPADDR: NLA_MIN_LEN(struct in_addr), struct in_addr or struct in6_addr + * WGALLOWEDIP_A_CIDR_MASK: NLA_U8 + * 0: NLA_NESTED + * ... + * 0: NLA_NESTED + * ... + * ... + * WGPEER_A_PROTOCOL_VERSION: NLA_U32 + * 0: NLA_NESTED + * ... + * ... + * + * It is possible that all of the allowed IPs of a single peer will not + * fit within a single netlink message. In that case, the same peer will + * be written in the following message, except it will only contain + * WGPEER_A_PUBLIC_KEY and WGPEER_A_ALLOWEDIPS. This may occur several + * times in a row for the same peer. It is then up to the receiver to + * coalesce adjacent peers. Likewise, it is possible that all peers will + * not fit within a single message. So, subsequent peers will be sent + * in following messages, except those will only contain WGDEVICE_A_IFNAME + * and WGDEVICE_A_PEERS. It is then up to the receiver to coalesce these + * messages to form the complete list of peers. + * + * Since this is an NLM_F_DUMP command, the final message will always be + * NLMSG_DONE, even if an error occurs. However, this NLMSG_DONE message + * contains an integer error code. It is either zero or a negative error + * code corresponding to the errno. + * + * WG_CMD_SET_DEVICE + * ----------------- + * + * May only be called via NLM_F_REQUEST. The command should contain the + * following tree of nested items, containing one but not both of + * WGDEVICE_A_IFINDEX and WGDEVICE_A_IFNAME: + * + * WGDEVICE_A_IFINDEX: NLA_U32 + * WGDEVICE_A_IFNAME: NLA_NUL_STRING, maxlen IFNAMSIZ - 1 + * WGDEVICE_A_FLAGS: NLA_U32, 0 or WGDEVICE_F_REPLACE_PEERS if all current + * peers should be removed prior to adding the list below. + * WGDEVICE_A_PRIVATE_KEY: len WG_KEY_LEN, all zeros to remove + * WGDEVICE_A_LISTEN_PORT: NLA_U16, 0 to choose randomly + * WGDEVICE_A_FWMARK: NLA_U32, 0 to disable + * WGDEVICE_A_PEERS: NLA_NESTED + * 0: NLA_NESTED + * WGPEER_A_PUBLIC_KEY: len WG_KEY_LEN + * WGPEER_A_FLAGS: NLA_U32, 0 and/or WGPEER_F_REMOVE_ME if the + * specified peer should not exist at the end of the + * operation, rather than added/updated and/or + * WGPEER_F_REPLACE_ALLOWEDIPS if all current allowed + * IPs of this peer should be removed prior to adding + * the list below and/or WGPEER_F_UPDATE_ONLY if the + * peer should only be set if it already exists.
+ * WGPEER_A_PRESHARED_KEY: len WG_KEY_LEN, all zeros to remove + * WGPEER_A_ENDPOINT: struct sockaddr_in or struct sockaddr_in6 + * WGPEER_A_PERSISTENT_KEEPALIVE_INTERVAL: NLA_U16, 0 to disable + * WGPEER_A_ALLOWEDIPS: NLA_NESTED + * 0: NLA_NESTED + * WGALLOWEDIP_A_FAMILY: NLA_U16 + * WGALLOWEDIP_A_IPADDR: struct in_addr or struct in6_addr + * WGALLOWEDIP_A_CIDR_MASK: NLA_U8 + * 0: NLA_NESTED + * ... + * 0: NLA_NESTED + * ... + * ... + * WGPEER_A_PROTOCOL_VERSION: NLA_U32, should not be set or used at + * all by most users of this API, as the + * most recent protocol will be used when + * this is unset. Otherwise, must be set + * to 1. + * 0: NLA_NESTED + * ... + * ... + * + * It is possible that the amount of configuration data exceeds the + * maximum message length accepted by the kernel. In that case, several + * messages should be sent one after another, with each successive one + * filling in information not contained in the prior. Note that if + * WGDEVICE_F_REPLACE_PEERS is specified in the first message, it probably + * should not be specified in fragments that come after, so that the list + * of peers is only cleared the first time but appended to afterwards. Likewise for + * peers, if WGPEER_F_REPLACE_ALLOWEDIPS is specified in the first message + * of a peer, it likely should not be specified in subsequent fragments. + * + * If an error occurs, NLMSG_ERROR will reply containing an errno. + */ + +#ifndef _WG_UAPI_WIREGUARD_H +#define _WG_UAPI_WIREGUARD_H + +#define WG_GENL_NAME "wireguard" +#define WG_GENL_VERSION 1 + +#define WG_KEY_LEN 32 + +enum wg_cmd { + WG_CMD_GET_DEVICE, + WG_CMD_SET_DEVICE, + __WG_CMD_MAX +}; +#define WG_CMD_MAX (__WG_CMD_MAX - 1) + +enum wgdevice_flag { + WGDEVICE_F_REPLACE_PEERS = 1U << 0, + __WGDEVICE_F_ALL = WGDEVICE_F_REPLACE_PEERS +}; +enum wgdevice_attribute { + WGDEVICE_A_UNSPEC, + WGDEVICE_A_IFINDEX, + WGDEVICE_A_IFNAME, + WGDEVICE_A_PRIVATE_KEY, + WGDEVICE_A_PUBLIC_KEY, + WGDEVICE_A_FLAGS, + WGDEVICE_A_LISTEN_PORT, + WGDEVICE_A_FWMARK, + WGDEVICE_A_PEERS, + __WGDEVICE_A_LAST +}; +#define WGDEVICE_A_MAX (__WGDEVICE_A_LAST - 1) + +enum wgpeer_flag { + WGPEER_F_REMOVE_ME = 1U << 0, + WGPEER_F_REPLACE_ALLOWEDIPS = 1U << 1, + WGPEER_F_UPDATE_ONLY = 1U << 2, + __WGPEER_F_ALL = WGPEER_F_REMOVE_ME | WGPEER_F_REPLACE_ALLOWEDIPS | + WGPEER_F_UPDATE_ONLY +}; +enum wgpeer_attribute { + WGPEER_A_UNSPEC, + WGPEER_A_PUBLIC_KEY, + WGPEER_A_PRESHARED_KEY, + WGPEER_A_FLAGS, + WGPEER_A_ENDPOINT, + WGPEER_A_PERSISTENT_KEEPALIVE_INTERVAL, + WGPEER_A_LAST_HANDSHAKE_TIME, + WGPEER_A_RX_BYTES, + WGPEER_A_TX_BYTES, + WGPEER_A_ALLOWEDIPS, + WGPEER_A_PROTOCOL_VERSION, + __WGPEER_A_LAST +}; +#define WGPEER_A_MAX (__WGPEER_A_LAST - 1) + +enum wgallowedip_attribute { + WGALLOWEDIP_A_UNSPEC, + WGALLOWEDIP_A_FAMILY, + WGALLOWEDIP_A_IPADDR, + WGALLOWEDIP_A_CIDR_MASK, + __WGALLOWEDIP_A_LAST +}; +#define WGALLOWEDIP_A_MAX (__WGALLOWEDIP_A_LAST - 1) + +#endif /* _WG_UAPI_WIREGUARD_H */ -- cgit v1.2.3-71-gd317 From b50b0580d27bc45a0637aefc8bac4d31aa85771a Mon Sep 17 00:00:00 2001 From: Sabrina Dubroca Date: Mon, 25 Nov 2019 14:48:57 +0100 Subject: net: add queue argument to __skb_wait_for_more_packets and __skb_{,try_}recv_datagram This will be used by ESP over TCP to handle the queue of IKE messages. Signed-off-by: Sabrina Dubroca Acked-by: David S.
Miller Signed-off-by: Steffen Klassert --- include/linux/skbuff.h | 11 ++++++++--- net/core/datagram.c | 27 +++++++++++++++++---------- net/ipv4/udp.c | 3 ++- net/unix/af_unix.c | 7 ++++--- 4 files changed, 31 insertions(+), 17 deletions(-) (limited to 'include') diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index e9133bcf0544..49a10f9cc538 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -3459,7 +3459,8 @@ static inline void skb_frag_list_init(struct sk_buff *skb) for (iter = skb_shinfo(skb)->frag_list; iter; iter = iter->next) -int __skb_wait_for_more_packets(struct sock *sk, int *err, long *timeo_p, +int __skb_wait_for_more_packets(struct sock *sk, struct sk_buff_head *queue, + int *err, long *timeo_p, const struct sk_buff *skb); struct sk_buff *__skb_try_recv_from_queue(struct sock *sk, struct sk_buff_head *queue, @@ -3468,12 +3469,16 @@ struct sk_buff *__skb_try_recv_from_queue(struct sock *sk, struct sk_buff *skb), int *off, int *err, struct sk_buff **last); -struct sk_buff *__skb_try_recv_datagram(struct sock *sk, unsigned flags, +struct sk_buff *__skb_try_recv_datagram(struct sock *sk, + struct sk_buff_head *queue, + unsigned int flags, void (*destructor)(struct sock *sk, struct sk_buff *skb), int *off, int *err, struct sk_buff **last); -struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned flags, +struct sk_buff *__skb_recv_datagram(struct sock *sk, + struct sk_buff_head *sk_queue, + unsigned int flags, void (*destructor)(struct sock *sk, struct sk_buff *skb), int *off, int *err); diff --git a/net/core/datagram.c b/net/core/datagram.c index da3c24ed129c..a78e7f864c1e 100644 --- a/net/core/datagram.c +++ b/net/core/datagram.c @@ -84,7 +84,8 @@ static int receiver_wake_function(wait_queue_entry_t *wait, unsigned int mode, i /* * Wait for the last received packet to be different from skb */ -int __skb_wait_for_more_packets(struct sock *sk, int *err, long *timeo_p, +int __skb_wait_for_more_packets(struct sock *sk, struct sk_buff_head *queue, + int *err, long *timeo_p, const struct sk_buff *skb) { int error; @@ -97,7 +98,7 @@ int __skb_wait_for_more_packets(struct sock *sk, int *err, long *timeo_p, if (error) goto out_err; - if (READ_ONCE(sk->sk_receive_queue.prev) != skb) + if (READ_ONCE(queue->prev) != skb) goto out; /* Socket shut down? */ @@ -209,6 +210,7 @@ struct sk_buff *__skb_try_recv_from_queue(struct sock *sk, /** * __skb_try_recv_datagram - Receive a datagram skbuff * @sk: socket + * @queue: socket queue from which to receive * @flags: MSG\_ flags * @destructor: invoked under the receive lock on successful dequeue * @off: an offset in bytes to peek skb from. Returns an offset @@ -241,13 +243,14 @@ struct sk_buff *__skb_try_recv_from_queue(struct sock *sk, * quite explicitly by POSIX 1003.1g, don't change them without having * the standard around please. 
*/ -struct sk_buff *__skb_try_recv_datagram(struct sock *sk, unsigned int flags, +struct sk_buff *__skb_try_recv_datagram(struct sock *sk, + struct sk_buff_head *queue, + unsigned int flags, void (*destructor)(struct sock *sk, struct sk_buff *skb), int *off, int *err, struct sk_buff **last) { - struct sk_buff_head *queue = &sk->sk_receive_queue; struct sk_buff *skb; unsigned long cpu_flags; /* @@ -278,7 +281,7 @@ struct sk_buff *__skb_try_recv_datagram(struct sock *sk, unsigned int flags, break; sk_busy_loop(sk, flags & MSG_DONTWAIT); - } while (READ_ONCE(sk->sk_receive_queue.prev) != *last); + } while (READ_ONCE(queue->prev) != *last); error = -EAGAIN; @@ -288,7 +291,9 @@ no_packet: } EXPORT_SYMBOL(__skb_try_recv_datagram); -struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned int flags, +struct sk_buff *__skb_recv_datagram(struct sock *sk, + struct sk_buff_head *sk_queue, + unsigned int flags, void (*destructor)(struct sock *sk, struct sk_buff *skb), int *off, int *err) @@ -299,15 +304,16 @@ struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned int flags, timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT); do { - skb = __skb_try_recv_datagram(sk, flags, destructor, off, err, - &last); + skb = __skb_try_recv_datagram(sk, sk_queue, flags, destructor, + off, err, &last); if (skb) return skb; if (*err != -EAGAIN) break; } while (timeo && - !__skb_wait_for_more_packets(sk, err, &timeo, last)); + !__skb_wait_for_more_packets(sk, sk_queue, err, + &timeo, last)); return NULL; } @@ -318,7 +324,8 @@ struct sk_buff *skb_recv_datagram(struct sock *sk, unsigned int flags, { int off = 0; - return __skb_recv_datagram(sk, flags | (noblock ? MSG_DONTWAIT : 0), + return __skb_recv_datagram(sk, &sk->sk_receive_queue, + flags | (noblock ? MSG_DONTWAIT : 0), NULL, &off, err); } EXPORT_SYMBOL(skb_recv_datagram); diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index 4da5758cc718..e5738d1217a1 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -1708,7 +1708,8 @@ busy_check: /* sk_queue is empty, reader_queue may contain peeked packets */ } while (timeo && - !__skb_wait_for_more_packets(sk, &error, &timeo, + !__skb_wait_for_more_packets(sk, &sk->sk_receive_queue, + &error, &timeo, (struct sk_buff *)sk_queue)); *err = error; diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c index 7cfdce10de36..a7f707fc4cac 100644 --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -2058,8 +2058,8 @@ static int unix_dgram_recvmsg(struct socket *sock, struct msghdr *msg, mutex_lock(&u->iolock); skip = sk_peek_offset(sk, flags); - skb = __skb_try_recv_datagram(sk, flags, NULL, &skip, &err, - &last); + skb = __skb_try_recv_datagram(sk, &sk->sk_receive_queue, flags, + NULL, &skip, &err, &last); if (skb) break; @@ -2068,7 +2068,8 @@ static int unix_dgram_recvmsg(struct socket *sock, struct msghdr *msg, if (err != -EAGAIN) break; } while (timeo && - !__skb_wait_for_more_packets(sk, &err, &timeo, last)); + !__skb_wait_for_more_packets(sk, &sk->sk_receive_queue, + &err, &timeo, last)); if (!skb) { /* implies iolock unlocked */ unix_state_lock(sk); -- cgit v1.2.3-71-gd317 From 7b3801927e52f8621de311277f7fc727635019e7 Mon Sep 17 00:00:00 2001 From: Sabrina Dubroca Date: Mon, 25 Nov 2019 14:48:58 +0100 Subject: xfrm: introduce xfrm_trans_queue_net This will be used by TCP encapsulation to write packets to the encap socket without holding the user socket's lock. Without this reinjection, we're already holding the lock of the user socket, and then try to lock the encap socket as well when we enqueue the encrypted packet. 
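As a rough sketch of the intended call pattern (the function names below are hypothetical; the real user of this API is esp_output_tail_tcp() in the espintcp patch later in this series), a caller hands the skb to the reinjection tasklet, and the finish callback then runs in tasklet context without the user socket's lock held:

#include <net/xfrm.h>

/* Hypothetical finish callback: runs later in tasklet context, where
 * the user socket's lock is no longer held, so taking the encap
 * socket's lock cannot deadlock.
 */
static int example_finish(struct net *net, struct sock *sk,
			  struct sk_buff *skb)
{
	return xfrm_input(skb, IPPROTO_ESP, 0, 0);
}

static int example_defer(struct net *net, struct sk_buff *skb)
{
	int err;

	local_bh_disable();
	err = xfrm_trans_queue_net(net, skb, example_finish);
	local_bh_enable();

	return err;	/* 0 if queued, -ENOBUFS if the backlog is full */
}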
While at it, add a BUILD_BUG_ON like we usually do for skb->cb, since it's missing for struct xfrm_trans_cb. Co-developed-by: Herbert Xu Signed-off-by: Herbert Xu Signed-off-by: Sabrina Dubroca Acked-by: David S. Miller Signed-off-by: Steffen Klassert --- include/net/xfrm.h | 3 +++ net/xfrm/xfrm_input.c | 21 +++++++++++++++++---- 2 files changed, 20 insertions(+), 4 deletions(-) (limited to 'include') diff --git a/include/net/xfrm.h b/include/net/xfrm.h index dda3c025452e..56ff86621bb4 100644 --- a/include/net/xfrm.h +++ b/include/net/xfrm.h @@ -1547,6 +1547,9 @@ int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload); int xfrm_init_state(struct xfrm_state *x); int xfrm_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type); int xfrm_input_resume(struct sk_buff *skb, int nexthdr); +int xfrm_trans_queue_net(struct net *net, struct sk_buff *skb, + int (*finish)(struct net *, struct sock *, + struct sk_buff *)); int xfrm_trans_queue(struct sk_buff *skb, int (*finish)(struct net *, struct sock *, struct sk_buff *)); diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c index 2c86a2fc3915..aa35f23c4912 100644 --- a/net/xfrm/xfrm_input.c +++ b/net/xfrm/xfrm_input.c @@ -36,6 +36,7 @@ struct xfrm_trans_cb { #endif } header; int (*finish)(struct net *net, struct sock *sk, struct sk_buff *skb); + struct net *net; }; #define XFRM_TRANS_SKB_CB(__skb) ((struct xfrm_trans_cb *)&((__skb)->cb[0])) @@ -766,12 +767,13 @@ static void xfrm_trans_reinject(unsigned long data) skb_queue_splice_init(&trans->queue, &queue); while ((skb = __skb_dequeue(&queue))) - XFRM_TRANS_SKB_CB(skb)->finish(dev_net(skb->dev), NULL, skb); + XFRM_TRANS_SKB_CB(skb)->finish(XFRM_TRANS_SKB_CB(skb)->net, + NULL, skb); } -int xfrm_trans_queue(struct sk_buff *skb, - int (*finish)(struct net *, struct sock *, - struct sk_buff *)) +int xfrm_trans_queue_net(struct net *net, struct sk_buff *skb, + int (*finish)(struct net *, struct sock *, + struct sk_buff *)) { struct xfrm_trans_tasklet *trans; @@ -780,11 +782,22 @@ int xfrm_trans_queue(struct sk_buff *skb, if (skb_queue_len(&trans->queue) >= netdev_max_backlog) return -ENOBUFS; + BUILD_BUG_ON(sizeof(struct xfrm_trans_cb) > sizeof(skb->cb)); + XFRM_TRANS_SKB_CB(skb)->finish = finish; + XFRM_TRANS_SKB_CB(skb)->net = net; __skb_queue_tail(&trans->queue, skb); tasklet_schedule(&trans->tasklet); return 0; } +EXPORT_SYMBOL(xfrm_trans_queue_net); + +int xfrm_trans_queue(struct sk_buff *skb, + int (*finish)(struct net *, struct sock *, + struct sk_buff *)) +{ + return xfrm_trans_queue_net(dev_net(skb->dev), skb, finish); +} EXPORT_SYMBOL(xfrm_trans_queue); void __init xfrm_input_init(void) -- cgit v1.2.3-71-gd317 From e27cca96cd68fa2c6814c90f9a1cfd36bb68c593 Mon Sep 17 00:00:00 2001 From: Sabrina Dubroca Date: Mon, 25 Nov 2019 14:49:02 +0100 Subject: xfrm: add espintcp (RFC 8229) TCP encapsulation of IKE and IPsec messages (RFC 8229) is implemented as a TCP ULP, overriding in particular the sendmsg and recvmsg operations. A Stream Parser is used to extract messages out of the TCP stream using the first 2 bytes as length marker. Received IKE messages are put on "ike_queue", waiting to be dequeued by the custom recvmsg implementation. Received ESP messages are sent to XFRM, like with UDP encapsulation. Some of this code is taken from the original submission by Herbert Xu. Currently, only IPv4 is supported, like for UDP encapsulation. Co-developed-by: Herbert Xu Signed-off-by: Herbert Xu Signed-off-by: Sabrina Dubroca Acked-by: David S. 
Miller Signed-off-by: Steffen Klassert --- include/net/espintcp.h | 39 ++++ include/net/xfrm.h | 1 + include/uapi/linux/udp.h | 1 + net/ipv4/Kconfig | 11 + net/ipv4/esp4.c | 191 +++++++++++++++++- net/xfrm/Makefile | 1 + net/xfrm/espintcp.c | 509 +++++++++++++++++++++++++++++++++++++++++++++++ net/xfrm/xfrm_policy.c | 7 + net/xfrm/xfrm_state.c | 3 + 9 files changed, 760 insertions(+), 3 deletions(-) create mode 100644 include/net/espintcp.h create mode 100644 net/xfrm/espintcp.c (limited to 'include') diff --git a/include/net/espintcp.h b/include/net/espintcp.h new file mode 100644 index 000000000000..dd7026a00066 --- /dev/null +++ b/include/net/espintcp.h @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _NET_ESPINTCP_H +#define _NET_ESPINTCP_H + +#include +#include + +void __init espintcp_init(void); + +int espintcp_push_skb(struct sock *sk, struct sk_buff *skb); +int espintcp_queue_out(struct sock *sk, struct sk_buff *skb); +bool tcp_is_ulp_esp(struct sock *sk); + +struct espintcp_msg { + struct sk_buff *skb; + struct sk_msg skmsg; + int offset; + int len; +}; + +struct espintcp_ctx { + struct strparser strp; + struct sk_buff_head ike_queue; + struct sk_buff_head out_queue; + struct espintcp_msg partial; + void (*saved_data_ready)(struct sock *sk); + void (*saved_write_space)(struct sock *sk); + struct work_struct work; + bool tx_running; +}; + +static inline struct espintcp_ctx *espintcp_getctx(const struct sock *sk) +{ + struct inet_connection_sock *icsk = inet_csk(sk); + + /* RCU is only needed for diag */ + return (__force void *)icsk->icsk_ulp_data; +} +#endif diff --git a/include/net/xfrm.h b/include/net/xfrm.h index 56ff86621bb4..8f71c111e65a 100644 --- a/include/net/xfrm.h +++ b/include/net/xfrm.h @@ -193,6 +193,7 @@ struct xfrm_state { /* Data for encapsulator */ struct xfrm_encap_tmpl *encap; + struct sock __rcu *encap_sk; /* Data for care-of address */ xfrm_address_t *coaddr; diff --git a/include/uapi/linux/udp.h b/include/uapi/linux/udp.h index 30baccb6c9c4..4828794efcf8 100644 --- a/include/uapi/linux/udp.h +++ b/include/uapi/linux/udp.h @@ -42,5 +42,6 @@ struct udphdr { #define UDP_ENCAP_GTP0 4 /* GSM TS 09.60 */ #define UDP_ENCAP_GTP1U 5 /* 3GPP TS 29.060 */ #define UDP_ENCAP_RXRPC 6 +#define TCP_ENCAP_ESPINTCP 7 /* Yikes, this is really xfrm encap types. */ #endif /* _UAPI_LINUX_UDP_H */ diff --git a/net/ipv4/Kconfig b/net/ipv4/Kconfig index fc816b187170..f96bd489b362 100644 --- a/net/ipv4/Kconfig +++ b/net/ipv4/Kconfig @@ -378,6 +378,17 @@ config INET_ESP_OFFLOAD If unsure, say N. +config INET_ESPINTCP + bool "IP: ESP in TCP encapsulation (RFC 8229)" + depends on XFRM && INET_ESP + select STREAM_PARSER + select NET_SOCK_MSG + help + Support for RFC 8229 encapsulation of ESP and IKE over + TCP/IPv4 sockets. + + If unsure, say N. 
+ config INET_IPCOMP tristate "IP: IPComp transformation" select INET_XFRM_TUNNEL diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c index 033c61d27148..103c7d599a3c 100644 --- a/net/ipv4/esp4.c +++ b/net/ipv4/esp4.c @@ -18,6 +18,8 @@ #include #include #include +#include +#include #include @@ -117,6 +119,132 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp) put_page(sg_page(sg)); } +#ifdef CONFIG_INET_ESPINTCP +struct esp_tcp_sk { + struct sock *sk; + struct rcu_head rcu; +}; + +static void esp_free_tcp_sk(struct rcu_head *head) +{ + struct esp_tcp_sk *esk = container_of(head, struct esp_tcp_sk, rcu); + + sock_put(esk->sk); + kfree(esk); +} + +static struct sock *esp_find_tcp_sk(struct xfrm_state *x) +{ + struct xfrm_encap_tmpl *encap = x->encap; + struct esp_tcp_sk *esk; + __be16 sport, dport; + struct sock *nsk; + struct sock *sk; + + sk = rcu_dereference(x->encap_sk); + if (sk && sk->sk_state == TCP_ESTABLISHED) + return sk; + + spin_lock_bh(&x->lock); + sport = encap->encap_sport; + dport = encap->encap_dport; + nsk = rcu_dereference_protected(x->encap_sk, + lockdep_is_held(&x->lock)); + if (sk && sk == nsk) { + esk = kmalloc(sizeof(*esk), GFP_ATOMIC); + if (!esk) { + spin_unlock_bh(&x->lock); + return ERR_PTR(-ENOMEM); + } + RCU_INIT_POINTER(x->encap_sk, NULL); + esk->sk = sk; + call_rcu(&esk->rcu, esp_free_tcp_sk); + } + spin_unlock_bh(&x->lock); + + sk = inet_lookup_established(xs_net(x), &tcp_hashinfo, x->id.daddr.a4, + dport, x->props.saddr.a4, sport, 0); + if (!sk) + return ERR_PTR(-ENOENT); + + if (!tcp_is_ulp_esp(sk)) { + sock_put(sk); + return ERR_PTR(-EINVAL); + } + + spin_lock_bh(&x->lock); + nsk = rcu_dereference_protected(x->encap_sk, + lockdep_is_held(&x->lock)); + if (encap->encap_sport != sport || + encap->encap_dport != dport) { + sock_put(sk); + sk = nsk ?: ERR_PTR(-EREMCHG); + } else if (sk == nsk) { + sock_put(sk); + } else { + rcu_assign_pointer(x->encap_sk, sk); + } + spin_unlock_bh(&x->lock); + + return sk; +} + +static int esp_output_tcp_finish(struct xfrm_state *x, struct sk_buff *skb) +{ + struct sock *sk; + int err; + + rcu_read_lock(); + + sk = esp_find_tcp_sk(x); + err = PTR_ERR_OR_ZERO(sk); + if (err) + goto out; + + bh_lock_sock(sk); + if (sock_owned_by_user(sk)) + err = espintcp_queue_out(sk, skb); + else + err = espintcp_push_skb(sk, skb); + bh_unlock_sock(sk); + +out: + rcu_read_unlock(); + return err; +} + +static int esp_output_tcp_encap_cb(struct net *net, struct sock *sk, + struct sk_buff *skb) +{ + struct dst_entry *dst = skb_dst(skb); + struct xfrm_state *x = dst->xfrm; + + return esp_output_tcp_finish(x, skb); +} + +static int esp_output_tail_tcp(struct xfrm_state *x, struct sk_buff *skb) +{ + int err; + + local_bh_disable(); + err = xfrm_trans_queue_net(xs_net(x), skb, esp_output_tcp_encap_cb); + local_bh_enable(); + + /* EINPROGRESS just happens to do the right thing. It + * actually means that the skb has been consumed and + * isn't coming back. 
+ */ + return err ?: -EINPROGRESS; +} +#else +static int esp_output_tail_tcp(struct xfrm_state *x, struct sk_buff *skb) +{ + kfree_skb(skb); + + return -EOPNOTSUPP; +} +#endif + static void esp_output_done(struct crypto_async_request *base, int err) { struct sk_buff *skb = base->data; @@ -147,7 +275,11 @@ static void esp_output_done(struct crypto_async_request *base, int err) secpath_reset(skb); xfrm_dev_resume(skb); } else { - xfrm_output_resume(skb, err); + if (!err && + x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP) + esp_output_tail_tcp(x, skb); + else + xfrm_output_resume(skb, err); } } @@ -236,7 +368,7 @@ static struct ip_esp_hdr *esp_output_udp_encap(struct sk_buff *skb, unsigned int len; len = skb->len + esp->tailen - skb_transport_offset(skb); - if (len + sizeof(struct iphdr) >= IP_MAX_MTU) + if (len + sizeof(struct iphdr) > IP_MAX_MTU) return ERR_PTR(-EMSGSIZE); uh = (struct udphdr *)esp->esph; @@ -256,6 +388,41 @@ static struct ip_esp_hdr *esp_output_udp_encap(struct sk_buff *skb, return (struct ip_esp_hdr *)(uh + 1); } +#ifdef CONFIG_INET_ESPINTCP +static struct ip_esp_hdr *esp_output_tcp_encap(struct xfrm_state *x, + struct sk_buff *skb, + struct esp_info *esp) +{ + __be16 *lenp = (void *)esp->esph; + struct ip_esp_hdr *esph; + unsigned int len; + struct sock *sk; + + len = skb->len + esp->tailen - skb_transport_offset(skb); + if (len > IP_MAX_MTU) + return ERR_PTR(-EMSGSIZE); + + rcu_read_lock(); + sk = esp_find_tcp_sk(x); + rcu_read_unlock(); + + if (IS_ERR(sk)) + return ERR_CAST(sk); + + *lenp = htons(len); + esph = (struct ip_esp_hdr *)(lenp + 1); + + return esph; +} +#else +static struct ip_esp_hdr *esp_output_tcp_encap(struct xfrm_state *x, + struct sk_buff *skb, + struct esp_info *esp) +{ + return ERR_PTR(-EOPNOTSUPP); +} +#endif + static int esp_output_encap(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp) { @@ -276,6 +443,9 @@ static int esp_output_encap(struct xfrm_state *x, struct sk_buff *skb, case UDP_ENCAP_ESPINUDP_NON_IKE: esph = esp_output_udp_encap(skb, encap_type, esp, sport, dport); break; + case TCP_ENCAP_ESPINTCP: + esph = esp_output_tcp_encap(x, skb, esp); + break; } if (IS_ERR(esph)) @@ -296,7 +466,7 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info * struct sk_buff *trailer; int tailen = esp->tailen; - /* this is non-NULL only with UDP Encapsulation */ + /* this is non-NULL only with TCP/UDP Encapsulation */ if (x->encap) { int err = esp_output_encap(x, skb, esp); @@ -491,6 +661,9 @@ int esp_output_tail(struct xfrm_state *x, struct sk_buff *skb, struct esp_info * if (sg != dsg) esp_ssg_unref(x, tmp); + if (!err && x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP) + err = esp_output_tail_tcp(x, skb); + error_free: kfree(tmp); error: @@ -617,10 +790,14 @@ int esp_input_done2(struct sk_buff *skb, int err) if (x->encap) { struct xfrm_encap_tmpl *encap = x->encap; + struct tcphdr *th = (void *)(skb_network_header(skb) + ihl); struct udphdr *uh = (void *)(skb_network_header(skb) + ihl); __be16 source; switch (x->encap->encap_type) { + case TCP_ENCAP_ESPINTCP: + source = th->source; + break; case UDP_ENCAP_ESPINUDP: case UDP_ENCAP_ESPINUDP_NON_IKE: source = uh->source; @@ -1017,6 +1194,14 @@ static int esp_init_state(struct xfrm_state *x) case UDP_ENCAP_ESPINUDP_NON_IKE: x->props.header_len += sizeof(struct udphdr) + 2 * sizeof(u32); break; +#ifdef CONFIG_INET_ESPINTCP + case TCP_ENCAP_ESPINTCP: + /* only the length field, TCP encap is done by + * the socket + */ + x->props.header_len += 2; + 
break; +#endif } } diff --git a/net/xfrm/Makefile b/net/xfrm/Makefile index fbc4552d17b8..212a4fcb4a88 100644 --- a/net/xfrm/Makefile +++ b/net/xfrm/Makefile @@ -11,3 +11,4 @@ obj-$(CONFIG_XFRM_ALGO) += xfrm_algo.o obj-$(CONFIG_XFRM_USER) += xfrm_user.o obj-$(CONFIG_XFRM_IPCOMP) += xfrm_ipcomp.o obj-$(CONFIG_XFRM_INTERFACE) += xfrm_interface.o +obj-$(CONFIG_INET_ESPINTCP) += espintcp.o diff --git a/net/xfrm/espintcp.c b/net/xfrm/espintcp.c new file mode 100644 index 000000000000..f15d6a564b0e --- /dev/null +++ b/net/xfrm/espintcp.c @@ -0,0 +1,509 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include +#include +#include +#include +#include + +static void handle_nonesp(struct espintcp_ctx *ctx, struct sk_buff *skb, + struct sock *sk) +{ + if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf || + !sk_rmem_schedule(sk, skb, skb->truesize)) { + kfree_skb(skb); + return; + } + + skb_set_owner_r(skb, sk); + + memset(skb->cb, 0, sizeof(skb->cb)); + skb_queue_tail(&ctx->ike_queue, skb); + ctx->saved_data_ready(sk); +} + +static void handle_esp(struct sk_buff *skb, struct sock *sk) +{ + skb_reset_transport_header(skb); + memset(skb->cb, 0, sizeof(skb->cb)); + + rcu_read_lock(); + skb->dev = dev_get_by_index_rcu(sock_net(sk), skb->skb_iif); + local_bh_disable(); + xfrm4_rcv_encap(skb, IPPROTO_ESP, 0, TCP_ENCAP_ESPINTCP); + local_bh_enable(); + rcu_read_unlock(); +} + +static void espintcp_rcv(struct strparser *strp, struct sk_buff *skb) +{ + struct espintcp_ctx *ctx = container_of(strp, struct espintcp_ctx, + strp); + struct strp_msg *rxm = strp_msg(skb); + u32 nonesp_marker; + int err; + + err = skb_copy_bits(skb, rxm->offset + 2, &nonesp_marker, + sizeof(nonesp_marker)); + if (err < 0) { + kfree_skb(skb); + return; + } + + /* remove header, leave non-ESP marker/SPI */ + if (!__pskb_pull(skb, rxm->offset + 2)) { + kfree_skb(skb); + return; + } + + if (pskb_trim(skb, rxm->full_len - 2) != 0) { + kfree_skb(skb); + return; + } + + if (nonesp_marker == 0) + handle_nonesp(ctx, skb, strp->sk); + else + handle_esp(skb, strp->sk); +} + +static int espintcp_parse(struct strparser *strp, struct sk_buff *skb) +{ + struct strp_msg *rxm = strp_msg(skb); + __be16 blen; + u16 len; + int err; + + if (skb->len < rxm->offset + 2) + return 0; + + err = skb_copy_bits(skb, rxm->offset, &blen, sizeof(blen)); + if (err < 0) + return err; + + len = be16_to_cpu(blen); + if (len < 6) + return -EINVAL; + + return len; +} + +static int espintcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, + int nonblock, int flags, int *addr_len) +{ + struct espintcp_ctx *ctx = espintcp_getctx(sk); + struct sk_buff *skb; + int err = 0; + int copied; + int off = 0; + + flags |= nonblock ? 
MSG_DONTWAIT : 0; + + skb = __skb_recv_datagram(sk, &ctx->ike_queue, flags, NULL, &off, &err); + if (!skb) + return err; + + copied = len; + if (copied > skb->len) + copied = skb->len; + else if (copied < skb->len) + msg->msg_flags |= MSG_TRUNC; + + err = skb_copy_datagram_msg(skb, 0, msg, copied); + if (unlikely(err)) { + kfree_skb(skb); + return err; + } + + if (flags & MSG_TRUNC) + copied = skb->len; + kfree_skb(skb); + return copied; +} + +int espintcp_queue_out(struct sock *sk, struct sk_buff *skb) +{ + struct espintcp_ctx *ctx = espintcp_getctx(sk); + + if (skb_queue_len(&ctx->out_queue) >= netdev_max_backlog) + return -ENOBUFS; + + __skb_queue_tail(&ctx->out_queue, skb); + + return 0; +} +EXPORT_SYMBOL_GPL(espintcp_queue_out); + +/* espintcp length field is 2B and length includes the length field's size */ +#define MAX_ESPINTCP_MSG (((1 << 16) - 1) - 2) + +static int espintcp_sendskb_locked(struct sock *sk, struct espintcp_msg *emsg, + int flags) +{ + do { + int ret; + + ret = skb_send_sock_locked(sk, emsg->skb, + emsg->offset, emsg->len); + if (ret < 0) + return ret; + + emsg->len -= ret; + emsg->offset += ret; + } while (emsg->len > 0); + + kfree_skb(emsg->skb); + memset(emsg, 0, sizeof(*emsg)); + + return 0; +} + +static int espintcp_sendskmsg_locked(struct sock *sk, + struct espintcp_msg *emsg, int flags) +{ + struct sk_msg *skmsg = &emsg->skmsg; + struct scatterlist *sg; + int done = 0; + int ret; + + flags |= MSG_SENDPAGE_NOTLAST; + sg = &skmsg->sg.data[skmsg->sg.start]; + do { + size_t size = sg->length - emsg->offset; + int offset = sg->offset + emsg->offset; + struct page *p; + + emsg->offset = 0; + + if (sg_is_last(sg)) + flags &= ~MSG_SENDPAGE_NOTLAST; + + p = sg_page(sg); +retry: + ret = do_tcp_sendpages(sk, p, offset, size, flags); + if (ret < 0) { + emsg->offset = offset - sg->offset; + skmsg->sg.start += done; + return ret; + } + + if (ret != size) { + offset += ret; + size -= ret; + goto retry; + } + + done++; + put_page(p); + sk_mem_uncharge(sk, sg->length); + sg = sg_next(sg); + } while (sg); + + memset(emsg, 0, sizeof(*emsg)); + + return 0; +} + +static int espintcp_push_msgs(struct sock *sk) +{ + struct espintcp_ctx *ctx = espintcp_getctx(sk); + struct espintcp_msg *emsg = &ctx->partial; + int err; + + if (!emsg->len) + return 0; + + if (ctx->tx_running) + return -EAGAIN; + ctx->tx_running = 1; + + if (emsg->skb) + err = espintcp_sendskb_locked(sk, emsg, 0); + else + err = espintcp_sendskmsg_locked(sk, emsg, 0); + if (err == -EAGAIN) { + ctx->tx_running = 0; + return 0; + } + if (!err) + memset(emsg, 0, sizeof(*emsg)); + + ctx->tx_running = 0; + + return err; +} + +int espintcp_push_skb(struct sock *sk, struct sk_buff *skb) +{ + struct espintcp_ctx *ctx = espintcp_getctx(sk); + struct espintcp_msg *emsg = &ctx->partial; + unsigned int len; + int offset; + + if (sk->sk_state != TCP_ESTABLISHED) { + kfree_skb(skb); + return -ECONNRESET; + } + + offset = skb_transport_offset(skb); + len = skb->len - offset; + + espintcp_push_msgs(sk); + + if (emsg->len) { + kfree_skb(skb); + return -ENOBUFS; + } + + skb_set_owner_w(skb, sk); + + emsg->offset = offset; + emsg->len = len; + emsg->skb = skb; + + espintcp_push_msgs(sk); + + return 0; +} +EXPORT_SYMBOL_GPL(espintcp_push_skb); + +static int espintcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size) +{ + long timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT); + struct espintcp_ctx *ctx = espintcp_getctx(sk); + struct espintcp_msg *emsg = &ctx->partial; + struct iov_iter pfx_iter; + struct kvec pfx_iov = {}; 
+ size_t msglen = size + 2; + char buf[2] = {0}; + int err, end; + + if (msg->msg_flags) + return -EOPNOTSUPP; + + if (size > MAX_ESPINTCP_MSG) + return -EMSGSIZE; + + if (msg->msg_controllen) + return -EOPNOTSUPP; + + lock_sock(sk); + + err = espintcp_push_msgs(sk); + if (err < 0) { + err = -ENOBUFS; + goto unlock; + } + + sk_msg_init(&emsg->skmsg); + while (1) { + /* only -ENOMEM is possible since we don't coalesce */ + err = sk_msg_alloc(sk, &emsg->skmsg, msglen, 0); + if (!err) + break; + + err = sk_stream_wait_memory(sk, &timeo); + if (err) + goto fail; + } + + *((__be16 *)buf) = cpu_to_be16(msglen); + pfx_iov.iov_base = buf; + pfx_iov.iov_len = sizeof(buf); + iov_iter_kvec(&pfx_iter, WRITE, &pfx_iov, 1, pfx_iov.iov_len); + + err = sk_msg_memcopy_from_iter(sk, &pfx_iter, &emsg->skmsg, + pfx_iov.iov_len); + if (err < 0) + goto fail; + + err = sk_msg_memcopy_from_iter(sk, &msg->msg_iter, &emsg->skmsg, size); + if (err < 0) + goto fail; + + end = emsg->skmsg.sg.end; + emsg->len = size; + sk_msg_iter_var_prev(end); + sg_mark_end(sk_msg_elem(&emsg->skmsg, end)); + + tcp_rate_check_app_limited(sk); + + err = espintcp_push_msgs(sk); + /* this message could be partially sent, keep it */ + if (err < 0) + goto unlock; + release_sock(sk); + + return size; + +fail: + sk_msg_free(sk, &emsg->skmsg); + memset(emsg, 0, sizeof(*emsg)); +unlock: + release_sock(sk); + return err; +} + +static struct proto espintcp_prot __ro_after_init; +static struct proto_ops espintcp_ops __ro_after_init; + +static void espintcp_data_ready(struct sock *sk) +{ + struct espintcp_ctx *ctx = espintcp_getctx(sk); + + strp_data_ready(&ctx->strp); +} + +static void espintcp_tx_work(struct work_struct *work) +{ + struct espintcp_ctx *ctx = container_of(work, + struct espintcp_ctx, work); + struct sock *sk = ctx->strp.sk; + + lock_sock(sk); + if (!ctx->tx_running) + espintcp_push_msgs(sk); + release_sock(sk); +} + +static void espintcp_write_space(struct sock *sk) +{ + struct espintcp_ctx *ctx = espintcp_getctx(sk); + + schedule_work(&ctx->work); + ctx->saved_write_space(sk); +} + +static void espintcp_destruct(struct sock *sk) +{ + struct espintcp_ctx *ctx = espintcp_getctx(sk); + + kfree(ctx); +} + +bool tcp_is_ulp_esp(struct sock *sk) +{ + return sk->sk_prot == &espintcp_prot; +} +EXPORT_SYMBOL_GPL(tcp_is_ulp_esp); + +static int espintcp_init_sk(struct sock *sk) +{ + struct inet_connection_sock *icsk = inet_csk(sk); + struct strp_callbacks cb = { + .rcv_msg = espintcp_rcv, + .parse_msg = espintcp_parse, + }; + struct espintcp_ctx *ctx; + int err; + + /* sockmap is not compatible with espintcp */ + if (sk->sk_user_data) + return -EBUSY; + + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + err = strp_init(&ctx->strp, sk, &cb); + if (err) + goto free; + + __sk_dst_reset(sk); + + strp_check_rcv(&ctx->strp); + skb_queue_head_init(&ctx->ike_queue); + skb_queue_head_init(&ctx->out_queue); + sk->sk_prot = &espintcp_prot; + sk->sk_socket->ops = &espintcp_ops; + ctx->saved_data_ready = sk->sk_data_ready; + ctx->saved_write_space = sk->sk_write_space; + sk->sk_data_ready = espintcp_data_ready; + sk->sk_write_space = espintcp_write_space; + sk->sk_destruct = espintcp_destruct; + rcu_assign_pointer(icsk->icsk_ulp_data, ctx); + INIT_WORK(&ctx->work, espintcp_tx_work); + + /* avoid using task_frag */ + sk->sk_allocation = GFP_ATOMIC; + + return 0; + +free: + kfree(ctx); + return err; +} + +static void espintcp_release(struct sock *sk) +{ + struct espintcp_ctx *ctx = espintcp_getctx(sk); + struct sk_buff_head 
queue; + struct sk_buff *skb; + + __skb_queue_head_init(&queue); + skb_queue_splice_init(&ctx->out_queue, &queue); + + while ((skb = __skb_dequeue(&queue))) + espintcp_push_skb(sk, skb); + + tcp_release_cb(sk); +} + +static void espintcp_close(struct sock *sk, long timeout) +{ + struct espintcp_ctx *ctx = espintcp_getctx(sk); + struct espintcp_msg *emsg = &ctx->partial; + + strp_stop(&ctx->strp); + + sk->sk_prot = &tcp_prot; + barrier(); + + cancel_work_sync(&ctx->work); + strp_done(&ctx->strp); + + skb_queue_purge(&ctx->out_queue); + skb_queue_purge(&ctx->ike_queue); + + if (emsg->len) { + if (emsg->skb) + kfree_skb(emsg->skb); + else + sk_msg_free(sk, &emsg->skmsg); + } + + tcp_close(sk, timeout); +} + +static __poll_t espintcp_poll(struct file *file, struct socket *sock, + poll_table *wait) +{ + __poll_t mask = datagram_poll(file, sock, wait); + struct sock *sk = sock->sk; + struct espintcp_ctx *ctx = espintcp_getctx(sk); + + if (!skb_queue_empty(&ctx->ike_queue)) + mask |= EPOLLIN | EPOLLRDNORM; + + return mask; +} + +static struct tcp_ulp_ops espintcp_ulp __read_mostly = { + .name = "espintcp", + .owner = THIS_MODULE, + .init = espintcp_init_sk, +}; + +void __init espintcp_init(void) +{ + memcpy(&espintcp_prot, &tcp_prot, sizeof(tcp_prot)); + memcpy(&espintcp_ops, &inet_stream_ops, sizeof(inet_stream_ops)); + espintcp_prot.sendmsg = espintcp_sendmsg; + espintcp_prot.recvmsg = espintcp_recvmsg; + espintcp_prot.close = espintcp_close; + espintcp_prot.release_cb = espintcp_release; + espintcp_ops.poll = espintcp_poll; + + tcp_register_ulp(&espintcp_ulp); +} diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c index f2d1e573ea55..297d1eb79e5c 100644 --- a/net/xfrm/xfrm_policy.c +++ b/net/xfrm/xfrm_policy.c @@ -39,6 +39,9 @@ #ifdef CONFIG_XFRM_STATISTICS #include #endif +#ifdef CONFIG_INET_ESPINTCP +#include +#endif #include "xfrm_hash.h" @@ -4157,6 +4160,10 @@ void __init xfrm_init(void) seqcount_init(&xfrm_policy_hash_generation); xfrm_input_init(); +#ifdef CONFIG_INET_ESPINTCP + espintcp_init(); +#endif + RCU_INIT_POINTER(xfrm_if_cb, NULL); synchronize_rcu(); } diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c index f3423562d933..170d6e7f31d3 100644 --- a/net/xfrm/xfrm_state.c +++ b/net/xfrm/xfrm_state.c @@ -670,6 +670,9 @@ int __xfrm_state_delete(struct xfrm_state *x) net->xfrm.state_num--; spin_unlock(&net->xfrm.xfrm_state_lock); + if (x->encap_sk) + sock_put(rcu_dereference_raw(x->encap_sk)); + xfrm_dev_state_delete(x); /* All xfrm_state objects are created by xfrm_state_alloc. -- cgit v1.2.3-71-gd317 From 65e6d90168f3593df0ae598502bcbf20d78ff0fb Mon Sep 17 00:00:00 2001 From: "Kevin(Yudong) Yang" Date: Mon, 9 Dec 2019 14:19:59 -0500 Subject: net-tcp: Disable TCP ssthresh metrics cache by default This patch introduces a sysctl knob "net.ipv4.tcp_no_ssthresh_metrics_save" that disables TCP ssthresh metrics cache by default. Other parts of TCP metrics cache, e.g. rtt, cwnd, remain unchanged. As modern networks become more and more dynamic, the TCP metrics cache today often causes more harm than benefit. For example, the same IP address is often shared by different subscribers behind NAT in residential networks. Even if the IP address is not shared by different users, caching the slow-start threshold of a previous short flow using loss-based congestion control (e.g. cubic) often causes the future longer flows of the same network path to exit slow-start prematurely with abysmal throughput. Caching ssthresh is very risky and can lead to terrible performance.
Therefore it makes sense to disable ssthresh caching by default, and let administrators opt in for specific networks. This practice has also worked well for several years of deployment with CUBIC congestion control at Google. Acked-by: Eric Dumazet Acked-by: Neal Cardwell Acked-by: Yuchung Cheng Signed-off-by: Kevin(Yudong) Yang Signed-off-by: David S. Miller --- Documentation/networking/ip-sysctl.txt | 4 ++++ include/net/netns/ipv4.h | 1 + net/ipv4/sysctl_net_ipv4.c | 9 +++++++++ net/ipv4/tcp_ipv4.c | 1 + net/ipv4/tcp_metrics.c | 13 +++++++++---- 5 files changed, 24 insertions(+), 4 deletions(-) (limited to 'include') diff --git a/Documentation/networking/ip-sysctl.txt b/Documentation/networking/ip-sysctl.txt index fd26788e8c96..90b8f14bc41a 100644 --- a/Documentation/networking/ip-sysctl.txt +++ b/Documentation/networking/ip-sysctl.txt @@ -479,6 +479,10 @@ tcp_no_metrics_save - BOOLEAN degradation. If set, TCP will not cache metrics on closing connections. +tcp_no_ssthresh_metrics_save - BOOLEAN + Controls whether TCP saves ssthresh metrics in the route cache. + Default is 1, which disables ssthresh metrics. + tcp_orphan_retries - INTEGER This value influences the timeout of a locally closed TCP connection, when RTO retransmissions remain unacknowledged. diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h index c0c0791b1912..08b98414d94e 100644 --- a/include/net/netns/ipv4.h +++ b/include/net/netns/ipv4.h @@ -154,6 +154,7 @@ struct netns_ipv4 { int sysctl_tcp_adv_win_scale; int sysctl_tcp_frto; int sysctl_tcp_nometrics_save; + int sysctl_tcp_no_ssthresh_metrics_save; int sysctl_tcp_moderate_rcvbuf; int sysctl_tcp_tso_win_divisor; int sysctl_tcp_workaround_signed_windows; diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c index fcb2cd167f64..9684af02e0a5 100644 --- a/net/ipv4/sysctl_net_ipv4.c +++ b/net/ipv4/sysctl_net_ipv4.c @@ -1192,6 +1192,15 @@ static struct ctl_table ipv4_net_table[] = { .mode = 0644, .proc_handler = proc_dointvec, }, + { + .procname = "tcp_no_ssthresh_metrics_save", + .data = &init_net.ipv4.sysctl_tcp_no_ssthresh_metrics_save, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = proc_dointvec_minmax, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, + }, { .procname = "tcp_moderate_rcvbuf", .data = &init_net.ipv4.sysctl_tcp_moderate_rcvbuf, diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 92282f98dc82..26637fce324d 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -2674,6 +2674,7 @@ static int __net_init tcp_sk_init(struct net *net) net->ipv4.sysctl_tcp_fin_timeout = TCP_FIN_TIMEOUT; net->ipv4.sysctl_tcp_notsent_lowat = UINT_MAX; net->ipv4.sysctl_tcp_tw_reuse = 2; + net->ipv4.sysctl_tcp_no_ssthresh_metrics_save = 1; cnt = tcp_hashinfo.ehash_mask + 1; net->ipv4.tcp_death_row.sysctl_max_tw_buckets = cnt / 2; diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c index c4848e7a0aad..279db8822439 100644 --- a/net/ipv4/tcp_metrics.c +++ b/net/ipv4/tcp_metrics.c @@ -385,7 +385,8 @@ void tcp_update_metrics(struct sock *sk) if (tcp_in_initial_slowstart(tp)) { /* Slow start still did not finish.
*/ - if (!tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) { + if (!net->ipv4.sysctl_tcp_no_ssthresh_metrics_save && + !tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) { val = tcp_metric_get(tm, TCP_METRIC_SSTHRESH); if (val && (tp->snd_cwnd >> 1) > val) tcp_metric_set(tm, TCP_METRIC_SSTHRESH, @@ -400,7 +401,8 @@ void tcp_update_metrics(struct sock *sk) } else if (!tcp_in_slow_start(tp) && icsk->icsk_ca_state == TCP_CA_Open) { /* Cong. avoidance phase, cwnd is reliable. */ - if (!tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) + if (!net->ipv4.sysctl_tcp_no_ssthresh_metrics_save && + !tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) tcp_metric_set(tm, TCP_METRIC_SSTHRESH, max(tp->snd_cwnd >> 1, tp->snd_ssthresh)); if (!tcp_metric_locked(tm, TCP_METRIC_CWND)) { @@ -416,7 +418,8 @@ void tcp_update_metrics(struct sock *sk) tcp_metric_set(tm, TCP_METRIC_CWND, (val + tp->snd_ssthresh) >> 1); } - if (!tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) { + if (!net->ipv4.sysctl_tcp_no_ssthresh_metrics_save && + !tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) { val = tcp_metric_get(tm, TCP_METRIC_SSTHRESH); if (val && tp->snd_ssthresh > val) tcp_metric_set(tm, TCP_METRIC_SSTHRESH, @@ -441,6 +444,7 @@ void tcp_init_metrics(struct sock *sk) { struct dst_entry *dst = __sk_dst_get(sk); struct tcp_sock *tp = tcp_sk(sk); + struct net *net = sock_net(sk); struct tcp_metrics_block *tm; u32 val, crtt = 0; /* cached RTT scaled by 8 */ @@ -458,7 +462,8 @@ void tcp_init_metrics(struct sock *sk) if (tcp_metric_locked(tm, TCP_METRIC_CWND)) tp->snd_cwnd_clamp = tcp_metric_get(tm, TCP_METRIC_CWND); - val = tcp_metric_get(tm, TCP_METRIC_SSTHRESH); + val = net->ipv4.sysctl_tcp_no_ssthresh_metrics_save ? + 0 : tcp_metric_get(tm, TCP_METRIC_SSTHRESH); if (val) { tp->snd_ssthresh = val; if (tp->snd_ssthresh > tp->snd_cwnd_clamp) -- cgit v1.2.3-71-gd317 From bae141f54be83b06652c1d47e50e4e75ed4e9c7e Mon Sep 17 00:00:00 2001 From: Daniel Borkmann Date: Fri, 6 Dec 2019 22:49:34 +0100 Subject: bpf: Emit audit messages upon successful prog load and unload Allow audit messages to be emitted upon BPF program load and unload, in order to have a timeline of events. The load itself is in syscall context, so additional info about the process initiating the BPF prog creation can be logged and later directly correlated to the unload event. The only info really needed from the BPF side is the globally unique prog ID; audit userspace tooling can then query/dump all info needed about the specific BPF program right upon the load event and enrich the record, so the changes needed here can be kept small and non-intrusive to the core. Raw example output: # auditctl -D # auditctl -a always,exit -F arch=x86_64 -S bpf # ausearch --start recent -m 1334 ... ---- time->Wed Nov 27 16:04:13 2019 type=PROCTITLE msg=audit(1574867053.120:84664): proctitle="./bpf" type=SYSCALL msg=audit(1574867053.120:84664): arch=c000003e syscall=321 \ success=yes exit=3 a0=5 a1=7ffea484fbe0 a2=70 a3=0 items=0 ppid=7477 \ pid=12698 auid=1001 uid=1001 gid=1001 euid=1001 suid=1001 fsuid=1001 \ egid=1001 sgid=1001 fsgid=1001 tty=pts2 ses=4 comm="bpf" \ exe="/home/jolsa/auditd/audit-testsuite/tests/bpf/bpf" \ subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null) type=UNKNOWN[1334] msg=audit(1574867053.120:84664): prog-id=76 op=LOAD ---- time->Wed Nov 27 16:04:13 2019 type=UNKNOWN[1334] msg=audit(1574867053.120:84665): prog-id=76 op=UNLOAD ...
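As an illustrative sketch of that enrichment step (assuming libbpf is available; this snippet is not part of the patch), a tool could resolve the prog ID from such a record into the full program info:

#include <stdio.h>
#include <unistd.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>

/* Look up a program by the "prog-id=N" value from the audit record. */
static int dump_prog_info(__u32 prog_id)
{
	struct bpf_prog_info info = {};
	__u32 info_len = sizeof(info);
	int err, fd;

	fd = bpf_prog_get_fd_by_id(prog_id);
	if (fd < 0)
		return fd;

	err = bpf_obj_get_info_by_fd(fd, &info, &info_len);
	if (!err)
		printf("prog-id=%u type=%u name=%s\n",
		       info.id, info.type, info.name);

	close(fd);
	return err;
}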
Signed-off-by: Daniel Borkmann Co-developed-by: Jiri Olsa Signed-off-by: Jiri Olsa Acked-by: Paul Moore Link: https://lore.kernel.org/bpf/20191206214934.11319-1-jolsa@kernel.org --- include/uapi/linux/audit.h | 1 + kernel/bpf/syscall.c | 33 +++++++++++++++++++++++++++++++++ 2 files changed, 34 insertions(+) (limited to 'include') diff --git a/include/uapi/linux/audit.h b/include/uapi/linux/audit.h index 3ad935527177..a534d71e689a 100644 --- a/include/uapi/linux/audit.h +++ b/include/uapi/linux/audit.h @@ -116,6 +116,7 @@ #define AUDIT_FANOTIFY 1331 /* Fanotify access decision */ #define AUDIT_TIME_INJOFFSET 1332 /* Timekeeping offset injected */ #define AUDIT_TIME_ADJNTPVAL 1333 /* NTP value adjustment */ +#define AUDIT_BPF 1334 /* BPF subsystem */ #define AUDIT_AVC 1400 /* SE Linux avc denial or grant */ #define AUDIT_SELINUX_ERR 1401 /* Internal SE Linux Errors */ diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index e3461ec59570..66b90eaf99fe 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -23,6 +23,7 @@ #include #include #include +#include #include #define IS_FD_ARRAY(map) ((map)->map_type == BPF_MAP_TYPE_PERF_EVENT_ARRAY || \ @@ -1306,6 +1307,36 @@ static int find_prog_type(enum bpf_prog_type type, struct bpf_prog *prog) return 0; } +enum bpf_audit { + BPF_AUDIT_LOAD, + BPF_AUDIT_UNLOAD, + BPF_AUDIT_MAX, +}; + +static const char * const bpf_audit_str[BPF_AUDIT_MAX] = { + [BPF_AUDIT_LOAD] = "LOAD", + [BPF_AUDIT_UNLOAD] = "UNLOAD", +}; + +static void bpf_audit_prog(const struct bpf_prog *prog, unsigned int op) +{ + struct audit_context *ctx = NULL; + struct audit_buffer *ab; + + if (WARN_ON_ONCE(op >= BPF_AUDIT_MAX)) + return; + if (audit_enabled == AUDIT_OFF) + return; + if (op == BPF_AUDIT_LOAD) + ctx = audit_context(); + ab = audit_log_start(ctx, GFP_ATOMIC, AUDIT_BPF); + if (unlikely(!ab)) + return; + audit_log_format(ab, "prog-id=%u op=%s", + prog->aux->id, bpf_audit_str[op]); + audit_log_end(ab); +} + int __bpf_prog_charge(struct user_struct *user, u32 pages) { unsigned long memlock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT; @@ -1421,6 +1452,7 @@ static void __bpf_prog_put(struct bpf_prog *prog, bool do_idr_lock) { if (atomic64_dec_and_test(&prog->aux->refcnt)) { perf_event_bpf_event(prog, PERF_BPF_EVENT_PROG_UNLOAD, 0); + bpf_audit_prog(prog, BPF_AUDIT_UNLOAD); /* bpf_prog_free_id() must be called first */ bpf_prog_free_id(prog, do_idr_lock); __bpf_prog_put_noref(prog, true); @@ -1830,6 +1862,7 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr) */ bpf_prog_kallsyms_add(prog); perf_event_bpf_event(prog, PERF_BPF_EVENT_PROG_LOAD, 0); + bpf_audit_prog(prog, BPF_AUDIT_LOAD); err = bpf_prog_new_fd(prog); if (err < 0) -- cgit v1.2.3-71-gd317 From a4516c7053b96fed98a0439a9226983b5275474b Mon Sep 17 00:00:00 2001 From: Russell King Date: Wed, 11 Dec 2019 10:55:59 +0000 Subject: net: sfp: derive interface mode from ethtool link modes We don't need the EEPROM ID to derive the phy interface mode as we can derive it merely from the ethtool link modes. Remove the EEPROM ID argument to sfp_select_interface(). Reviewed-by: Andrew Lunn Signed-off-by: Russell King Signed-off-by: David S. 
Miller --- drivers/net/phy/marvell10g.c | 2 +- drivers/net/phy/phylink.c | 2 +- drivers/net/phy/sfp-bus.c | 11 ++++------- include/linux/sfp.h | 2 -- 4 files changed, 6 insertions(+), 11 deletions(-) (limited to 'include') diff --git a/drivers/net/phy/marvell10g.c b/drivers/net/phy/marvell10g.c index 1bf13017d288..512f27b0b5cd 100644 --- a/drivers/net/phy/marvell10g.c +++ b/drivers/net/phy/marvell10g.c @@ -214,7 +214,7 @@ static int mv3310_sfp_insert(void *upstream, const struct sfp_eeprom_id *id) phy_interface_t iface; sfp_parse_support(phydev->sfp_bus, id, support); - iface = sfp_select_interface(phydev->sfp_bus, id, support); + iface = sfp_select_interface(phydev->sfp_bus, support); if (iface != PHY_INTERFACE_MODE_10GKR) { dev_err(&phydev->mdio.dev, "incompatible SFP module inserted\n"); diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c index 9a616d6bc4eb..b3e4d9bf8113 100644 --- a/drivers/net/phy/phylink.c +++ b/drivers/net/phy/phylink.c @@ -1714,7 +1714,7 @@ static int phylink_sfp_module_insert(void *upstream, linkmode_copy(support1, support); - iface = sfp_select_interface(pl->sfp_bus, id, config.advertising); + iface = sfp_select_interface(pl->sfp_bus, config.advertising); if (iface == PHY_INTERFACE_MODE_NA) { phylink_err(pl, "selection of interface failed, advertisement %*pb\n", diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c index 02ab07624c89..1561962fda30 100644 --- a/drivers/net/phy/sfp-bus.c +++ b/drivers/net/phy/sfp-bus.c @@ -320,16 +320,12 @@ EXPORT_SYMBOL_GPL(sfp_parse_support); /** * sfp_select_interface() - Select appropriate phy_interface_t mode * @bus: a pointer to the &struct sfp_bus structure for the sfp module - * @id: a pointer to the module's &struct sfp_eeprom_id * @link_modes: ethtool link modes mask * - * Derive the phy_interface_t mode for the information found in the - * module's identifying EEPROM and the link modes mask. There is no - * standard or defined way to derive this information, so we decide - * based upon the link mode mask. + * Derive the phy_interface_t mode for the SFP module from the link + * modes mask. 
*/ phy_interface_t sfp_select_interface(struct sfp_bus *bus, - const struct sfp_eeprom_id *id, unsigned long *link_modes) { if (phylink_test(link_modes, 10000baseCR_Full) || @@ -342,7 +338,8 @@ phy_interface_t sfp_select_interface(struct sfp_bus *bus, if (phylink_test(link_modes, 2500baseX_Full)) return PHY_INTERFACE_MODE_2500BASEX; - if (id->base.e1000_base_t) + if (phylink_test(link_modes, 1000baseT_Half) || + phylink_test(link_modes, 1000baseT_Full)) return PHY_INTERFACE_MODE_SGMII; if (phylink_test(link_modes, 1000baseX_Full)) diff --git a/include/linux/sfp.h b/include/linux/sfp.h index 487fd9412d10..8d7b98c214d7 100644 --- a/include/linux/sfp.h +++ b/include/linux/sfp.h @@ -504,7 +504,6 @@ int sfp_parse_port(struct sfp_bus *bus, const struct sfp_eeprom_id *id, void sfp_parse_support(struct sfp_bus *bus, const struct sfp_eeprom_id *id, unsigned long *support); phy_interface_t sfp_select_interface(struct sfp_bus *bus, - const struct sfp_eeprom_id *id, unsigned long *link_modes); int sfp_get_module_info(struct sfp_bus *bus, struct ethtool_modinfo *modinfo); @@ -532,7 +531,6 @@ static inline void sfp_parse_support(struct sfp_bus *bus, } static inline phy_interface_t sfp_select_interface(struct sfp_bus *bus, - const struct sfp_eeprom_id *id, unsigned long *link_modes) { return PHY_INTERFACE_MODE_NA; -- cgit v1.2.3-71-gd317 From 0fbd26a9fb6875b98fcfff523831fec47bc5e9a2 Mon Sep 17 00:00:00 2001 From: Russell King Date: Wed, 11 Dec 2019 10:56:04 +0000 Subject: net: sfp: add more extended compliance codes SFF-8024 is used to define various constants re-used in several SFF SFP-related specifications. Split these constants from the enum, and rename them to indicate that they're defined by SFF-8024. Add and use updated SFF-8024 extended compliance code definitions for 10GBASE-T, 5GBASE-T and 2.5GBASE-T modules. Reviewed-by: Andrew Lunn Signed-off-by: Russell King Signed-off-by: David S. Miller --- drivers/net/phy/sfp-bus.c | 60 ++++++++++++++++++++-------------- drivers/net/phy/sfp.c | 4 +-- include/linux/sfp.h | 82 +++++++++++++++++++++++++++++++---------------- 3 files changed, 93 insertions(+), 53 deletions(-) (limited to 'include') diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c index 1561962fda30..c6627f1e5d68 100644 --- a/drivers/net/phy/sfp-bus.c +++ b/drivers/net/phy/sfp-bus.c @@ -124,35 +124,35 @@ int sfp_parse_port(struct sfp_bus *bus, const struct sfp_eeprom_id *id, /* port is the physical connector, set this from the connector field. 
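For orientation, a minimal sketch of how the renamed SFF-8024 identifier constants end up being used for module detection (a hypothetical helper condensing the two sfp.c checks further below):

	static bool module_supported(const struct sfp_eeprom_id *id)
	{
		/* SFF-8024 identifier byte: SFP, or SFF-8472 compliant module */
		return (id->base.phys_id == SFF8024_ID_SFP ||
			id->base.phys_id == SFF8024_ID_SFF_8472) &&
		       id->base.phys_ext_id == SFP_PHYS_EXT_ID_SFP;
	}
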
*/ switch (id->base.connector) { - case SFP_CONNECTOR_SC: - case SFP_CONNECTOR_FIBERJACK: - case SFP_CONNECTOR_LC: - case SFP_CONNECTOR_MT_RJ: - case SFP_CONNECTOR_MU: - case SFP_CONNECTOR_OPTICAL_PIGTAIL: + case SFF8024_CONNECTOR_SC: + case SFF8024_CONNECTOR_FIBERJACK: + case SFF8024_CONNECTOR_LC: + case SFF8024_CONNECTOR_MT_RJ: + case SFF8024_CONNECTOR_MU: + case SFF8024_CONNECTOR_OPTICAL_PIGTAIL: + case SFF8024_CONNECTOR_MPO_1X12: + case SFF8024_CONNECTOR_MPO_2X16: port = PORT_FIBRE; break; - case SFP_CONNECTOR_RJ45: + case SFF8024_CONNECTOR_RJ45: port = PORT_TP; break; - case SFP_CONNECTOR_COPPER_PIGTAIL: + case SFF8024_CONNECTOR_COPPER_PIGTAIL: port = PORT_DA; break; - case SFP_CONNECTOR_UNSPEC: + case SFF8024_CONNECTOR_UNSPEC: if (id->base.e1000_base_t) { port = PORT_TP; break; } /* fallthrough */ - case SFP_CONNECTOR_SG: /* guess */ - case SFP_CONNECTOR_MPO_1X12: - case SFP_CONNECTOR_MPO_2X16: - case SFP_CONNECTOR_HSSDC_II: - case SFP_CONNECTOR_NOSEPARATE: - case SFP_CONNECTOR_MXC_2X16: + case SFF8024_CONNECTOR_SG: /* guess */ + case SFF8024_CONNECTOR_HSSDC_II: + case SFF8024_CONNECTOR_NOSEPARATE: + case SFF8024_CONNECTOR_MXC_2X16: port = PORT_OTHER; break; default: @@ -261,22 +261,33 @@ void sfp_parse_support(struct sfp_bus *bus, const struct sfp_eeprom_id *id, } switch (id->base.extended_cc) { - case 0x00: /* Unspecified */ + case SFF8024_ECC_UNSPEC: break; - case 0x02: /* 100Gbase-SR4 or 25Gbase-SR */ + case SFF8024_ECC_100GBASE_SR4_25GBASE_SR: phylink_set(modes, 100000baseSR4_Full); phylink_set(modes, 25000baseSR_Full); break; - case 0x03: /* 100Gbase-LR4 or 25Gbase-LR */ - case 0x04: /* 100Gbase-ER4 or 25Gbase-ER */ + case SFF8024_ECC_100GBASE_LR4_25GBASE_LR: + case SFF8024_ECC_100GBASE_ER4_25GBASE_ER: phylink_set(modes, 100000baseLR4_ER4_Full); break; - case 0x0b: /* 100Gbase-CR4 or 25Gbase-CR CA-L */ - case 0x0c: /* 25Gbase-CR CA-S */ - case 0x0d: /* 25Gbase-CR CA-N */ + case SFF8024_ECC_100GBASE_CR4: phylink_set(modes, 100000baseCR4_Full); + /* fallthrough */ + case SFF8024_ECC_25GBASE_CR_S: + case SFF8024_ECC_25GBASE_CR_N: phylink_set(modes, 25000baseCR_Full); break; + case SFF8024_ECC_10GBASE_T_SFI: + case SFF8024_ECC_10GBASE_T_SR: + phylink_set(modes, 10000baseT_Full); + break; + case SFF8024_ECC_5GBASE_T: + phylink_set(modes, 5000baseT_Full); + break; + case SFF8024_ECC_2_5GBASE_T: + phylink_set(modes, 2500baseT_Full); + break; default: dev_warn(bus->sfp_dev, "Unknown/unsupported extended compliance code: 0x%02x\n", @@ -301,7 +312,7 @@ void sfp_parse_support(struct sfp_bus *bus, const struct sfp_eeprom_id *id, */ if (bitmap_empty(modes, __ETHTOOL_LINK_MODE_MASK_NBITS)) { /* If the encoding and bit rate allows 1000baseX */ - if (id->base.encoding == SFP_ENCODING_8B10B && br_nom && + if (id->base.encoding == SFF8024_ENCODING_8B10B && br_nom && br_min <= 1300 && br_max >= 1200) phylink_set(modes, 1000baseX_Full); } @@ -332,7 +343,8 @@ phy_interface_t sfp_select_interface(struct sfp_bus *bus, phylink_test(link_modes, 10000baseSR_Full) || phylink_test(link_modes, 10000baseLR_Full) || phylink_test(link_modes, 10000baseLRM_Full) || - phylink_test(link_modes, 10000baseER_Full)) + phylink_test(link_modes, 10000baseER_Full) || + phylink_test(link_modes, 10000baseT_Full)) return PHY_INTERFACE_MODE_10GKR; if (phylink_test(link_modes, 2500baseX_Full)) diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c index ae6a52a19458..ad3808307dba 100644 --- a/drivers/net/phy/sfp.c +++ b/drivers/net/phy/sfp.c @@ -242,7 +242,7 @@ struct sfp { static bool sff_module_supported(const struct 
sfp_eeprom_id *id) { - return id->base.phys_id == SFP_PHYS_ID_SFF && + return id->base.phys_id == SFF8024_ID_SFF_8472 && id->base.phys_ext_id == SFP_PHYS_EXT_ID_SFP; } @@ -253,7 +253,7 @@ static const struct sff_data sff_data = { static bool sfp_module_supported(const struct sfp_eeprom_id *id) { - return id->base.phys_id == SFP_PHYS_ID_SFP && + return id->base.phys_id == SFF8024_ID_SFP && id->base.phys_ext_id == SFP_PHYS_EXT_ID_SFP; } diff --git a/include/linux/sfp.h b/include/linux/sfp.h index 8d7b98c214d7..373d8b67ea86 100644 --- a/include/linux/sfp.h +++ b/include/linux/sfp.h @@ -275,6 +275,61 @@ struct sfp_diag { __be16 cal_v_offset; } __packed; +/* SFF8024 defined constants */ +enum { + SFF8024_ID_UNK = 0x00, + SFF8024_ID_SFF_8472 = 0x02, + SFF8024_ID_SFP = 0x03, + SFF8024_ID_DWDM_SFP = 0x0b, + SFF8024_ID_QSFP_8438 = 0x0c, + SFF8024_ID_QSFP_8436_8636 = 0x0d, + SFF8024_ID_QSFP28_8636 = 0x11, + + SFF8024_ENCODING_UNSPEC = 0x00, + SFF8024_ENCODING_8B10B = 0x01, + SFF8024_ENCODING_4B5B = 0x02, + SFF8024_ENCODING_NRZ = 0x03, + SFF8024_ENCODING_8472_MANCHESTER= 0x04, + SFF8024_ENCODING_8472_SONET = 0x05, + SFF8024_ENCODING_8472_64B66B = 0x06, + SFF8024_ENCODING_8436_MANCHESTER= 0x06, + SFF8024_ENCODING_8436_SONET = 0x04, + SFF8024_ENCODING_8436_64B66B = 0x05, + SFF8024_ENCODING_256B257B = 0x07, + SFF8024_ENCODING_PAM4 = 0x08, + + SFF8024_CONNECTOR_UNSPEC = 0x00, + /* codes 01-05 not supportable on SFP, but some modules have single SC */ + SFF8024_CONNECTOR_SC = 0x01, + SFF8024_CONNECTOR_FIBERJACK = 0x06, + SFF8024_CONNECTOR_LC = 0x07, + SFF8024_CONNECTOR_MT_RJ = 0x08, + SFF8024_CONNECTOR_MU = 0x09, + SFF8024_CONNECTOR_SG = 0x0a, + SFF8024_CONNECTOR_OPTICAL_PIGTAIL= 0x0b, + SFF8024_CONNECTOR_MPO_1X12 = 0x0c, + SFF8024_CONNECTOR_MPO_2X16 = 0x0d, + SFF8024_CONNECTOR_HSSDC_II = 0x20, + SFF8024_CONNECTOR_COPPER_PIGTAIL= 0x21, + SFF8024_CONNECTOR_RJ45 = 0x22, + SFF8024_CONNECTOR_NOSEPARATE = 0x23, + SFF8024_CONNECTOR_MXC_2X16 = 0x24, + + SFF8024_ECC_UNSPEC = 0x00, + SFF8024_ECC_100G_25GAUI_C2M_AOC = 0x01, + SFF8024_ECC_100GBASE_SR4_25GBASE_SR = 0x02, + SFF8024_ECC_100GBASE_LR4_25GBASE_LR = 0x03, + SFF8024_ECC_100GBASE_ER4_25GBASE_ER = 0x04, + SFF8024_ECC_100GBASE_SR10 = 0x05, + SFF8024_ECC_100GBASE_CR4 = 0x0b, + SFF8024_ECC_25GBASE_CR_S = 0x0c, + SFF8024_ECC_25GBASE_CR_N = 0x0d, + SFF8024_ECC_10GBASE_T_SFI = 0x16, + SFF8024_ECC_10GBASE_T_SR = 0x1c, + SFF8024_ECC_5GBASE_T = 0x1d, + SFF8024_ECC_2_5GBASE_T = 0x1e, +}; + /* SFP EEPROM registers */ enum { SFP_PHYS_ID = 0x00, @@ -309,34 +364,7 @@ enum { SFP_SFF8472_COMPLIANCE = 0x5e, SFP_CC_EXT = 0x5f, - SFP_PHYS_ID_SFF = 0x02, - SFP_PHYS_ID_SFP = 0x03, SFP_PHYS_EXT_ID_SFP = 0x04, - SFP_CONNECTOR_UNSPEC = 0x00, - /* codes 01-05 not supportable on SFP, but some modules have single SC */ - SFP_CONNECTOR_SC = 0x01, - SFP_CONNECTOR_FIBERJACK = 0x06, - SFP_CONNECTOR_LC = 0x07, - SFP_CONNECTOR_MT_RJ = 0x08, - SFP_CONNECTOR_MU = 0x09, - SFP_CONNECTOR_SG = 0x0a, - SFP_CONNECTOR_OPTICAL_PIGTAIL = 0x0b, - SFP_CONNECTOR_MPO_1X12 = 0x0c, - SFP_CONNECTOR_MPO_2X16 = 0x0d, - SFP_CONNECTOR_HSSDC_II = 0x20, - SFP_CONNECTOR_COPPER_PIGTAIL = 0x21, - SFP_CONNECTOR_RJ45 = 0x22, - SFP_CONNECTOR_NOSEPARATE = 0x23, - SFP_CONNECTOR_MXC_2X16 = 0x24, - SFP_ENCODING_UNSPEC = 0x00, - SFP_ENCODING_8B10B = 0x01, - SFP_ENCODING_4B5B = 0x02, - SFP_ENCODING_NRZ = 0x03, - SFP_ENCODING_8472_MANCHESTER = 0x04, - SFP_ENCODING_8472_SONET = 0x05, - SFP_ENCODING_8472_64B66B = 0x06, - SFP_ENCODING_256B257B = 0x07, - SFP_ENCODING_PAM4 = 0x08, SFP_OPTIONS_HIGH_POWER_LEVEL = BIT(13), 
SFP_OPTIONS_PAGING_A2 = BIT(12), SFP_OPTIONS_RETIMER = BIT(11), -- cgit v1.2.3-71-gd317 From 74c551ca5a0edcc9cf66a3b73fd95b9a8615bfd0 Mon Sep 17 00:00:00 2001 From: Russell King Date: Wed, 11 Dec 2019 10:56:09 +0000 Subject: net: sfp: add module start/stop upstream notifications When dealing with some copper modules, we can't positively know what the module's capabilities are until we have probed the PHY. Without the full capabilities, we may end up failing a module that we could otherwise drive with a restricted set of capabilities. An example of this would be a module with an NBASE-T PHY plugged into a host that supports phy interface modes 2500BASE-X and SGMII. The PHY supports 10GBASE-R, 5000BASE-X, 2500BASE-X and SGMII interface modes, which means a subset of its capabilities is compatible with the host. However, reading the module EEPROM leads us to believe that the module only supports ethtool link mode 10GBASE-T, which is incompatible with the host - and thus results in the module being rejected. This patch adds an extra notification which is triggered after the SFP module's PHY probe, and a corresponding notification just before the PHY is removed. Reviewed-by: Andrew Lunn Signed-off-by: Russell King Signed-off-by: David S. Miller --- drivers/net/phy/sfp-bus.c | 21 +++++++++++++++++++++ drivers/net/phy/sfp.c | 8 ++++++++ drivers/net/phy/sfp.h | 2 ++ include/linux/sfp.h | 4 ++++ 4 files changed, 35 insertions(+) (limited to 'include') diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c index c6627f1e5d68..eabc9e3f0a9e 100644 --- a/drivers/net/phy/sfp-bus.c +++ b/drivers/net/phy/sfp-bus.c @@ -712,6 +712,27 @@ void sfp_module_remove(struct sfp_bus *bus) } EXPORT_SYMBOL_GPL(sfp_module_remove); +int sfp_module_start(struct sfp_bus *bus) +{ + const struct sfp_upstream_ops *ops = sfp_get_upstream_ops(bus); + int ret = 0; + + if (ops && ops->module_start) + ret = ops->module_start(bus->upstream); + + return ret; +} +EXPORT_SYMBOL_GPL(sfp_module_start); + +void sfp_module_stop(struct sfp_bus *bus) +{ + const struct sfp_upstream_ops *ops = sfp_get_upstream_ops(bus); + + if (ops && ops->module_stop) + ops->module_stop(bus->upstream); +} +EXPORT_SYMBOL_GPL(sfp_module_stop); + static void sfp_socket_clear(struct sfp_bus *bus) { bus->sfp_dev = NULL; diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c index ad3808307dba..23f30dac0f17 100644 --- a/drivers/net/phy/sfp.c +++ b/drivers/net/phy/sfp.c @@ -59,6 +59,7 @@ enum { SFP_DEV_UP, SFP_S_DOWN = 0, + SFP_S_FAIL, SFP_S_WAIT, SFP_S_INIT, SFP_S_INIT_TX_FAULT, @@ -122,6 +123,7 @@ static const char *event_to_str(unsigned short event) static const char * const sm_state_strings[] = { [SFP_S_DOWN] = "down", + [SFP_S_FAIL] = "fail", [SFP_S_WAIT] = "wait", [SFP_S_INIT] = "init", [SFP_S_INIT_TX_FAULT] = "init_tx_fault", @@ -1826,6 +1828,8 @@ static void sfp_sm_main(struct sfp *sfp, unsigned int event) if (sfp->sm_state == SFP_S_LINK_UP && sfp->sm_dev_state == SFP_DEV_UP) sfp_sm_link_down(sfp); + if (sfp->sm_state > SFP_S_INIT) + sfp_module_stop(sfp->sfp_bus); if (sfp->mod_phy) sfp_sm_phy_detach(sfp); sfp_module_tx_disable(sfp); @@ -1893,6 +1897,10 @@ static void sfp_sm_main(struct sfp *sfp, unsigned int event) * clear. Probe for the PHY and check the LOS state.
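A minimal sketch of the upstream side of the new notifications (hypothetical driver; the two ops are documented in the include/linux/sfp.h hunk below):

	static int my_module_start(void *priv)
	{
		/* The module's PHY, if any, has been probed by now;
		 * returning nonzero sends the SFP state machine to the
		 * new SFP_S_FAIL state.
		 */
		return 0;
	}

	static void my_module_stop(void *priv)
	{
		/* Called just before the module's PHY is removed */
	}

	static const struct sfp_upstream_ops my_upstream_ops = {
		.module_start = my_module_start,
		.module_stop  = my_module_stop,
		/* .attach, .detach, .module_insert, ... as before */
	};
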
*/ sfp_sm_probe_for_phy(sfp); + if (sfp_module_start(sfp->sfp_bus)) { + sfp_sm_next(sfp, SFP_S_FAIL, 0); + break; + } sfp_sm_link_check_los(sfp); /* Reset the fault retry count */ diff --git a/drivers/net/phy/sfp.h b/drivers/net/phy/sfp.h index 64f54b0bbd8c..b83f70526270 100644 --- a/drivers/net/phy/sfp.h +++ b/drivers/net/phy/sfp.h @@ -22,6 +22,8 @@ void sfp_link_up(struct sfp_bus *bus); void sfp_link_down(struct sfp_bus *bus); int sfp_module_insert(struct sfp_bus *bus, const struct sfp_eeprom_id *id); void sfp_module_remove(struct sfp_bus *bus); +int sfp_module_start(struct sfp_bus *bus); +void sfp_module_stop(struct sfp_bus *bus); int sfp_link_configure(struct sfp_bus *bus, const struct sfp_eeprom_id *id); struct sfp_bus *sfp_register_socket(struct device *dev, struct sfp *sfp, const struct sfp_socket_ops *ops); diff --git a/include/linux/sfp.h b/include/linux/sfp.h index 373d8b67ea86..66a56396e8e3 100644 --- a/include/linux/sfp.h +++ b/include/linux/sfp.h @@ -507,6 +507,8 @@ struct sfp_bus; * @module_insert: called after a module has been detected to determine * whether the module is supported for the upstream device. * @module_remove: called after the module has been removed. + * @module_start: called after the PHY probe step + * @module_stop: called before the PHY is removed * @link_down: called when the link is non-operational for whatever * reason. * @link_up: called when the link is operational. @@ -520,6 +522,8 @@ struct sfp_upstream_ops { void (*detach)(void *priv, struct sfp_bus *bus); int (*module_insert)(void *priv, const struct sfp_eeprom_id *id); void (*module_remove)(void *priv); + int (*module_start)(void *priv); + void (*module_stop)(void *priv); void (*link_down)(void *priv); void (*link_up)(void *priv); int (*connect_phy)(void *priv, struct phy_device *); -- cgit v1.2.3-71-gd317 From 52c956003a9d5bcae1f445f9dfd42b624adb6e87 Mon Sep 17 00:00:00 2001 From: Russell King Date: Wed, 11 Dec 2019 10:56:45 +0000 Subject: net: phylink: delay MAC configuration for copper SFP modules Knowing whether we need to delay the MAC configuration because a module may have a PHY is useful to phylink to allow NBASE-T modules to work on systems supporting no more than 2.5G speeds. This commit allows us to delay such configuration until after the PHY has been probed by recording the parsed capabilities, and if the module may have a PHY, doing no more until the module_start() notification is called. At that point, we either have a PHY, or we don't. We move the PHY-based setup a little later, and use the PHY's supported capabilities rather than the EEPROM-parsed capabilities to determine whether we can support the PHY. Reviewed-by: Andrew Lunn Signed-off-by: Russell King Signed-off-by: David S. Miller --- drivers/net/phy/phylink.c | 53 ++++++++++++++++++++++++++++++++++++++--------- drivers/net/phy/sfp-bus.c | 28 +++++++++++++++++++++++++ include/linux/sfp.h | 7 +++++++ 3 files changed, 78 insertions(+), 10 deletions(-) (limited to 'include') diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c index 571521d928f2..01f958508f80 100644 --- a/drivers/net/phy/phylink.c +++ b/drivers/net/phy/phylink.c @@ -72,6 +72,9 @@ struct phylink { bool mac_link_dropped; struct sfp_bus *sfp_bus; + bool sfp_may_have_phy; + __ETHTOOL_DECLARE_LINK_MODE_MASK(sfp_support); + u8 sfp_port; }; #define phylink_printk(level, pl, fmt, ...)
\ @@ -1684,7 +1687,7 @@ static void phylink_sfp_detach(void *upstream, struct sfp_bus *bus) pl->netdev->sfp_bus = NULL; } -static int phylink_sfp_config(struct phylink *pl, u8 mode, u8 port, +static int phylink_sfp_config(struct phylink *pl, u8 mode, const unsigned long *supported, const unsigned long *advertising) { @@ -1757,7 +1760,7 @@ static int phylink_sfp_config(struct phylink *pl, u8 mode, u8 port, phy_modes(config.interface)); } - pl->link_port = port; + pl->link_port = pl->sfp_port; if (changed && !test_bit(PHYLINK_DISABLE_STOPPED, &pl->phylink_disable_state)) @@ -1770,15 +1773,20 @@ static int phylink_sfp_module_insert(void *upstream, const struct sfp_eeprom_id *id) { struct phylink *pl = upstream; - __ETHTOOL_DECLARE_LINK_MODE_MASK(support) = { 0, }; - u8 port; + unsigned long *support = pl->sfp_support; ASSERT_RTNL(); + linkmode_zero(support); sfp_parse_support(pl->sfp_bus, id, support); - port = sfp_parse_port(pl->sfp_bus, id, support); + pl->sfp_port = sfp_parse_port(pl->sfp_bus, id, support); - return phylink_sfp_config(pl, MLO_AN_INBAND, port, support, support); + /* If this module may have a PHY connecting later, defer until later */ + pl->sfp_may_have_phy = sfp_may_have_phy(pl->sfp_bus, id); + if (pl->sfp_may_have_phy) + return 0; + + return phylink_sfp_config(pl, MLO_AN_INBAND, support, support); } static int phylink_sfp_module_start(void *upstream) @@ -1786,10 +1794,19 @@ static int phylink_sfp_module_start(void *upstream) struct phylink *pl = upstream; /* If this SFP module has a PHY, start the PHY now. */ - if (pl->phydev) + if (pl->phydev) { phy_start(pl->phydev); + return 0; + } - return 0; + /* If the module may have a PHY but we didn't detect one we + * need to configure the MAC here. + */ + if (!pl->sfp_may_have_phy) + return 0; + + return phylink_sfp_config(pl, MLO_AN_INBAND, + pl->sfp_support, pl->sfp_support); } static void phylink_sfp_module_stop(void *upstream) @@ -1823,10 +1840,26 @@ static void phylink_sfp_link_up(void *upstream) static int phylink_sfp_connect_phy(void *upstream, struct phy_device *phy) { struct phylink *pl = upstream; - phy_interface_t interface = pl->link_config.interface; + phy_interface_t interface; int ret; - ret = phylink_attach_phy(pl, phy, pl->link_config.interface); + /* + * This is the new way of dealing with flow control for PHYs, + * as described by Timur Tabi in commit 529ed1275263 ("net: phy: + * phy drivers should not set SUPPORTED_[Asym_]Pause") except + * using our validate call to the MAC, we rely upon the MAC + * clearing the bits from both supported and advertising fields. 
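Condensing the phylink changes in this patch, the resulting flow for a copper module can be sketched as:

	/*
	 * module_insert: parse the EEPROM; if sfp_may_have_phy(), record
	 *                the parsed modes in pl->sfp_support and defer
	 * connect_phy:   a PHY was found -- configure the MAC from the
	 *                PHY's supported/advertising masks
	 * module_start:  no PHY appeared -- fall back to configuring the
	 *                MAC from the recorded EEPROM capabilities
	 */
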
+ */ phy_support_asym_pause(phy); + + /* Do the initial configuration */ + ret = phylink_sfp_config(pl, MLO_AN_INBAND, phy->supported, + phy->advertising); + if (ret < 0) + return ret; + + interface = pl->link_config.interface; + ret = phylink_attach_phy(pl, phy, interface); if (ret < 0) return ret; diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c index eabc9e3f0a9e..06e6429b8b71 100644 --- a/drivers/net/phy/sfp-bus.c +++ b/drivers/net/phy/sfp-bus.c @@ -103,6 +103,7 @@ static const struct sfp_quirk *sfp_lookup_quirk(const struct sfp_eeprom_id *id) return NULL; } + /** * sfp_parse_port() - Parse the EEPROM base ID, setting the port type * @bus: a pointer to the &struct sfp_bus structure for the sfp module @@ -178,6 +179,33 @@ int sfp_parse_port(struct sfp_bus *bus, const struct sfp_eeprom_id *id, } EXPORT_SYMBOL_GPL(sfp_parse_port); +/** + * sfp_may_have_phy() - indicate whether the module may have a PHY + * @bus: a pointer to the &struct sfp_bus structure for the sfp module + * @id: a pointer to the module's &struct sfp_eeprom_id + * + * Parse the EEPROM identification given in @id, and return whether + * this module may have a PHY. + */ +bool sfp_may_have_phy(struct sfp_bus *bus, const struct sfp_eeprom_id *id) +{ + if (id->base.e1000_base_t) + return true; + + if (id->base.phys_id != SFF8024_ID_DWDM_SFP) { + switch (id->base.extended_cc) { + case SFF8024_ECC_10GBASE_T_SFI: + case SFF8024_ECC_10GBASE_T_SR: + case SFF8024_ECC_5GBASE_T: + case SFF8024_ECC_2_5GBASE_T: + return true; + } + } + + return false; +} +EXPORT_SYMBOL_GPL(sfp_may_have_phy); + /** * sfp_parse_support() - Parse the eeprom id for supported link modes * @bus: a pointer to the &struct sfp_bus structure for the sfp module diff --git a/include/linux/sfp.h b/include/linux/sfp.h index 66a56396e8e3..38893e4dd0f0 100644 --- a/include/linux/sfp.h +++ b/include/linux/sfp.h @@ -533,6 +533,7 @@ struct sfp_upstream_ops { #if IS_ENABLED(CONFIG_SFP) int sfp_parse_port(struct sfp_bus *bus, const struct sfp_eeprom_id *id, unsigned long *support); +bool sfp_may_have_phy(struct sfp_bus *bus, const struct sfp_eeprom_id *id); void sfp_parse_support(struct sfp_bus *bus, const struct sfp_eeprom_id *id, unsigned long *support); phy_interface_t sfp_select_interface(struct sfp_bus *bus, @@ -556,6 +557,12 @@ static inline int sfp_parse_port(struct sfp_bus *bus, const struct sfp_eeprom_id *id, unsigned long *support) { return PORT_OTHER; } +static inline bool sfp_may_have_phy(struct sfp_bus *bus, + const struct sfp_eeprom_id *id) +{ + return false; +} + static inline void sfp_parse_support(struct sfp_bus *bus, const struct sfp_eeprom_id *id, unsigned long *support) -- cgit v1.2.3-71-gd317 From ef343b35d46667668a099655fca4a5b2e43a5dfe Mon Sep 17 00:00:00 2001 From: Stefano Garzarella Date: Tue, 10 Dec 2019 11:43:03 +0100 Subject: vsock: add VMADDR_CID_LOCAL definition The VMADDR_CID_RESERVED (1) was used by VMCI, but now it is not used anymore, so we can reuse it for local communication (loopback), adding the new well-known CID: VMADDR_CID_LOCAL. Cc: Jorgen Hansen Reviewed-by: Stefan Hajnoczi Reviewed-by: Jorgen Hansen Signed-off-by: Stefano Garzarella Signed-off-by: David S.
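Miller

A minimal sketch of addressing the reused CID from userspace (hypothetical port number; the sockaddr_vm layout itself is unchanged):

	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid    = VMADDR_CID_LOCAL,	/* loopback */
		.svm_port   = 1234,
	};
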
--- include/uapi/linux/vm_sockets.h | 8 +++++--- net/vmw_vsock/vmci_transport.c | 2 +- 2 files changed, 6 insertions(+), 4 deletions(-) (limited to 'include') diff --git a/include/uapi/linux/vm_sockets.h b/include/uapi/linux/vm_sockets.h index 68d57c5e99bc..fd0ed7221645 100644 --- a/include/uapi/linux/vm_sockets.h +++ b/include/uapi/linux/vm_sockets.h @@ -99,11 +99,13 @@ #define VMADDR_CID_HYPERVISOR 0 -/* This CID is specific to VMCI and can be considered reserved (even VMCI - * doesn't use it anymore, it's a legacy value from an older release). +/* Use this as the destination CID in an address when referring to the + * local communication (loopback). + * (This was VMADDR_CID_RESERVED, but even VMCI doesn't use it anymore, + * it was a legacy value from an older release). */ -#define VMADDR_CID_RESERVED 1 +#define VMADDR_CID_LOCAL 1 /* Use this as the destination CID in an address when referring to the host * (any process other than the hypervisor). VMCI relies on it being 2, but diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c index 644d32e43d23..4b8b1150a738 100644 --- a/net/vmw_vsock/vmci_transport.c +++ b/net/vmw_vsock/vmci_transport.c @@ -648,7 +648,7 @@ static int vmci_transport_recv_dgram_cb(void *data, struct vmci_datagram *dg) static bool vmci_transport_stream_allow(u32 cid, u32 port) { static const u32 non_socket_contexts[] = { - VMADDR_CID_RESERVED, + VMADDR_CID_LOCAL, }; int i; -- cgit v1.2.3-71-gd317 From 0e12190578d081d4fe54d41635ec6e5a6eb0d01a Mon Sep 17 00:00:00 2001 From: Stefano Garzarella Date: Tue, 10 Dec 2019 11:43:04 +0100 Subject: vsock: add local transport support in the vsock core This patch allows registering a transport able to handle local communication (loopback). Reviewed-by: Stefan Hajnoczi Signed-off-by: Stefano Garzarella Signed-off-by: David S.
Miller --- include/net/af_vsock.h | 2 ++ net/vmw_vsock/af_vsock.c | 17 ++++++++++++++++- 2 files changed, 18 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h index 4206dc6d813f..b1c717286993 100644 --- a/include/net/af_vsock.h +++ b/include/net/af_vsock.h @@ -98,6 +98,8 @@ struct vsock_transport_send_notify_data { #define VSOCK_TRANSPORT_F_G2H 0x00000002 /* Transport provides DGRAM communication */ #define VSOCK_TRANSPORT_F_DGRAM 0x00000004 +/* Transport provides local (loopback) communication */ +#define VSOCK_TRANSPORT_F_LOCAL 0x00000008 struct vsock_transport { struct module *module; diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c index 74db4cd637a7..3da0749a0c97 100644 --- a/net/vmw_vsock/af_vsock.c +++ b/net/vmw_vsock/af_vsock.c @@ -136,6 +136,8 @@ static const struct vsock_transport *transport_h2g; static const struct vsock_transport *transport_g2h; /* Transport used for DGRAM communication */ static const struct vsock_transport *transport_dgram; +/* Transport used for local communication */ +static const struct vsock_transport *transport_local; static DEFINE_MUTEX(vsock_register_mutex); /**** UTILS ****/ @@ -2137,7 +2139,7 @@ EXPORT_SYMBOL_GPL(vsock_core_get_transport); int vsock_core_register(const struct vsock_transport *t, int features) { - const struct vsock_transport *t_h2g, *t_g2h, *t_dgram; + const struct vsock_transport *t_h2g, *t_g2h, *t_dgram, *t_local; int err = mutex_lock_interruptible(&vsock_register_mutex); if (err) @@ -2146,6 +2148,7 @@ int vsock_core_register(const struct vsock_transport *t, int features) t_h2g = transport_h2g; t_g2h = transport_g2h; t_dgram = transport_dgram; + t_local = transport_local; if (features & VSOCK_TRANSPORT_F_H2G) { if (t_h2g) { @@ -2171,9 +2174,18 @@ int vsock_core_register(const struct vsock_transport *t, int features) t_dgram = t; } + if (features & VSOCK_TRANSPORT_F_LOCAL) { + if (t_local) { + err = -EBUSY; + goto err_busy; + } + t_local = t; + } + transport_h2g = t_h2g; transport_g2h = t_g2h; transport_dgram = t_dgram; + transport_local = t_local; err_busy: mutex_unlock(&vsock_register_mutex); @@ -2194,6 +2206,9 @@ void vsock_core_unregister(const struct vsock_transport *t) if (transport_dgram == t) transport_dgram = NULL; + if (transport_local == t) + transport_local = NULL; + mutex_unlock(&vsock_register_mutex); } EXPORT_SYMBOL_GPL(vsock_core_unregister); -- cgit v1.2.3-71-gd317 From b4653342b1514cb11f25b727c689451aff02996d Mon Sep 17 00:00:00 2001 From: Kirill Tkhai Date: Mon, 9 Dec 2019 13:03:40 +0300 Subject: net: Allow to show socket-specific information in /proc/[pid]/fdinfo/[fd] This adds .show_fdinfo to socket_file_ops, so protocols will be able to print their specific data in fdinfo. Signed-off-by: Kirill Tkhai Signed-off-by: David S. 
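Miller

A minimal sketch of a protocol implementing the new hook (hypothetical names and field; AF_UNIX wires this up in the following patch):

	static void my_show_fdinfo(struct seq_file *m, struct socket *sock)
	{
		struct my_sock *msk = my_sk(sock->sk);	/* hypothetical */

		seq_printf(m, "my_pending: %u\n", msk->nr_pending);
	}

	static const struct proto_ops my_proto_ops = {
		/* ... the usual ops ... */
		.show_fdinfo = my_show_fdinfo,
	};
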
--- include/linux/net.h | 1 + net/socket.c | 12 ++++++++++++ 2 files changed, 13 insertions(+) (limited to 'include') diff --git a/include/linux/net.h b/include/linux/net.h index 9cafb5f353a9..6451425e828f 100644 --- a/include/linux/net.h +++ b/include/linux/net.h @@ -171,6 +171,7 @@ struct proto_ops { int (*compat_getsockopt)(struct socket *sock, int level, int optname, char __user *optval, int __user *optlen); #endif + void (*show_fdinfo)(struct seq_file *m, struct socket *sock); int (*sendmsg) (struct socket *sock, struct msghdr *m, size_t total_len); /* Notes for implementing recvmsg: diff --git a/net/socket.c b/net/socket.c index 4d38d49d6ad9..532f123e0459 100644 --- a/net/socket.c +++ b/net/socket.c @@ -128,6 +128,7 @@ static ssize_t sock_sendpage(struct file *file, struct page *page, static ssize_t sock_splice_read(struct file *file, loff_t *ppos, struct pipe_inode_info *pipe, size_t len, unsigned int flags); +static void sock_show_fdinfo(struct seq_file *m, struct file *f); /* * Socket files have a set of 'special' operations as well as the generic file ones. These don't appear @@ -150,6 +151,9 @@ static const struct file_operations socket_file_ops = { .sendpage = sock_sendpage, .splice_write = generic_splice_sendpage, .splice_read = sock_splice_read, +#ifdef CONFIG_PROC_FS + .show_fdinfo = sock_show_fdinfo, +#endif }; /* @@ -993,6 +997,14 @@ static ssize_t sock_write_iter(struct kiocb *iocb, struct iov_iter *from) return res; } +static void sock_show_fdinfo(struct seq_file *m, struct file *f) +{ + struct socket *sock = f->private_data; + + if (sock->ops->show_fdinfo) + sock->ops->show_fdinfo(m, sock); +} + /* * Atomic setting of ioctl hooks to avoid race * with module unload. -- cgit v1.2.3-71-gd317 From 3c32da19a858fb1ae8a76bf899160be49f338506 Mon Sep 17 00:00:00 2001 From: Kirill Tkhai Date: Mon, 9 Dec 2019 13:03:46 +0300 Subject: unix: Show number of pending scm files of receive queue in fdinfo Unix sockets are like a black box. You never know what is stored there: there may be a file descriptor holding a mount or a block device, or there may be whole universes with namespaces, sockets with receive queues full of sockets, etc. The patch adds a little debugging and accounts the number of files (not recursively) sitting in the receive queue of a unix socket. Sometimes this is useful for determining which socket should be investigated, or which task should be killed to drop a reference count on a resource. v2: Pass correct argument to lockdep Signed-off-by: Kirill Tkhai Signed-off-by: David S.
Miller --- include/net/af_unix.h | 5 +++++ net/unix/af_unix.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++----- 2 files changed, 56 insertions(+), 5 deletions(-) (limited to 'include') diff --git a/include/net/af_unix.h b/include/net/af_unix.h index 3426d6dacc45..17e10fba2152 100644 --- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -41,6 +41,10 @@ struct unix_skb_parms { u32 consumed; } __randomize_layout; +struct scm_stat { + u32 nr_fds; +}; + #define UNIXCB(skb) (*(struct unix_skb_parms *)&((skb)->cb)) #define unix_state_lock(s) spin_lock(&unix_sk(s)->lock) @@ -65,6 +69,7 @@ struct unix_sock { #define UNIX_GC_MAYBE_CYCLE 1 struct socket_wq peer_wq; wait_queue_entry_t peer_wake; + struct scm_stat scm_stat; }; static inline struct unix_sock *unix_sk(const struct sock *sk) diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c index 7cfdce10de36..050eed3b30dc 100644 --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -676,6 +676,16 @@ static int unix_set_peek_off(struct sock *sk, int val) return 0; } +static void unix_show_fdinfo(struct seq_file *m, struct socket *sock) +{ + struct sock *sk = sock->sk; + struct unix_sock *u; + + if (sk) { + u = unix_sk(sock->sk); + seq_printf(m, "scm_fds: %u\n", READ_ONCE(u->scm_stat.nr_fds)); + } +} static const struct proto_ops unix_stream_ops = { .family = PF_UNIX, @@ -701,6 +711,7 @@ static const struct proto_ops unix_stream_ops = { .sendpage = unix_stream_sendpage, .splice_read = unix_stream_splice_read, .set_peek_off = unix_set_peek_off, + .show_fdinfo = unix_show_fdinfo, }; static const struct proto_ops unix_dgram_ops = { @@ -726,6 +737,7 @@ static const struct proto_ops unix_dgram_ops = { .mmap = sock_no_mmap, .sendpage = sock_no_sendpage, .set_peek_off = unix_set_peek_off, + .show_fdinfo = unix_show_fdinfo, }; static const struct proto_ops unix_seqpacket_ops = { @@ -751,6 +763,7 @@ static const struct proto_ops unix_seqpacket_ops = { .mmap = sock_no_mmap, .sendpage = sock_no_sendpage, .set_peek_off = unix_set_peek_off, + .show_fdinfo = unix_show_fdinfo, }; static struct proto unix_proto = { @@ -788,6 +801,7 @@ static struct sock *unix_create1(struct net *net, struct socket *sock, int kern) mutex_init(&u->bindlock); /* single task binding lock */ init_waitqueue_head(&u->peer_wait); init_waitqueue_func_entry(&u->peer_wake, unix_dgram_peer_wake_relay); + memset(&u->scm_stat, 0, sizeof(struct scm_stat)); unix_insert_socket(unix_sockets_unbound(sk), sk); out: if (sk == NULL) @@ -1572,6 +1586,28 @@ static bool unix_skb_scm_eq(struct sk_buff *skb, unix_secdata_eq(scm, skb); } +static void scm_stat_add(struct sock *sk, struct sk_buff *skb) +{ + struct scm_fp_list *fp = UNIXCB(skb).fp; + struct unix_sock *u = unix_sk(sk); + + lockdep_assert_held(&sk->sk_receive_queue.lock); + + if (unlikely(fp && fp->count)) + u->scm_stat.nr_fds += fp->count; +} + +static void scm_stat_del(struct sock *sk, struct sk_buff *skb) +{ + struct scm_fp_list *fp = UNIXCB(skb).fp; + struct unix_sock *u = unix_sk(sk); + + lockdep_assert_held(&sk->sk_receive_queue.lock); + + if (unlikely(fp && fp->count)) + u->scm_stat.nr_fds -= fp->count; +} + /* * Send AF_UNIX data. 
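With the hook wired up, a unix socket's fdinfo gains one line; a rough sketch of the output (illustrative value, generic fdinfo fields elided):

	$ cat /proc/<pid>/fdinfo/<fd>
	...
	scm_fds: 3
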
*/ @@ -1757,7 +1793,10 @@ restart_locked: if (sock_flag(other, SOCK_RCVTSTAMP)) __net_timestamp(skb); maybe_add_creds(skb, sock, other); - skb_queue_tail(&other->sk_receive_queue, skb); + spin_lock(&other->sk_receive_queue.lock); + scm_stat_add(other, skb); + __skb_queue_tail(&other->sk_receive_queue, skb); + spin_unlock(&other->sk_receive_queue.lock); unix_state_unlock(other); other->sk_data_ready(other); sock_put(other); @@ -1859,7 +1898,10 @@ static int unix_stream_sendmsg(struct socket *sock, struct msghdr *msg, goto pipe_err_free; maybe_add_creds(skb, sock, other); - skb_queue_tail(&other->sk_receive_queue, skb); + spin_lock(&other->sk_receive_queue.lock); + scm_stat_add(other, skb); + __skb_queue_tail(&other->sk_receive_queue, skb); + spin_unlock(&other->sk_receive_queue.lock); unix_state_unlock(other); other->sk_data_ready(other); sent += size; @@ -2058,8 +2100,8 @@ static int unix_dgram_recvmsg(struct socket *sock, struct msghdr *msg, mutex_lock(&u->iolock); skip = sk_peek_offset(sk, flags); - skb = __skb_try_recv_datagram(sk, flags, NULL, &skip, &err, - &last); + skb = __skb_try_recv_datagram(sk, flags, scm_stat_del, + &skip, &err, &last); if (skb) break; @@ -2353,8 +2395,12 @@ unlock: sk_peek_offset_bwd(sk, chunk); - if (UNIXCB(skb).fp) + if (UNIXCB(skb).fp) { + spin_lock(&sk->sk_receive_queue.lock); + scm_stat_del(sk, skb); + spin_unlock(&sk->sk_receive_queue.lock); unix_detach_fds(&scm, skb); + } if (unix_skb_len(skb)) break; -- cgit v1.2.3-71-gd317 From f74877a5457d34d604dba6dbbb13c4c05bac8b93 Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Wed, 11 Dec 2019 10:58:14 +0100 Subject: rtnetlink: provide permanent hardware address in RTM_NEWLINK Permanent hardware address of a network device was traditionally provided via ethtool ioctl interface but as Jiri Pirko pointed out in a review of ethtool netlink interface, rtnetlink is much more suitable for it so let's add it to the RTM_NEWLINK message. Add IFLA_PERM_ADDRESS attribute to RTM_NEWLINK messages unless the permanent address is all zeros (i.e. device driver did not fill it). As permanent address is not modifiable, reject userspace requests containing IFLA_PERM_ADDRESS attribute. Note: we already provide permanent hardware address for bond slaves; unfortunately we cannot drop that attribute for backward compatibility reasons. v5 -> v6: only add the attribute if permanent address is not zero Signed-off-by: Michal Kubecek Acked-by: Jiri Pirko Acked-by: Stephen Hemminger Signed-off-by: David S. 
Miller --- include/uapi/linux/if_link.h | 1 + net/core/rtnetlink.c | 5 +++++ 2 files changed, 6 insertions(+) (limited to 'include') diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h index 8aec8769d944..1d69f637c5d6 100644 --- a/include/uapi/linux/if_link.h +++ b/include/uapi/linux/if_link.h @@ -169,6 +169,7 @@ enum { IFLA_MAX_MTU, IFLA_PROP_LIST, IFLA_ALT_IFNAME, /* Alternative ifname */ + IFLA_PERM_ADDRESS, __IFLA_MAX }; diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c index 02916f43bf63..20bc406f3871 100644 --- a/net/core/rtnetlink.c +++ b/net/core/rtnetlink.c @@ -1041,6 +1041,7 @@ static noinline size_t if_nlmsg_size(const struct net_device *dev, + nla_total_size(4) /* IFLA_MIN_MTU */ + nla_total_size(4) /* IFLA_MAX_MTU */ + rtnl_prop_list_size(dev) + + nla_total_size(MAX_ADDR_LEN) /* IFLA_PERM_ADDRESS */ + 0; } @@ -1757,6 +1758,9 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb, nla_put_s32(skb, IFLA_NEW_IFINDEX, new_ifindex) < 0) goto nla_put_failure; + if (memchr_inv(dev->perm_addr, '\0', dev->addr_len) && + nla_put(skb, IFLA_PERM_ADDRESS, dev->addr_len, dev->perm_addr)) + goto nla_put_failure; rcu_read_lock(); if (rtnl_fill_link_af(skb, dev, ext_filter_mask)) @@ -1822,6 +1826,7 @@ static const struct nla_policy ifla_policy[IFLA_MAX+1] = { [IFLA_PROP_LIST] = { .type = NLA_NESTED }, [IFLA_ALT_IFNAME] = { .type = NLA_STRING, .len = ALTIFNAMSIZ - 1 }, + [IFLA_PERM_ADDRESS] = { .type = NLA_REJECT }, }; static const struct nla_policy ifla_info_policy[IFLA_INFO_MAX+1] = { -- cgit v1.2.3-71-gd317 From 32d5109a9d864aea3981f0b5ea736eee4e11b42a Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Wed, 11 Dec 2019 10:58:19 +0100 Subject: netlink: rename nl80211_validate_nested() to nla_validate_nested() Function nl80211_validate_nested() is not specific to nl80211, it's a counterpart to nla_validate_nested_deprecated() with strict validation. For consistency with other validation and parse functions, rename it to nla_validate_nested(). Signed-off-by: Michal Kubecek Acked-by: Jiri Pirko Reviewed-by: Johannes Berg Signed-off-by: David S. 
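Miller

A short sketch of a call site after the rename (hypothetical attribute set and policy; strict validation is implied by the non-deprecated name, the signature is unchanged):

	err = nla_validate_nested(attr, MY_ATTR_MAX, my_policy, extack);
	if (err)
		return err;
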
--- include/net/netlink.h | 8 ++++---- net/wireless/nl80211.c | 3 +-- 2 files changed, 5 insertions(+), 6 deletions(-) (limited to 'include') diff --git a/include/net/netlink.h b/include/net/netlink.h index b140c8f1be22..56c365dc6dc7 100644 --- a/include/net/netlink.h +++ b/include/net/netlink.h @@ -1735,7 +1735,7 @@ static inline void nla_nest_cancel(struct sk_buff *skb, struct nlattr *start) } /** - * nla_validate_nested - Validate a stream of nested attributes + * __nla_validate_nested - Validate a stream of nested attributes * @start: container attribute * @maxtype: maximum attribute type to be expected * @policy: validation policy @@ -1758,9 +1758,9 @@ static inline int __nla_validate_nested(const struct nlattr *start, int maxtype, } static inline int -nl80211_validate_nested(const struct nlattr *start, int maxtype, - const struct nla_policy *policy, - struct netlink_ext_ack *extack) +nla_validate_nested(const struct nlattr *start, int maxtype, + const struct nla_policy *policy, + struct netlink_ext_ack *extack) { return __nla_validate_nested(start, maxtype, policy, NL_VALIDATE_STRICT, extack); diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c index da5262b2298b..fa3526592c51 100644 --- a/net/wireless/nl80211.c +++ b/net/wireless/nl80211.c @@ -12900,8 +12900,7 @@ static int nl80211_vendor_check_policy(const struct wiphy_vendor_command *vcmd, return -EINVAL; } - return nl80211_validate_nested(attr, vcmd->maxattr, vcmd->policy, - extack); + return nla_validate_nested(attr, vcmd->maxattr, vcmd->policy, extack); } static int nl80211_vendor_cmd(struct sk_buff *skb, struct genl_info *info) -- cgit v1.2.3-71-gd317 From 428c122f5f6b54f40bd51c47495104b534b5a57c Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Wed, 11 Dec 2019 10:58:34 +0100 Subject: ethtool: provide link mode names as a string set Unlike e.g. netdev features, the ethtool ioctl interface requires the link mode table to be in sync between kernel and userspace for userspace to be able to display and set all link modes supported by the kernel. With the way arbitrary-length bitsets are implemented in the netlink interface, this will no longer be needed. To allow userspace to access all link modes the running kernel supports, add a table of ethernet link mode names and make it available as a string set to userspace GET_STRSET requests. Add a build-time check to make sure names are defined for all modes declared in enum ethtool_link_mode_bit_indices. Once the string set is available, make it also accessible via ioctl. Signed-off-by: Michal Kubecek Reviewed-by: Andrew Lunn Reviewed-by: Jiri Pirko Reviewed-by: Florian Fainelli Signed-off-by: David S.
Miller --- include/uapi/linux/ethtool.h | 2 ++ net/ethtool/common.c | 86 ++++++++++++++++++++++++++++++++++++++++++++ net/ethtool/common.h | 5 +++ net/ethtool/ioctl.c | 6 ++++ 4 files changed, 99 insertions(+) (limited to 'include') diff --git a/include/uapi/linux/ethtool.h b/include/uapi/linux/ethtool.h index d4591792f0b4..f44155840b07 100644 --- a/include/uapi/linux/ethtool.h +++ b/include/uapi/linux/ethtool.h @@ -593,6 +593,7 @@ struct ethtool_pauseparam { * @ETH_SS_RSS_HASH_FUNCS: RSS hush function names * @ETH_SS_PHY_STATS: Statistic names, for use with %ETHTOOL_GPHYSTATS * @ETH_SS_PHY_TUNABLES: PHY tunable names + * @ETH_SS_LINK_MODES: link mode names */ enum ethtool_stringset { ETH_SS_TEST = 0, @@ -604,6 +605,7 @@ enum ethtool_stringset { ETH_SS_TUNABLES, ETH_SS_PHY_STATS, ETH_SS_PHY_TUNABLES, + ETH_SS_LINK_MODES, }; /** diff --git a/net/ethtool/common.c b/net/ethtool/common.c index 8b5e11e7e0a6..0a8728565356 100644 --- a/net/ethtool/common.c +++ b/net/ethtool/common.c @@ -83,3 +83,89 @@ phy_tunable_strings[__ETHTOOL_PHY_TUNABLE_COUNT][ETH_GSTRING_LEN] = { [ETHTOOL_PHY_FAST_LINK_DOWN] = "phy-fast-link-down", [ETHTOOL_PHY_EDPD] = "phy-energy-detect-power-down", }; + +#define __LINK_MODE_NAME(speed, type, duplex) \ + #speed "base" #type "/" #duplex +#define __DEFINE_LINK_MODE_NAME(speed, type, duplex) \ + [ETHTOOL_LINK_MODE(speed, type, duplex)] = \ + __LINK_MODE_NAME(speed, type, duplex) +#define __DEFINE_SPECIAL_MODE_NAME(_mode, _name) \ + [ETHTOOL_LINK_MODE_ ## _mode ## _BIT] = _name + +const char link_mode_names[][ETH_GSTRING_LEN] = { + __DEFINE_LINK_MODE_NAME(10, T, Half), + __DEFINE_LINK_MODE_NAME(10, T, Full), + __DEFINE_LINK_MODE_NAME(100, T, Half), + __DEFINE_LINK_MODE_NAME(100, T, Full), + __DEFINE_LINK_MODE_NAME(1000, T, Half), + __DEFINE_LINK_MODE_NAME(1000, T, Full), + __DEFINE_SPECIAL_MODE_NAME(Autoneg, "Autoneg"), + __DEFINE_SPECIAL_MODE_NAME(TP, "TP"), + __DEFINE_SPECIAL_MODE_NAME(AUI, "AUI"), + __DEFINE_SPECIAL_MODE_NAME(MII, "MII"), + __DEFINE_SPECIAL_MODE_NAME(FIBRE, "FIBRE"), + __DEFINE_SPECIAL_MODE_NAME(BNC, "BNC"), + __DEFINE_LINK_MODE_NAME(10000, T, Full), + __DEFINE_SPECIAL_MODE_NAME(Pause, "Pause"), + __DEFINE_SPECIAL_MODE_NAME(Asym_Pause, "Asym_Pause"), + __DEFINE_LINK_MODE_NAME(2500, X, Full), + __DEFINE_SPECIAL_MODE_NAME(Backplane, "Backplane"), + __DEFINE_LINK_MODE_NAME(1000, KX, Full), + __DEFINE_LINK_MODE_NAME(10000, KX4, Full), + __DEFINE_LINK_MODE_NAME(10000, KR, Full), + __DEFINE_SPECIAL_MODE_NAME(10000baseR_FEC, "10000baseR_FEC"), + __DEFINE_LINK_MODE_NAME(20000, MLD2, Full), + __DEFINE_LINK_MODE_NAME(20000, KR2, Full), + __DEFINE_LINK_MODE_NAME(40000, KR4, Full), + __DEFINE_LINK_MODE_NAME(40000, CR4, Full), + __DEFINE_LINK_MODE_NAME(40000, SR4, Full), + __DEFINE_LINK_MODE_NAME(40000, LR4, Full), + __DEFINE_LINK_MODE_NAME(56000, KR4, Full), + __DEFINE_LINK_MODE_NAME(56000, CR4, Full), + __DEFINE_LINK_MODE_NAME(56000, SR4, Full), + __DEFINE_LINK_MODE_NAME(56000, LR4, Full), + __DEFINE_LINK_MODE_NAME(25000, CR, Full), + __DEFINE_LINK_MODE_NAME(25000, KR, Full), + __DEFINE_LINK_MODE_NAME(25000, SR, Full), + __DEFINE_LINK_MODE_NAME(50000, CR2, Full), + __DEFINE_LINK_MODE_NAME(50000, KR2, Full), + __DEFINE_LINK_MODE_NAME(100000, KR4, Full), + __DEFINE_LINK_MODE_NAME(100000, SR4, Full), + __DEFINE_LINK_MODE_NAME(100000, CR4, Full), + __DEFINE_LINK_MODE_NAME(100000, LR4_ER4, Full), + __DEFINE_LINK_MODE_NAME(50000, SR2, Full), + __DEFINE_LINK_MODE_NAME(1000, X, Full), + __DEFINE_LINK_MODE_NAME(10000, CR, Full), + __DEFINE_LINK_MODE_NAME(10000, SR, Full), + 
__DEFINE_LINK_MODE_NAME(10000, LR, Full), + __DEFINE_LINK_MODE_NAME(10000, LRM, Full), + __DEFINE_LINK_MODE_NAME(10000, ER, Full), + __DEFINE_LINK_MODE_NAME(2500, T, Full), + __DEFINE_LINK_MODE_NAME(5000, T, Full), + __DEFINE_SPECIAL_MODE_NAME(FEC_NONE, "None"), + __DEFINE_SPECIAL_MODE_NAME(FEC_RS, "RS"), + __DEFINE_SPECIAL_MODE_NAME(FEC_BASER, "BASER"), + __DEFINE_LINK_MODE_NAME(50000, KR, Full), + __DEFINE_LINK_MODE_NAME(50000, SR, Full), + __DEFINE_LINK_MODE_NAME(50000, CR, Full), + __DEFINE_LINK_MODE_NAME(50000, LR_ER_FR, Full), + __DEFINE_LINK_MODE_NAME(50000, DR, Full), + __DEFINE_LINK_MODE_NAME(100000, KR2, Full), + __DEFINE_LINK_MODE_NAME(100000, SR2, Full), + __DEFINE_LINK_MODE_NAME(100000, CR2, Full), + __DEFINE_LINK_MODE_NAME(100000, LR2_ER2_FR2, Full), + __DEFINE_LINK_MODE_NAME(100000, DR2, Full), + __DEFINE_LINK_MODE_NAME(200000, KR4, Full), + __DEFINE_LINK_MODE_NAME(200000, SR4, Full), + __DEFINE_LINK_MODE_NAME(200000, LR4_ER4_FR4, Full), + __DEFINE_LINK_MODE_NAME(200000, DR4, Full), + __DEFINE_LINK_MODE_NAME(200000, CR4, Full), + __DEFINE_LINK_MODE_NAME(100, T1, Full), + __DEFINE_LINK_MODE_NAME(1000, T1, Full), + __DEFINE_LINK_MODE_NAME(400000, KR8, Full), + __DEFINE_LINK_MODE_NAME(400000, SR8, Full), + __DEFINE_LINK_MODE_NAME(400000, LR8_ER8_FR8, Full), + __DEFINE_LINK_MODE_NAME(400000, DR8, Full), + __DEFINE_LINK_MODE_NAME(400000, CR8, Full), +}; +static_assert(ARRAY_SIZE(link_mode_names) == __ETHTOOL_LINK_MODE_MASK_NBITS); diff --git a/net/ethtool/common.h b/net/ethtool/common.h index 336566430be4..bbb788908cb1 100644 --- a/net/ethtool/common.h +++ b/net/ethtool/common.h @@ -5,6 +5,10 @@ #include +/* compose link mode index from speed, type and duplex */ +#define ETHTOOL_LINK_MODE(speed, type, duplex) \ + ETHTOOL_LINK_MODE_ ## speed ## base ## type ## _ ## duplex ## _BIT + extern const char netdev_features_strings[NETDEV_FEATURE_COUNT][ETH_GSTRING_LEN]; extern const char @@ -13,5 +17,6 @@ extern const char tunable_strings[__ETHTOOL_TUNABLE_COUNT][ETH_GSTRING_LEN]; extern const char phy_tunable_strings[__ETHTOOL_PHY_TUNABLE_COUNT][ETH_GSTRING_LEN]; +extern const char link_mode_names[][ETH_GSTRING_LEN]; #endif /* _ETHTOOL_COMMON_H */ diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c index b262db5a1d91..aed2c2cf1623 100644 --- a/net/ethtool/ioctl.c +++ b/net/ethtool/ioctl.c @@ -154,6 +154,9 @@ static int __ethtool_get_sset_count(struct net_device *dev, int sset) !ops->get_ethtool_phy_stats) return phy_ethtool_get_sset_count(dev->phydev); + if (sset == ETH_SS_LINK_MODES) + return __ETHTOOL_LINK_MODE_MASK_NBITS; + if (ops->get_sset_count && ops->get_strings) return ops->get_sset_count(dev, sset); else @@ -178,6 +181,9 @@ static void __ethtool_get_strings(struct net_device *dev, else if (stringset == ETH_SS_PHY_STATS && dev->phydev && !ops->get_ethtool_phy_stats) phy_ethtool_get_strings(dev->phydev, data); + else if (stringset == ETH_SS_LINK_MODES) + memcpy(data, link_mode_names, + __ETHTOOL_LINK_MODE_MASK_NBITS * ETH_GSTRING_LEN); else /* ops->get_strings is valid because checked earlier */ ops->get_strings(dev, stringset, data); -- cgit v1.2.3-71-gd317 From 0290bd291cc0e0488e35e66bf39efcd7d9d9122b Mon Sep 17 00:00:00 2001 From: "Michael S. Tsirkin" Date: Tue, 10 Dec 2019 09:23:51 -0500 Subject: netdev: pass the stuck queue to the timeout handler This allows incrementing the correct timeout statistic without any mess. Down the road, devices can learn to reset just the specific queue. 
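As a sketch, the handler prototype drivers are converted to looks like this (foo_tx_timeout stands in for the per-driver handler names listed below):

	static void foo_tx_timeout(struct net_device *dev, unsigned int txqueue)
	{
		/* txqueue identifies the stuck TX queue, so the handler can
		 * bump the right per-queue counter and, eventually, reset
		 * just that queue
		 */
	}
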
The patch was generated with the following script: use strict; use warnings; our $^I = '.bak'; my @work = ( ["arch/m68k/emu/nfeth.c", "nfeth_tx_timeout"], ["arch/um/drivers/net_kern.c", "uml_net_tx_timeout"], ["arch/um/drivers/vector_kern.c", "vector_net_tx_timeout"], ["arch/xtensa/platforms/iss/network.c", "iss_net_tx_timeout"], ["drivers/char/pcmcia/synclink_cs.c", "hdlcdev_tx_timeout"], ["drivers/infiniband/ulp/ipoib/ipoib_main.c", "ipoib_timeout"], ["drivers/infiniband/ulp/ipoib/ipoib_main.c", "ipoib_timeout"], ["drivers/message/fusion/mptlan.c", "mpt_lan_tx_timeout"], ["drivers/misc/sgi-xp/xpnet.c", "xpnet_dev_tx_timeout"], ["drivers/net/appletalk/cops.c", "cops_timeout"], ["drivers/net/arcnet/arcdevice.h", "arcnet_timeout"], ["drivers/net/arcnet/arcnet.c", "arcnet_timeout"], ["drivers/net/arcnet/com20020.c", "arcnet_timeout"], ["drivers/net/ethernet/3com/3c509.c", "el3_tx_timeout"], ["drivers/net/ethernet/3com/3c515.c", "corkscrew_timeout"], ["drivers/net/ethernet/3com/3c574_cs.c", "el3_tx_timeout"], ["drivers/net/ethernet/3com/3c589_cs.c", "el3_tx_timeout"], ["drivers/net/ethernet/3com/3c59x.c", "vortex_tx_timeout"], ["drivers/net/ethernet/3com/3c59x.c", "vortex_tx_timeout"], ["drivers/net/ethernet/3com/typhoon.c", "typhoon_tx_timeout"], ["drivers/net/ethernet/8390/8390.h", "ei_tx_timeout"], ["drivers/net/ethernet/8390/8390.h", "eip_tx_timeout"], ["drivers/net/ethernet/8390/8390.c", "ei_tx_timeout"], ["drivers/net/ethernet/8390/8390p.c", "eip_tx_timeout"], ["drivers/net/ethernet/8390/ax88796.c", "ax_ei_tx_timeout"], ["drivers/net/ethernet/8390/axnet_cs.c", "axnet_tx_timeout"], ["drivers/net/ethernet/8390/etherh.c", "__ei_tx_timeout"], ["drivers/net/ethernet/8390/hydra.c", "__ei_tx_timeout"], ["drivers/net/ethernet/8390/mac8390.c", "__ei_tx_timeout"], ["drivers/net/ethernet/8390/mcf8390.c", "__ei_tx_timeout"], ["drivers/net/ethernet/8390/lib8390.c", "__ei_tx_timeout"], ["drivers/net/ethernet/8390/ne2k-pci.c", "ei_tx_timeout"], ["drivers/net/ethernet/8390/pcnet_cs.c", "ei_tx_timeout"], ["drivers/net/ethernet/8390/smc-ultra.c", "ei_tx_timeout"], ["drivers/net/ethernet/8390/wd.c", "ei_tx_timeout"], ["drivers/net/ethernet/8390/zorro8390.c", "__ei_tx_timeout"], ["drivers/net/ethernet/adaptec/starfire.c", "tx_timeout"], ["drivers/net/ethernet/agere/et131x.c", "et131x_tx_timeout"], ["drivers/net/ethernet/allwinner/sun4i-emac.c", "emac_timeout"], ["drivers/net/ethernet/alteon/acenic.c", "ace_watchdog"], ["drivers/net/ethernet/amazon/ena/ena_netdev.c", "ena_tx_timeout"], ["drivers/net/ethernet/amd/7990.h", "lance_tx_timeout"], ["drivers/net/ethernet/amd/7990.c", "lance_tx_timeout"], ["drivers/net/ethernet/amd/a2065.c", "lance_tx_timeout"], ["drivers/net/ethernet/amd/am79c961a.c", "am79c961_timeout"], ["drivers/net/ethernet/amd/amd8111e.c", "amd8111e_tx_timeout"], ["drivers/net/ethernet/amd/ariadne.c", "ariadne_tx_timeout"], ["drivers/net/ethernet/amd/atarilance.c", "lance_tx_timeout"], ["drivers/net/ethernet/amd/au1000_eth.c", "au1000_tx_timeout"], ["drivers/net/ethernet/amd/declance.c", "lance_tx_timeout"], ["drivers/net/ethernet/amd/lance.c", "lance_tx_timeout"], ["drivers/net/ethernet/amd/mvme147.c", "lance_tx_timeout"], ["drivers/net/ethernet/amd/ni65.c", "ni65_timeout"], ["drivers/net/ethernet/amd/nmclan_cs.c", "mace_tx_timeout"], ["drivers/net/ethernet/amd/pcnet32.c", "pcnet32_tx_timeout"], ["drivers/net/ethernet/amd/sunlance.c", "lance_tx_timeout"], ["drivers/net/ethernet/amd/xgbe/xgbe-drv.c", "xgbe_tx_timeout"], ["drivers/net/ethernet/apm/xgene-v2/main.c", "xge_timeout"], 
["drivers/net/ethernet/apm/xgene/xgene_enet_main.c", "xgene_enet_timeout"], ["drivers/net/ethernet/apple/macmace.c", "mace_tx_timeout"], ["drivers/net/ethernet/atheros/ag71xx.c", "ag71xx_tx_timeout"], ["drivers/net/ethernet/atheros/alx/main.c", "alx_tx_timeout"], ["drivers/net/ethernet/atheros/atl1c/atl1c_main.c", "atl1c_tx_timeout"], ["drivers/net/ethernet/atheros/atl1e/atl1e_main.c", "atl1e_tx_timeout"], ["drivers/net/ethernet/atheros/atlx/atl.c", "atlx_tx_timeout"], ["drivers/net/ethernet/atheros/atlx/atl1.c", "atlx_tx_timeout"], ["drivers/net/ethernet/atheros/atlx/atl2.c", "atl2_tx_timeout"], ["drivers/net/ethernet/broadcom/b44.c", "b44_tx_timeout"], ["drivers/net/ethernet/broadcom/bcmsysport.c", "bcm_sysport_tx_timeout"], ["drivers/net/ethernet/broadcom/bnx2.c", "bnx2_tx_timeout"], ["drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h", "bnx2x_tx_timeout"], ["drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c", "bnx2x_tx_timeout"], ["drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c", "bnx2x_tx_timeout"], ["drivers/net/ethernet/broadcom/bnxt/bnxt.c", "bnxt_tx_timeout"], ["drivers/net/ethernet/broadcom/genet/bcmgenet.c", "bcmgenet_timeout"], ["drivers/net/ethernet/broadcom/sb1250-mac.c", "sbmac_tx_timeout"], ["drivers/net/ethernet/broadcom/tg3.c", "tg3_tx_timeout"], ["drivers/net/ethernet/calxeda/xgmac.c", "xgmac_tx_timeout"], ["drivers/net/ethernet/cavium/liquidio/lio_main.c", "liquidio_tx_timeout"], ["drivers/net/ethernet/cavium/liquidio/lio_vf_main.c", "liquidio_tx_timeout"], ["drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c", "lio_vf_rep_tx_timeout"], ["drivers/net/ethernet/cavium/thunder/nicvf_main.c", "nicvf_tx_timeout"], ["drivers/net/ethernet/cirrus/cs89x0.c", "net_timeout"], ["drivers/net/ethernet/cisco/enic/enic_main.c", "enic_tx_timeout"], ["drivers/net/ethernet/cisco/enic/enic_main.c", "enic_tx_timeout"], ["drivers/net/ethernet/cortina/gemini.c", "gmac_tx_timeout"], ["drivers/net/ethernet/davicom/dm9000.c", "dm9000_timeout"], ["drivers/net/ethernet/dec/tulip/de2104x.c", "de_tx_timeout"], ["drivers/net/ethernet/dec/tulip/tulip_core.c", "tulip_tx_timeout"], ["drivers/net/ethernet/dec/tulip/winbond-840.c", "tx_timeout"], ["drivers/net/ethernet/dlink/dl2k.c", "rio_tx_timeout"], ["drivers/net/ethernet/dlink/sundance.c", "tx_timeout"], ["drivers/net/ethernet/emulex/benet/be_main.c", "be_tx_timeout"], ["drivers/net/ethernet/ethoc.c", "ethoc_tx_timeout"], ["drivers/net/ethernet/faraday/ftgmac100.c", "ftgmac100_tx_timeout"], ["drivers/net/ethernet/fealnx.c", "fealnx_tx_timeout"], ["drivers/net/ethernet/freescale/dpaa/dpaa_eth.c", "dpaa_tx_timeout"], ["drivers/net/ethernet/freescale/fec_main.c", "fec_timeout"], ["drivers/net/ethernet/freescale/fec_mpc52xx.c", "mpc52xx_fec_tx_timeout"], ["drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c", "fs_timeout"], ["drivers/net/ethernet/freescale/gianfar.c", "gfar_timeout"], ["drivers/net/ethernet/freescale/ucc_geth.c", "ucc_geth_timeout"], ["drivers/net/ethernet/fujitsu/fmvj18x_cs.c", "fjn_tx_timeout"], ["drivers/net/ethernet/google/gve/gve_main.c", "gve_tx_timeout"], ["drivers/net/ethernet/hisilicon/hip04_eth.c", "hip04_timeout"], ["drivers/net/ethernet/hisilicon/hix5hd2_gmac.c", "hix5hd2_net_timeout"], ["drivers/net/ethernet/hisilicon/hns/hns_enet.c", "hns_nic_net_timeout"], ["drivers/net/ethernet/hisilicon/hns3/hns3_enet.c", "hns3_nic_net_timeout"], ["drivers/net/ethernet/huawei/hinic/hinic_main.c", "hinic_tx_timeout"], ["drivers/net/ethernet/i825xx/82596.c", "i596_tx_timeout"], ["drivers/net/ethernet/i825xx/ether1.c", "ether1_timeout"], 
["drivers/net/ethernet/i825xx/lib82596.c", "i596_tx_timeout"], ["drivers/net/ethernet/i825xx/sun3_82586.c", "sun3_82586_timeout"], ["drivers/net/ethernet/ibm/ehea/ehea_main.c", "ehea_tx_watchdog"], ["drivers/net/ethernet/ibm/emac/core.c", "emac_tx_timeout"], ["drivers/net/ethernet/ibm/emac/core.c", "emac_tx_timeout"], ["drivers/net/ethernet/ibm/ibmvnic.c", "ibmvnic_tx_timeout"], ["drivers/net/ethernet/intel/e100.c", "e100_tx_timeout"], ["drivers/net/ethernet/intel/e1000/e1000_main.c", "e1000_tx_timeout"], ["drivers/net/ethernet/intel/e1000e/netdev.c", "e1000_tx_timeout"], ["drivers/net/ethernet/intel/fm10k/fm10k_netdev.c", "fm10k_tx_timeout"], ["drivers/net/ethernet/intel/i40e/i40e_main.c", "i40e_tx_timeout"], ["drivers/net/ethernet/intel/iavf/iavf_main.c", "iavf_tx_timeout"], ["drivers/net/ethernet/intel/ice/ice_main.c", "ice_tx_timeout"], ["drivers/net/ethernet/intel/ice/ice_main.c", "ice_tx_timeout"], ["drivers/net/ethernet/intel/igb/igb_main.c", "igb_tx_timeout"], ["drivers/net/ethernet/intel/igbvf/netdev.c", "igbvf_tx_timeout"], ["drivers/net/ethernet/intel/ixgb/ixgb_main.c", "ixgb_tx_timeout"], ["drivers/net/ethernet/intel/ixgbe/ixgbe_debugfs.c", "adapter->netdev->netdev_ops->ndo_tx_timeout(adapter->netdev);"], ["drivers/net/ethernet/intel/ixgbe/ixgbe_main.c", "ixgbe_tx_timeout"], ["drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c", "ixgbevf_tx_timeout"], ["drivers/net/ethernet/jme.c", "jme_tx_timeout"], ["drivers/net/ethernet/korina.c", "korina_tx_timeout"], ["drivers/net/ethernet/lantiq_etop.c", "ltq_etop_tx_timeout"], ["drivers/net/ethernet/marvell/mv643xx_eth.c", "mv643xx_eth_tx_timeout"], ["drivers/net/ethernet/marvell/pxa168_eth.c", "pxa168_eth_tx_timeout"], ["drivers/net/ethernet/marvell/skge.c", "skge_tx_timeout"], ["drivers/net/ethernet/marvell/sky2.c", "sky2_tx_timeout"], ["drivers/net/ethernet/marvell/sky2.c", "sky2_tx_timeout"], ["drivers/net/ethernet/mediatek/mtk_eth_soc.c", "mtk_tx_timeout"], ["drivers/net/ethernet/mellanox/mlx4/en_netdev.c", "mlx4_en_tx_timeout"], ["drivers/net/ethernet/mellanox/mlx4/en_netdev.c", "mlx4_en_tx_timeout"], ["drivers/net/ethernet/mellanox/mlx5/core/en_main.c", "mlx5e_tx_timeout"], ["drivers/net/ethernet/micrel/ks8842.c", "ks8842_tx_timeout"], ["drivers/net/ethernet/micrel/ksz884x.c", "netdev_tx_timeout"], ["drivers/net/ethernet/microchip/enc28j60.c", "enc28j60_tx_timeout"], ["drivers/net/ethernet/microchip/encx24j600.c", "encx24j600_tx_timeout"], ["drivers/net/ethernet/natsemi/sonic.h", "sonic_tx_timeout"], ["drivers/net/ethernet/natsemi/sonic.c", "sonic_tx_timeout"], ["drivers/net/ethernet/natsemi/jazzsonic.c", "sonic_tx_timeout"], ["drivers/net/ethernet/natsemi/macsonic.c", "sonic_tx_timeout"], ["drivers/net/ethernet/natsemi/natsemi.c", "ns_tx_timeout"], ["drivers/net/ethernet/natsemi/ns83820.c", "ns83820_tx_timeout"], ["drivers/net/ethernet/natsemi/xtsonic.c", "sonic_tx_timeout"], ["drivers/net/ethernet/neterion/s2io.h", "s2io_tx_watchdog"], ["drivers/net/ethernet/neterion/s2io.c", "s2io_tx_watchdog"], ["drivers/net/ethernet/neterion/vxge/vxge-main.c", "vxge_tx_watchdog"], ["drivers/net/ethernet/netronome/nfp/nfp_net_common.c", "nfp_net_tx_timeout"], ["drivers/net/ethernet/nvidia/forcedeth.c", "nv_tx_timeout"], ["drivers/net/ethernet/nvidia/forcedeth.c", "nv_tx_timeout"], ["drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c", "pch_gbe_tx_timeout"], ["drivers/net/ethernet/packetengines/hamachi.c", "hamachi_tx_timeout"], ["drivers/net/ethernet/packetengines/yellowfin.c", "yellowfin_tx_timeout"], 
["drivers/net/ethernet/pensando/ionic/ionic_lif.c", "ionic_tx_timeout"], ["drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c", "netxen_tx_timeout"], ["drivers/net/ethernet/qlogic/qla3xxx.c", "ql3xxx_tx_timeout"], ["drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c", "qlcnic_tx_timeout"], ["drivers/net/ethernet/qualcomm/emac/emac.c", "emac_tx_timeout"], ["drivers/net/ethernet/qualcomm/qca_spi.c", "qcaspi_netdev_tx_timeout"], ["drivers/net/ethernet/qualcomm/qca_uart.c", "qcauart_netdev_tx_timeout"], ["drivers/net/ethernet/rdc/r6040.c", "r6040_tx_timeout"], ["drivers/net/ethernet/realtek/8139cp.c", "cp_tx_timeout"], ["drivers/net/ethernet/realtek/8139too.c", "rtl8139_tx_timeout"], ["drivers/net/ethernet/realtek/atp.c", "tx_timeout"], ["drivers/net/ethernet/realtek/r8169_main.c", "rtl8169_tx_timeout"], ["drivers/net/ethernet/renesas/ravb_main.c", "ravb_tx_timeout"], ["drivers/net/ethernet/renesas/sh_eth.c", "sh_eth_tx_timeout"], ["drivers/net/ethernet/renesas/sh_eth.c", "sh_eth_tx_timeout"], ["drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c", "sxgbe_tx_timeout"], ["drivers/net/ethernet/seeq/ether3.c", "ether3_timeout"], ["drivers/net/ethernet/seeq/sgiseeq.c", "timeout"], ["drivers/net/ethernet/sfc/efx.c", "efx_watchdog"], ["drivers/net/ethernet/sfc/falcon/efx.c", "ef4_watchdog"], ["drivers/net/ethernet/sgi/ioc3-eth.c", "ioc3_timeout"], ["drivers/net/ethernet/sgi/meth.c", "meth_tx_timeout"], ["drivers/net/ethernet/silan/sc92031.c", "sc92031_tx_timeout"], ["drivers/net/ethernet/sis/sis190.c", "sis190_tx_timeout"], ["drivers/net/ethernet/sis/sis900.c", "sis900_tx_timeout"], ["drivers/net/ethernet/smsc/epic100.c", "epic_tx_timeout"], ["drivers/net/ethernet/smsc/smc911x.c", "smc911x_timeout"], ["drivers/net/ethernet/smsc/smc9194.c", "smc_timeout"], ["drivers/net/ethernet/smsc/smc91c92_cs.c", "smc_tx_timeout"], ["drivers/net/ethernet/smsc/smc91x.c", "smc_timeout"], ["drivers/net/ethernet/stmicro/stmmac/stmmac_main.c", "stmmac_tx_timeout"], ["drivers/net/ethernet/sun/cassini.c", "cas_tx_timeout"], ["drivers/net/ethernet/sun/ldmvsw.c", "sunvnet_tx_timeout_common"], ["drivers/net/ethernet/sun/niu.c", "niu_tx_timeout"], ["drivers/net/ethernet/sun/sunbmac.c", "bigmac_tx_timeout"], ["drivers/net/ethernet/sun/sungem.c", "gem_tx_timeout"], ["drivers/net/ethernet/sun/sunhme.c", "happy_meal_tx_timeout"], ["drivers/net/ethernet/sun/sunqe.c", "qe_tx_timeout"], ["drivers/net/ethernet/sun/sunvnet.c", "sunvnet_tx_timeout_common"], ["drivers/net/ethernet/sun/sunvnet_common.c", "sunvnet_tx_timeout_common"], ["drivers/net/ethernet/sun/sunvnet_common.h", "sunvnet_tx_timeout_common"], ["drivers/net/ethernet/synopsys/dwc-xlgmac-net.c", "xlgmac_tx_timeout"], ["drivers/net/ethernet/ti/cpmac.c", "cpmac_tx_timeout"], ["drivers/net/ethernet/ti/cpsw.c", "cpsw_ndo_tx_timeout"], ["drivers/net/ethernet/ti/cpsw_priv.c", "cpsw_ndo_tx_timeout"], ["drivers/net/ethernet/ti/cpsw_priv.h", "cpsw_ndo_tx_timeout"], ["drivers/net/ethernet/ti/davinci_emac.c", "emac_dev_tx_timeout"], ["drivers/net/ethernet/ti/netcp_core.c", "netcp_ndo_tx_timeout"], ["drivers/net/ethernet/ti/tlan.c", "tlan_tx_timeout"], ["drivers/net/ethernet/toshiba/ps3_gelic_net.h", "gelic_net_tx_timeout"], ["drivers/net/ethernet/toshiba/ps3_gelic_net.c", "gelic_net_tx_timeout"], ["drivers/net/ethernet/toshiba/ps3_gelic_wireless.c", "gelic_net_tx_timeout"], ["drivers/net/ethernet/toshiba/spider_net.c", "spider_net_tx_timeout"], ["drivers/net/ethernet/toshiba/tc35815.c", "tc35815_tx_timeout"], ["drivers/net/ethernet/via/via-rhine.c", "rhine_tx_timeout"], 
["drivers/net/ethernet/wiznet/w5100.c", "w5100_tx_timeout"], ["drivers/net/ethernet/wiznet/w5300.c", "w5300_tx_timeout"], ["drivers/net/ethernet/xilinx/xilinx_emaclite.c", "xemaclite_tx_timeout"], ["drivers/net/ethernet/xircom/xirc2ps_cs.c", "xirc_tx_timeout"], ["drivers/net/fjes/fjes_main.c", "fjes_tx_retry"], ["drivers/net/slip/slip.c", "sl_tx_timeout"], ["include/linux/usb/usbnet.h", "usbnet_tx_timeout"], ["drivers/net/usb/aqc111.c", "usbnet_tx_timeout"], ["drivers/net/usb/asix_devices.c", "usbnet_tx_timeout"], ["drivers/net/usb/asix_devices.c", "usbnet_tx_timeout"], ["drivers/net/usb/asix_devices.c", "usbnet_tx_timeout"], ["drivers/net/usb/ax88172a.c", "usbnet_tx_timeout"], ["drivers/net/usb/ax88179_178a.c", "usbnet_tx_timeout"], ["drivers/net/usb/catc.c", "catc_tx_timeout"], ["drivers/net/usb/cdc_mbim.c", "usbnet_tx_timeout"], ["drivers/net/usb/cdc_ncm.c", "usbnet_tx_timeout"], ["drivers/net/usb/dm9601.c", "usbnet_tx_timeout"], ["drivers/net/usb/hso.c", "hso_net_tx_timeout"], ["drivers/net/usb/int51x1.c", "usbnet_tx_timeout"], ["drivers/net/usb/ipheth.c", "ipheth_tx_timeout"], ["drivers/net/usb/kaweth.c", "kaweth_tx_timeout"], ["drivers/net/usb/lan78xx.c", "lan78xx_tx_timeout"], ["drivers/net/usb/mcs7830.c", "usbnet_tx_timeout"], ["drivers/net/usb/pegasus.c", "pegasus_tx_timeout"], ["drivers/net/usb/qmi_wwan.c", "usbnet_tx_timeout"], ["drivers/net/usb/r8152.c", "rtl8152_tx_timeout"], ["drivers/net/usb/rndis_host.c", "usbnet_tx_timeout"], ["drivers/net/usb/rtl8150.c", "rtl8150_tx_timeout"], ["drivers/net/usb/sierra_net.c", "usbnet_tx_timeout"], ["drivers/net/usb/smsc75xx.c", "usbnet_tx_timeout"], ["drivers/net/usb/smsc95xx.c", "usbnet_tx_timeout"], ["drivers/net/usb/sr9700.c", "usbnet_tx_timeout"], ["drivers/net/usb/sr9800.c", "usbnet_tx_timeout"], ["drivers/net/usb/usbnet.c", "usbnet_tx_timeout"], ["drivers/net/vmxnet3/vmxnet3_drv.c", "vmxnet3_tx_timeout"], ["drivers/net/wan/cosa.c", "cosa_net_timeout"], ["drivers/net/wan/farsync.c", "fst_tx_timeout"], ["drivers/net/wan/fsl_ucc_hdlc.c", "uhdlc_tx_timeout"], ["drivers/net/wan/lmc/lmc_main.c", "lmc_driver_timeout"], ["drivers/net/wan/x25_asy.c", "x25_asy_timeout"], ["drivers/net/wimax/i2400m/netdev.c", "i2400m_tx_timeout"], ["drivers/net/wireless/intel/ipw2x00/ipw2100.c", "ipw2100_tx_timeout"], ["drivers/net/wireless/intersil/hostap/hostap_main.c", "prism2_tx_timeout"], ["drivers/net/wireless/intersil/hostap/hostap_main.c", "prism2_tx_timeout"], ["drivers/net/wireless/intersil/hostap/hostap_main.c", "prism2_tx_timeout"], ["drivers/net/wireless/intersil/orinoco/main.c", "orinoco_tx_timeout"], ["drivers/net/wireless/intersil/orinoco/orinoco_usb.c", "orinoco_tx_timeout"], ["drivers/net/wireless/intersil/orinoco/orinoco.h", "orinoco_tx_timeout"], ["drivers/net/wireless/intersil/prism54/islpci_dev.c", "islpci_eth_tx_timeout"], ["drivers/net/wireless/intersil/prism54/islpci_eth.c", "islpci_eth_tx_timeout"], ["drivers/net/wireless/intersil/prism54/islpci_eth.h", "islpci_eth_tx_timeout"], ["drivers/net/wireless/marvell/mwifiex/main.c", "mwifiex_tx_timeout"], ["drivers/net/wireless/quantenna/qtnfmac/core.c", "qtnf_netdev_tx_timeout"], ["drivers/net/wireless/quantenna/qtnfmac/core.h", "qtnf_netdev_tx_timeout"], ["drivers/net/wireless/rndis_wlan.c", "usbnet_tx_timeout"], ["drivers/net/wireless/wl3501_cs.c", "wl3501_tx_timeout"], ["drivers/net/wireless/zydas/zd1201.c", "zd1201_tx_timeout"], ["drivers/s390/net/qeth_core.h", "qeth_tx_timeout"], ["drivers/s390/net/qeth_core_main.c", "qeth_tx_timeout"], ["drivers/s390/net/qeth_l2_main.c", 
"qeth_tx_timeout"], ["drivers/s390/net/qeth_l2_main.c", "qeth_tx_timeout"], ["drivers/s390/net/qeth_l3_main.c", "qeth_tx_timeout"], ["drivers/s390/net/qeth_l3_main.c", "qeth_tx_timeout"], ["drivers/staging/ks7010/ks_wlan_net.c", "ks_wlan_tx_timeout"], ["drivers/staging/qlge/qlge_main.c", "qlge_tx_timeout"], ["drivers/staging/rtl8192e/rtl8192e/rtl_core.c", "_rtl92e_tx_timeout"], ["drivers/staging/rtl8192u/r8192U_core.c", "tx_timeout"], ["drivers/staging/unisys/visornic/visornic_main.c", "visornic_xmit_timeout"], ["drivers/staging/wlan-ng/p80211netdev.c", "p80211knetdev_tx_timeout"], ["drivers/tty/n_gsm.c", "gsm_mux_net_tx_timeout"], ["drivers/tty/synclink.c", "hdlcdev_tx_timeout"], ["drivers/tty/synclink_gt.c", "hdlcdev_tx_timeout"], ["drivers/tty/synclinkmp.c", "hdlcdev_tx_timeout"], ["net/atm/lec.c", "lec_tx_timeout"], ["net/bluetooth/bnep/netdev.c", "bnep_net_timeout"] ); for my $p (@work) { my @pair = @$p; my $file = $pair[0]; my $func = $pair[1]; print STDERR $file , ": ", $func,"\n"; our @ARGV = ($file); while () { if (m/($func\s*\(struct\s+net_device\s+\*[A-Za-z_]?[A-Za-z-0-9_]*)(\))/) { print STDERR "found $1+$2 in $file\n"; } if (s/($func\s*\(struct\s+net_device\s+\*[A-Za-z_]?[A-Za-z-0-9_]*)(\))/$1, unsigned int txqueue$2/) { print STDERR "$func found in $file\n"; } print; } } where the list of files and functions is simply from: git grep ndo_tx_timeout, with manual addition of headers in the rare cases where the function is from a header, then manually changing the few places which actually call ndo_tx_timeout. Signed-off-by: Michael S. Tsirkin Acked-by: Heiner Kallweit Acked-by: Jakub Kicinski Acked-by: Shannon Nelson Reviewed-by: Martin Habets changes from v9: fixup a forward declaration changes from v9: more leftovers from v3 change changes from v8: fix up a missing direct call to timeout rebased on net-next changes from v7: fixup leftovers from v3 change changes from v6: fix typo in rtl driver changes from v5: add missing files (allow any net device argument name) changes from v4: add a missing driver header changes from v3: change queue # to unsigned Changes from v2: added headers Changes from v1: Fix errors found by kbuild: generalize the pattern a bit, to pick up a couple of instances missed by the previous version. Signed-off-by: David S. 
 arch/m68k/emu/nfeth.c | 2 +-
 arch/um/drivers/net_kern.c | 2 +-
 arch/um/drivers/vector_kern.c | 2 +-
 arch/xtensa/platforms/iss/network.c | 2 +-
 drivers/char/pcmcia/synclink_cs.c | 2 +-
 drivers/infiniband/ulp/ipoib/ipoib_main.c | 2 +-
 drivers/message/fusion/mptlan.c | 2 +-
 drivers/misc/sgi-xp/xpnet.c | 2 +-
 drivers/net/appletalk/cops.c | 4 ++--
 drivers/net/arcnet/arcdevice.h | 2 +-
 drivers/net/arcnet/arcnet.c | 2 +-
 drivers/net/ethernet/3com/3c509.c | 4 ++--
 drivers/net/ethernet/3com/3c515.c | 4 ++--
 drivers/net/ethernet/3com/3c574_cs.c | 4 ++--
 drivers/net/ethernet/3com/3c589_cs.c | 4 ++--
 drivers/net/ethernet/3com/3c59x.c | 4 ++--
 drivers/net/ethernet/3com/typhoon.c | 2 +-
 drivers/net/ethernet/8390/8390.c | 4 ++--
 drivers/net/ethernet/8390/8390.h | 4 ++--
 drivers/net/ethernet/8390/8390p.c | 4 ++--
 drivers/net/ethernet/8390/axnet_cs.c | 4 ++--
 drivers/net/ethernet/8390/lib8390.c | 2 +-
 drivers/net/ethernet/adaptec/starfire.c | 4 ++--
 drivers/net/ethernet/agere/et131x.c | 2 +-
 drivers/net/ethernet/allwinner/sun4i-emac.c | 2 +-
 drivers/net/ethernet/alteon/acenic.c | 4 ++--
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 2 +-
 drivers/net/ethernet/amd/7990.c | 2 +-
 drivers/net/ethernet/amd/7990.h | 2 +-
 drivers/net/ethernet/amd/a2065.c | 2 +-
 drivers/net/ethernet/amd/am79c961a.c | 2 +-
 drivers/net/ethernet/amd/amd8111e.c | 2 +-
 drivers/net/ethernet/amd/ariadne.c | 2 +-
 drivers/net/ethernet/amd/atarilance.c | 4 ++--
 drivers/net/ethernet/amd/au1000_eth.c | 2 +-
 drivers/net/ethernet/amd/declance.c | 2 +-
 drivers/net/ethernet/amd/lance.c | 4 ++--
 drivers/net/ethernet/amd/ni65.c | 4 ++--
 drivers/net/ethernet/amd/nmclan_cs.c | 4 ++--
 drivers/net/ethernet/amd/pcnet32.c | 4 ++--
 drivers/net/ethernet/amd/sunlance.c | 2 +-
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c | 2 +-
 drivers/net/ethernet/apm/xgene-v2/main.c | 2 +-
 drivers/net/ethernet/apm/xgene/xgene_enet_main.c | 2 +-
 drivers/net/ethernet/apple/macmace.c | 4 ++--
 drivers/net/ethernet/atheros/ag71xx.c | 2 +-
 drivers/net/ethernet/atheros/alx/main.c | 2 +-
 drivers/net/ethernet/atheros/atl1c/atl1c_main.c | 2 +-
 drivers/net/ethernet/atheros/atl1e/atl1e_main.c | 2 +-
 drivers/net/ethernet/atheros/atlx/atl2.c | 2 +-
 drivers/net/ethernet/atheros/atlx/atlx.c | 2 +-
 drivers/net/ethernet/broadcom/b44.c | 2 +-
 drivers/net/ethernet/broadcom/bcmsysport.c | 2 +-
 drivers/net/ethernet/broadcom/bnx2.c | 2 +-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c | 2 +-
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h | 2 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 2 +-
 drivers/net/ethernet/broadcom/genet/bcmgenet.c | 2 +-
 drivers/net/ethernet/broadcom/sb1250-mac.c | 4 ++--
 drivers/net/ethernet/broadcom/tg3.c | 2 +-
 drivers/net/ethernet/calxeda/xgmac.c | 2 +-
 drivers/net/ethernet/cavium/liquidio/lio_main.c | 2 +-
 drivers/net/ethernet/cavium/liquidio/lio_vf_main.c | 2 +-
 drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c | 4 ++--
 drivers/net/ethernet/cavium/thunder/nicvf_main.c | 2 +-
 drivers/net/ethernet/cirrus/cs89x0.c | 2 +-
 drivers/net/ethernet/cisco/enic/enic_main.c | 2 +-
 drivers/net/ethernet/cortina/gemini.c | 2 +-
 drivers/net/ethernet/davicom/dm9000.c | 2 +-
 drivers/net/ethernet/dec/tulip/de2104x.c | 2 +-
 drivers/net/ethernet/dec/tulip/tulip_core.c | 4 ++--
 drivers/net/ethernet/dec/tulip/winbond-840.c | 4 ++--
 drivers/net/ethernet/dlink/dl2k.c | 4 ++--
 drivers/net/ethernet/dlink/sundance.c | 4 ++--
 drivers/net/ethernet/emulex/benet/be_main.c | 2 +-
 drivers/net/ethernet/ethoc.c | 2 +-
 drivers/net/ethernet/faraday/ftgmac100.c | 2 +-
 drivers/net/ethernet/fealnx.c | 4 ++--
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 2 +-
 drivers/net/ethernet/freescale/fec_main.c | 2 +-
 drivers/net/ethernet/freescale/fec_mpc52xx.c | 2 +-
 drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c | 2 +-
 drivers/net/ethernet/freescale/gianfar.c | 2 +-
 drivers/net/ethernet/freescale/ucc_geth.c | 2 +-
 drivers/net/ethernet/fujitsu/fmvj18x_cs.c | 4 ++--
 drivers/net/ethernet/google/gve/gve_main.c | 2 +-
 drivers/net/ethernet/hisilicon/hip04_eth.c | 2 +-
 drivers/net/ethernet/hisilicon/hix5hd2_gmac.c | 2 +-
 drivers/net/ethernet/hisilicon/hns/hns_enet.c | 2 +-
 drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 2 +-
 drivers/net/ethernet/huawei/hinic/hinic_main.c | 2 +-
 drivers/net/ethernet/i825xx/82596.c | 4 ++--
 drivers/net/ethernet/i825xx/ether1.c | 4 ++--
 drivers/net/ethernet/i825xx/lib82596.c | 4 ++--
 drivers/net/ethernet/i825xx/sun3_82586.c | 4 ++--
 drivers/net/ethernet/ibm/ehea/ehea_main.c | 2 +-
 drivers/net/ethernet/ibm/emac/core.c | 2 +-
 drivers/net/ethernet/ibm/ibmvnic.c | 2 +-
 drivers/net/ethernet/intel/e100.c | 2 +-
 drivers/net/ethernet/intel/e1000/e1000_main.c | 4 ++--
 drivers/net/ethernet/intel/e1000e/netdev.c | 2 +-
 drivers/net/ethernet/intel/fm10k/fm10k_netdev.c | 2 +-
 drivers/net/ethernet/intel/i40e/i40e_main.c | 2 +-
 drivers/net/ethernet/intel/iavf/iavf_main.c | 2 +-
 drivers/net/ethernet/intel/ice/ice_main.c | 2 +-
 drivers/net/ethernet/intel/igb/igb_main.c | 4 ++--
 drivers/net/ethernet/intel/igbvf/netdev.c | 2 +-
 drivers/net/ethernet/intel/ixgb/ixgb_main.c | 4 ++--
 drivers/net/ethernet/intel/ixgbe/ixgbe_debugfs.c | 4 +++-
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 2 +-
 drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 2 +-
 drivers/net/ethernet/jme.c | 2 +-
 drivers/net/ethernet/korina.c | 2 +-
 drivers/net/ethernet/lantiq_etop.c | 2 +-
 drivers/net/ethernet/marvell/mv643xx_eth.c | 2 +-
 drivers/net/ethernet/marvell/pxa168_eth.c | 2 +-
 drivers/net/ethernet/marvell/skge.c | 2 +-
 drivers/net/ethernet/marvell/sky2.c | 2 +-
 drivers/net/ethernet/mediatek/mtk_eth_soc.c | 2 +-
 drivers/net/ethernet/mellanox/mlx4/en_netdev.c | 2 +-
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 +-
 drivers/net/ethernet/micrel/ks8842.c | 2 +-
 drivers/net/ethernet/micrel/ksz884x.c | 2 +-
 drivers/net/ethernet/microchip/enc28j60.c | 2 +-
 drivers/net/ethernet/microchip/encx24j600.c | 2 +-
 drivers/net/ethernet/natsemi/natsemi.c | 4 ++--
 drivers/net/ethernet/natsemi/ns83820.c | 4 ++--
 drivers/net/ethernet/natsemi/sonic.c | 2 +-
 drivers/net/ethernet/natsemi/sonic.h | 2 +-
 drivers/net/ethernet/neterion/s2io.c | 2 +-
 drivers/net/ethernet/neterion/s2io.h | 2 +-
 drivers/net/ethernet/neterion/vxge/vxge-main.c | 2 +-
 drivers/net/ethernet/netronome/nfp/nfp_net_common.c | 2 +-
 drivers/net/ethernet/nvidia/forcedeth.c | 2 +-
 drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c | 2 +-
 drivers/net/ethernet/packetengines/hamachi.c | 4 ++--
 drivers/net/ethernet/packetengines/yellowfin.c | 4 ++--
 drivers/net/ethernet/pensando/ionic/ionic_lif.c | 2 +-
 drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c | 4 ++--
 drivers/net/ethernet/qlogic/qla3xxx.c | 2 +-
 drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c | 4 ++--
 drivers/net/ethernet/qualcomm/emac/emac.c | 2 +-
 drivers/net/ethernet/qualcomm/qca_spi.c | 2 +-
 drivers/net/ethernet/qualcomm/qca_uart.c | 2 +-
 drivers/net/ethernet/rdc/r6040.c | 2 +-
 drivers/net/ethernet/realtek/8139cp.c | 2 +-
 drivers/net/ethernet/realtek/8139too.c | 4 ++--
 drivers/net/ethernet/realtek/atp.c | 4 ++--
 drivers/net/ethernet/realtek/r8169_main.c | 2 +-
 drivers/net/ethernet/renesas/ravb_main.c | 2 +-
 drivers/net/ethernet/renesas/sh_eth.c | 2 +-
 drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c | 2 +-
 drivers/net/ethernet/seeq/ether3.c | 4 ++--
 drivers/net/ethernet/seeq/sgiseeq.c | 2 +-
 drivers/net/ethernet/sfc/efx.c | 2 +-
 drivers/net/ethernet/sfc/falcon/efx.c | 2 +-
 drivers/net/ethernet/sgi/ioc3-eth.c | 4 ++--
 drivers/net/ethernet/sgi/meth.c | 4 ++--
 drivers/net/ethernet/silan/sc92031.c | 2 +-
 drivers/net/ethernet/sis/sis190.c | 2 +-
 drivers/net/ethernet/sis/sis900.c | 4 ++--
 drivers/net/ethernet/smsc/epic100.c | 4 ++--
 drivers/net/ethernet/smsc/smc911x.c | 2 +-
 drivers/net/ethernet/smsc/smc9194.c | 4 ++--
 drivers/net/ethernet/smsc/smc91c92_cs.c | 4 ++--
 drivers/net/ethernet/smsc/smc91x.c | 2 +-
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 2 +-
 drivers/net/ethernet/sun/cassini.c | 2 +-
 drivers/net/ethernet/sun/niu.c | 2 +-
 drivers/net/ethernet/sun/sunbmac.c | 2 +-
 drivers/net/ethernet/sun/sungem.c | 2 +-
 drivers/net/ethernet/sun/sunhme.c | 2 +-
 drivers/net/ethernet/sun/sunqe.c | 2 +-
 drivers/net/ethernet/sun/sunvnet_common.c | 2 +-
 drivers/net/ethernet/sun/sunvnet_common.h | 2 +-
 drivers/net/ethernet/synopsys/dwc-xlgmac-net.c | 2 +-
 drivers/net/ethernet/ti/cpmac.c | 2 +-
 drivers/net/ethernet/ti/cpsw_priv.c | 2 +-
 drivers/net/ethernet/ti/cpsw_priv.h | 2 +-
 drivers/net/ethernet/ti/davinci_emac.c | 2 +-
 drivers/net/ethernet/ti/netcp_core.c | 2 +-
 drivers/net/ethernet/ti/tlan.c | 6 +++---
 drivers/net/ethernet/toshiba/ps3_gelic_net.c | 2 +-
 drivers/net/ethernet/toshiba/ps3_gelic_net.h | 2 +-
 drivers/net/ethernet/toshiba/spider_net.c | 2 +-
 drivers/net/ethernet/toshiba/tc35815.c | 4 ++--
 drivers/net/ethernet/via/via-rhine.c | 4 ++--
 drivers/net/ethernet/wiznet/w5100.c | 2 +-
 drivers/net/ethernet/wiznet/w5300.c | 2 +-
 drivers/net/ethernet/xilinx/xilinx_emaclite.c | 2 +-
 drivers/net/ethernet/xircom/xirc2ps_cs.c | 4 ++--
 drivers/net/fjes/fjes_main.c | 4 ++--
 drivers/net/slip/slip.c | 2 +-
 drivers/net/usb/catc.c | 2 +-
 drivers/net/usb/hso.c | 2 +-
 drivers/net/usb/ipheth.c | 2 +-
 drivers/net/usb/kaweth.c | 2 +-
 drivers/net/usb/lan78xx.c | 2 +-
 drivers/net/usb/pegasus.c | 2 +-
 drivers/net/usb/r8152.c | 2 +-
 drivers/net/usb/rtl8150.c | 2 +-
 drivers/net/usb/usbnet.c | 2 +-
 drivers/net/vmxnet3/vmxnet3_drv.c | 2 +-
 drivers/net/wan/cosa.c | 4 ++--
 drivers/net/wan/farsync.c | 2 +-
 drivers/net/wan/fsl_ucc_hdlc.c | 2 +-
 drivers/net/wan/lmc/lmc_main.c | 4 ++--
 drivers/net/wan/x25_asy.c | 2 +-
 drivers/net/wimax/i2400m/netdev.c | 2 +-
 drivers/net/wireless/intel/ipw2x00/ipw2100.c | 2 +-
 drivers/net/wireless/intersil/hostap/hostap_main.c | 2 +-
 drivers/net/wireless/intersil/orinoco/main.c | 2 +-
 drivers/net/wireless/intersil/orinoco/orinoco.h | 2 +-
 drivers/net/wireless/intersil/prism54/islpci_eth.c | 2 +-
 drivers/net/wireless/intersil/prism54/islpci_eth.h | 2 +-
 drivers/net/wireless/marvell/mwifiex/main.c | 2 +-
 drivers/net/wireless/quantenna/qtnfmac/core.c | 2 +-
 drivers/net/wireless/wl3501_cs.c | 2 +-
 drivers/net/wireless/zydas/zd1201.c | 2 +-
 drivers/s390/net/qeth_core.h | 2 +-
 drivers/s390/net/qeth_core_main.c | 2 +-
 drivers/staging/ks7010/ks_wlan_net.c | 4 ++--
 drivers/staging/qlge/qlge_main.c | 2 +-
 drivers/staging/rtl8192e/rtl8192e/rtl_core.c | 2 +-
 drivers/staging/rtl8192u/r8192U_core.c | 2 +-
 drivers/staging/unisys/visornic/visornic_main.c | 2 +-
 drivers/staging/wlan-ng/p80211netdev.c | 4 ++--
 drivers/tty/n_gsm.c | 2 +-
 drivers/tty/synclink.c | 2 +-
 drivers/tty/synclink_gt.c | 2 +-
 drivers/tty/synclinkmp.c | 2 +-
 include/linux/netdevice.h | 5 +++--
 include/linux/usb/usbnet.h | 2 +-
 net/atm/lec.c | 2 +-
 net/bluetooth/bnep/netdev.c | 2 +-
 net/sched/sch_generic.c | 2 +-
 236 files changed, 298 insertions(+), 295 deletions(-)

diff --git a/arch/m68k/emu/nfeth.c b/arch/m68k/emu/nfeth.c
index a4ebd2445eda..d2875e32abfc 100644
--- a/arch/m68k/emu/nfeth.c
+++ b/arch/m68k/emu/nfeth.c
@@ -167,7 +167,7 @@ static int nfeth_xmit(struct sk_buff *skb, struct net_device *dev)
 	return 0;
 }
 
-static void nfeth_tx_timeout(struct net_device *dev)
+static void nfeth_tx_timeout(struct net_device *dev, unsigned int txqueue)
 {
 	dev->stats.tx_errors++;
 	netif_wake_queue(dev);
diff --git a/arch/um/drivers/net_kern.c b/arch/um/drivers/net_kern.c
index 327b728f7244..35ebeebfc1a8 100644
--- a/arch/um/drivers/net_kern.c
+++ b/arch/um/drivers/net_kern.c
@@ -247,7 +247,7 @@ static void uml_net_set_multicast_list(struct net_device *dev)
 	return;
 }
 
-static void uml_net_tx_timeout(struct net_device *dev)
+static void uml_net_tx_timeout(struct net_device *dev, unsigned int txqueue)
 {
 	netif_trans_update(dev);
 	netif_wake_queue(dev);
diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c
index 92617e16829e..0ff86391f77d 100644
--- a/arch/um/drivers/vector_kern.c
+++ b/arch/um/drivers/vector_kern.c
@@ -1332,7 +1332,7 @@ static void vector_net_set_multicast_list(struct net_device *dev)
 	return;
 }
 
-static void vector_net_tx_timeout(struct net_device *dev)
+static void vector_net_tx_timeout(struct net_device *dev, unsigned int txqueue)
 {
 	struct vector_private *vp = netdev_priv(dev);
 
diff --git a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c
index fa9f3893b002..4986226a5ab2 100644
--- a/arch/xtensa/platforms/iss/network.c
+++ b/arch/xtensa/platforms/iss/network.c
@@ -455,7 +455,7 @@ static void iss_net_set_multicast_list(struct net_device *dev)
 {
 }
 
-static void iss_net_tx_timeout(struct net_device *dev)
+static void iss_net_tx_timeout(struct net_device *dev, unsigned int txqueue)
 {
 }
 
diff --git a/drivers/char/pcmcia/synclink_cs.c b/drivers/char/pcmcia/synclink_cs.c
index 82f9a6a814ae..e342daa73d1b 100644
--- a/drivers/char/pcmcia/synclink_cs.c
+++ b/drivers/char/pcmcia/synclink_cs.c
@@ -4169,7 +4169,7 @@ static int hdlcdev_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
  *
  * dev  pointer to network device structure
  */
-static void hdlcdev_tx_timeout(struct net_device *dev)
+static void hdlcdev_tx_timeout(struct net_device *dev, unsigned int txqueue)
 {
 	MGSLPC_INFO *info = dev_to_port(dev);
 	unsigned long flags;
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
index e5f438ab716c..4a0d3a9e72e1 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
@@ -1182,7 +1182,7 @@ unref:
 	return NETDEV_TX_OK;
 }
 
-static void ipoib_timeout(struct net_device *dev)
+static void ipoib_timeout(struct net_device *dev, unsigned int txqueue)
 {
 	struct ipoib_dev_priv *priv = ipoib_priv(dev);
 
diff --git a/drivers/message/fusion/mptlan.c b/drivers/message/fusion/mptlan.c
index ebc00d47abf5..7d3784aa20e5 100644
--- a/drivers/message/fusion/mptlan.c
+++ b/drivers/message/fusion/mptlan.c
@@ -552,7 +552,7 @@ mpt_lan_close(struct net_device *dev)
 /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
 /* Tx timeout handler.
*/ static void -mpt_lan_tx_timeout(struct net_device *dev) +mpt_lan_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct mpt_lan_priv *priv = netdev_priv(dev); MPT_ADAPTER *mpt_dev = priv->mpt_dev; diff --git a/drivers/misc/sgi-xp/xpnet.c b/drivers/misc/sgi-xp/xpnet.c index f7d610a22347..ada94e6a3c91 100644 --- a/drivers/misc/sgi-xp/xpnet.c +++ b/drivers/misc/sgi-xp/xpnet.c @@ -496,7 +496,7 @@ xpnet_dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) * Deal with transmit timeouts coming from the network layer. */ static void -xpnet_dev_tx_timeout(struct net_device *dev) +xpnet_dev_tx_timeout(struct net_device *dev, unsigned int txqueue) { dev->stats.tx_errors++; } diff --git a/drivers/net/appletalk/cops.c b/drivers/net/appletalk/cops.c index b3c63d2f16aa..18428e104445 100644 --- a/drivers/net/appletalk/cops.c +++ b/drivers/net/appletalk/cops.c @@ -189,7 +189,7 @@ static int cops_nodeid (struct net_device *dev, int nodeid); static irqreturn_t cops_interrupt (int irq, void *dev_id); static void cops_poll(struct timer_list *t); -static void cops_timeout(struct net_device *dev); +static void cops_timeout(struct net_device *dev, unsigned int txqueue); static void cops_rx (struct net_device *dev); static netdev_tx_t cops_send_packet (struct sk_buff *skb, struct net_device *dev); @@ -844,7 +844,7 @@ static void cops_rx(struct net_device *dev) netif_rx(skb); } -static void cops_timeout(struct net_device *dev) +static void cops_timeout(struct net_device *dev, unsigned int txqueue) { struct cops_local *lp = netdev_priv(dev); int ioaddr = dev->base_addr; diff --git a/drivers/net/arcnet/arcdevice.h b/drivers/net/arcnet/arcdevice.h index b0f5bc07aef5..22a49c6d7ae6 100644 --- a/drivers/net/arcnet/arcdevice.h +++ b/drivers/net/arcnet/arcdevice.h @@ -356,7 +356,7 @@ int arcnet_open(struct net_device *dev); int arcnet_close(struct net_device *dev); netdev_tx_t arcnet_send_packet(struct sk_buff *skb, struct net_device *dev); -void arcnet_timeout(struct net_device *dev); +void arcnet_timeout(struct net_device *dev, unsigned int txqueue); /* I/O equivalents */ diff --git a/drivers/net/arcnet/arcnet.c b/drivers/net/arcnet/arcnet.c index 553776cc1d29..e04efc0a5c97 100644 --- a/drivers/net/arcnet/arcnet.c +++ b/drivers/net/arcnet/arcnet.c @@ -763,7 +763,7 @@ static int go_tx(struct net_device *dev) } /* Called by the kernel when transmit times out */ -void arcnet_timeout(struct net_device *dev) +void arcnet_timeout(struct net_device *dev, unsigned int txqueue) { unsigned long flags; struct arcnet_local *lp = netdev_priv(dev); diff --git a/drivers/net/ethernet/3com/3c509.c b/drivers/net/ethernet/3com/3c509.c index 3da97996bdf3..8cafd06ff0c4 100644 --- a/drivers/net/ethernet/3com/3c509.c +++ b/drivers/net/ethernet/3com/3c509.c @@ -196,7 +196,7 @@ static struct net_device_stats *el3_get_stats(struct net_device *dev); static int el3_rx(struct net_device *dev); static int el3_close(struct net_device *dev); static void set_multicast_list(struct net_device *dev); -static void el3_tx_timeout (struct net_device *dev); +static void el3_tx_timeout (struct net_device *dev, unsigned int txqueue); static void el3_down(struct net_device *dev); static void el3_up(struct net_device *dev); static const struct ethtool_ops ethtool_ops; @@ -689,7 +689,7 @@ el3_open(struct net_device *dev) } static void -el3_tx_timeout (struct net_device *dev) +el3_tx_timeout (struct net_device *dev, unsigned int txqueue) { int ioaddr = dev->base_addr; diff --git a/drivers/net/ethernet/3com/3c515.c 
b/drivers/net/ethernet/3com/3c515.c index b15752267c8d..1e233e2f0a5a 100644 --- a/drivers/net/ethernet/3com/3c515.c +++ b/drivers/net/ethernet/3com/3c515.c @@ -371,7 +371,7 @@ static void corkscrew_timer(struct timer_list *t); static netdev_tx_t corkscrew_start_xmit(struct sk_buff *skb, struct net_device *dev); static int corkscrew_rx(struct net_device *dev); -static void corkscrew_timeout(struct net_device *dev); +static void corkscrew_timeout(struct net_device *dev, unsigned int txqueue); static int boomerang_rx(struct net_device *dev); static irqreturn_t corkscrew_interrupt(int irq, void *dev_id); static int corkscrew_close(struct net_device *dev); @@ -961,7 +961,7 @@ static void corkscrew_timer(struct timer_list *t) #endif /* AUTOMEDIA */ } -static void corkscrew_timeout(struct net_device *dev) +static void corkscrew_timeout(struct net_device *dev, unsigned int txqueue) { int i; struct corkscrew_private *vp = netdev_priv(dev); diff --git a/drivers/net/ethernet/3com/3c574_cs.c b/drivers/net/ethernet/3com/3c574_cs.c index 3044a6f35f04..ef1c3151fbb2 100644 --- a/drivers/net/ethernet/3com/3c574_cs.c +++ b/drivers/net/ethernet/3com/3c574_cs.c @@ -234,7 +234,7 @@ static void update_stats(struct net_device *dev); static struct net_device_stats *el3_get_stats(struct net_device *dev); static int el3_rx(struct net_device *dev, int worklimit); static int el3_close(struct net_device *dev); -static void el3_tx_timeout(struct net_device *dev); +static void el3_tx_timeout(struct net_device *dev, unsigned int txqueue); static int el3_ioctl(struct net_device *dev, struct ifreq *rq, int cmd); static void set_rx_mode(struct net_device *dev); static void set_multicast_list(struct net_device *dev); @@ -690,7 +690,7 @@ static int el3_open(struct net_device *dev) return 0; } -static void el3_tx_timeout(struct net_device *dev) +static void el3_tx_timeout(struct net_device *dev, unsigned int txqueue) { unsigned int ioaddr = dev->base_addr; diff --git a/drivers/net/ethernet/3com/3c589_cs.c b/drivers/net/ethernet/3com/3c589_cs.c index 2b2695311bda..d47cde6c5f08 100644 --- a/drivers/net/ethernet/3com/3c589_cs.c +++ b/drivers/net/ethernet/3com/3c589_cs.c @@ -173,7 +173,7 @@ static void update_stats(struct net_device *dev); static struct net_device_stats *el3_get_stats(struct net_device *dev); static int el3_rx(struct net_device *dev); static int el3_close(struct net_device *dev); -static void el3_tx_timeout(struct net_device *dev); +static void el3_tx_timeout(struct net_device *dev, unsigned int txqueue); static void set_rx_mode(struct net_device *dev); static void set_multicast_list(struct net_device *dev); static const struct ethtool_ops netdev_ethtool_ops; @@ -526,7 +526,7 @@ static int el3_open(struct net_device *dev) return 0; } -static void el3_tx_timeout(struct net_device *dev) +static void el3_tx_timeout(struct net_device *dev, unsigned int txqueue) { unsigned int ioaddr = dev->base_addr; diff --git a/drivers/net/ethernet/3com/3c59x.c b/drivers/net/ethernet/3com/3c59x.c index 8785c2ff3825..fc046797c0ea 100644 --- a/drivers/net/ethernet/3com/3c59x.c +++ b/drivers/net/ethernet/3com/3c59x.c @@ -776,7 +776,7 @@ static void set_rx_mode(struct net_device *dev); #ifdef CONFIG_PCI static int vortex_ioctl(struct net_device *dev, struct ifreq *rq, int cmd); #endif -static void vortex_tx_timeout(struct net_device *dev); +static void vortex_tx_timeout(struct net_device *dev, unsigned int txqueue); static void acpi_set_WOL(struct net_device *dev); static const struct ethtool_ops vortex_ethtool_ops; static void 
set_8021q_mode(struct net_device *dev, int enable); @@ -1877,7 +1877,7 @@ leave_media_alone: iowrite16(FakeIntr, ioaddr + EL3_CMD); } -static void vortex_tx_timeout(struct net_device *dev) +static void vortex_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct vortex_private *vp = netdev_priv(dev); void __iomem *ioaddr = vp->ioaddr; diff --git a/drivers/net/ethernet/3com/typhoon.c b/drivers/net/ethernet/3com/typhoon.c index be823c186517..14fce6658106 100644 --- a/drivers/net/ethernet/3com/typhoon.c +++ b/drivers/net/ethernet/3com/typhoon.c @@ -2013,7 +2013,7 @@ typhoon_stop_runtime(struct typhoon *tp, int wait_type) } static void -typhoon_tx_timeout(struct net_device *dev) +typhoon_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct typhoon *tp = netdev_priv(dev); diff --git a/drivers/net/ethernet/8390/8390.c b/drivers/net/ethernet/8390/8390.c index 78f3e532c600..0e0aa4016858 100644 --- a/drivers/net/ethernet/8390/8390.c +++ b/drivers/net/ethernet/8390/8390.c @@ -36,9 +36,9 @@ void ei_set_multicast_list(struct net_device *dev) } EXPORT_SYMBOL(ei_set_multicast_list); -void ei_tx_timeout(struct net_device *dev) +void ei_tx_timeout(struct net_device *dev, unsigned int txqueue) { - __ei_tx_timeout(dev); + __ei_tx_timeout(dev, txqueue); } EXPORT_SYMBOL(ei_tx_timeout); diff --git a/drivers/net/ethernet/8390/8390.h b/drivers/net/ethernet/8390/8390.h index 3e2f2c2e7b58..529c728f334a 100644 --- a/drivers/net/ethernet/8390/8390.h +++ b/drivers/net/ethernet/8390/8390.h @@ -32,7 +32,7 @@ void NS8390_init(struct net_device *dev, int startp); int ei_open(struct net_device *dev); int ei_close(struct net_device *dev); irqreturn_t ei_interrupt(int irq, void *dev_id); -void ei_tx_timeout(struct net_device *dev); +void ei_tx_timeout(struct net_device *dev, unsigned int txqueue); netdev_tx_t ei_start_xmit(struct sk_buff *skb, struct net_device *dev); void ei_set_multicast_list(struct net_device *dev); struct net_device_stats *ei_get_stats(struct net_device *dev); @@ -50,7 +50,7 @@ void NS8390p_init(struct net_device *dev, int startp); int eip_open(struct net_device *dev); int eip_close(struct net_device *dev); irqreturn_t eip_interrupt(int irq, void *dev_id); -void eip_tx_timeout(struct net_device *dev); +void eip_tx_timeout(struct net_device *dev, unsigned int txqueue); netdev_tx_t eip_start_xmit(struct sk_buff *skb, struct net_device *dev); void eip_set_multicast_list(struct net_device *dev); struct net_device_stats *eip_get_stats(struct net_device *dev); diff --git a/drivers/net/ethernet/8390/8390p.c b/drivers/net/ethernet/8390/8390p.c index 6cf36992a2c6..6834742057b3 100644 --- a/drivers/net/ethernet/8390/8390p.c +++ b/drivers/net/ethernet/8390/8390p.c @@ -41,9 +41,9 @@ void eip_set_multicast_list(struct net_device *dev) } EXPORT_SYMBOL(eip_set_multicast_list); -void eip_tx_timeout(struct net_device *dev) +void eip_tx_timeout(struct net_device *dev, unsigned int txqueue) { - __ei_tx_timeout(dev); + __ei_tx_timeout(dev, txqueue); } EXPORT_SYMBOL(eip_tx_timeout); diff --git a/drivers/net/ethernet/8390/axnet_cs.c b/drivers/net/ethernet/8390/axnet_cs.c index 0b6bbf63f7ca..aeae7966a082 100644 --- a/drivers/net/ethernet/8390/axnet_cs.c +++ b/drivers/net/ethernet/8390/axnet_cs.c @@ -83,7 +83,7 @@ static netdev_tx_t axnet_start_xmit(struct sk_buff *skb, struct net_device *dev); static struct net_device_stats *get_stats(struct net_device *dev); static void set_multicast_list(struct net_device *dev); -static void axnet_tx_timeout(struct net_device *dev); +static void 
axnet_tx_timeout(struct net_device *dev, unsigned int txqueue); static irqreturn_t ei_irq_wrapper(int irq, void *dev_id); static void ei_watchdog(struct timer_list *t); static void axnet_reset_8390(struct net_device *dev); @@ -903,7 +903,7 @@ static int ax_close(struct net_device *dev) * completed (or failed) - i.e. never posted a Tx related interrupt. */ -static void axnet_tx_timeout(struct net_device *dev) +static void axnet_tx_timeout(struct net_device *dev, unsigned int txqueue) { long e8390_base = dev->base_addr; struct ei_device *ei_local = netdev_priv(dev); diff --git a/drivers/net/ethernet/8390/lib8390.c b/drivers/net/ethernet/8390/lib8390.c index c9c55c9eab9f..babc92e2692e 100644 --- a/drivers/net/ethernet/8390/lib8390.c +++ b/drivers/net/ethernet/8390/lib8390.c @@ -251,7 +251,7 @@ static int __ei_close(struct net_device *dev) * completed (or failed) - i.e. never posted a Tx related interrupt. */ -static void __ei_tx_timeout(struct net_device *dev) +static void __ei_tx_timeout(struct net_device *dev, unsigned int txqueue) { unsigned long e8390_base = dev->base_addr; struct ei_device *ei_local = netdev_priv(dev); diff --git a/drivers/net/ethernet/adaptec/starfire.c b/drivers/net/ethernet/adaptec/starfire.c index 816540e6beac..165d18405b0c 100644 --- a/drivers/net/ethernet/adaptec/starfire.c +++ b/drivers/net/ethernet/adaptec/starfire.c @@ -576,7 +576,7 @@ static int mdio_read(struct net_device *dev, int phy_id, int location); static void mdio_write(struct net_device *dev, int phy_id, int location, int value); static int netdev_open(struct net_device *dev); static void check_duplex(struct net_device *dev); -static void tx_timeout(struct net_device *dev); +static void tx_timeout(struct net_device *dev, unsigned int txqueue); static void init_ring(struct net_device *dev); static netdev_tx_t start_tx(struct sk_buff *skb, struct net_device *dev); static irqreturn_t intr_handler(int irq, void *dev_instance); @@ -1105,7 +1105,7 @@ static void check_duplex(struct net_device *dev) } -static void tx_timeout(struct net_device *dev) +static void tx_timeout(struct net_device *dev, unsigned int txqueue) { struct netdev_private *np = netdev_priv(dev); void __iomem *ioaddr = np->base; diff --git a/drivers/net/ethernet/agere/et131x.c b/drivers/net/ethernet/agere/et131x.c index 174344c450af..3c51d8c502ed 100644 --- a/drivers/net/ethernet/agere/et131x.c +++ b/drivers/net/ethernet/agere/et131x.c @@ -3811,7 +3811,7 @@ drop_err: * specified by the 'tx_timeo" element in the net_device structure (see * et131x_alloc_device() to see how this value is set). */ -static void et131x_tx_timeout(struct net_device *netdev) +static void et131x_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct et131x_adapter *adapter = netdev_priv(netdev); struct tx_ring *tx_ring = &adapter->tx_ring; diff --git a/drivers/net/ethernet/allwinner/sun4i-emac.c b/drivers/net/ethernet/allwinner/sun4i-emac.c index 0537df06a9b5..5ea806423e4c 100644 --- a/drivers/net/ethernet/allwinner/sun4i-emac.c +++ b/drivers/net/ethernet/allwinner/sun4i-emac.c @@ -407,7 +407,7 @@ static void emac_init_device(struct net_device *dev) } /* Our watchdog timed out. 
Called by the networking layer */ -static void emac_timeout(struct net_device *dev) +static void emac_timeout(struct net_device *dev, unsigned int txqueue) { struct emac_board_info *db = netdev_priv(dev); unsigned long flags; diff --git a/drivers/net/ethernet/alteon/acenic.c b/drivers/net/ethernet/alteon/acenic.c index 46b4207d3266..f366faf88eee 100644 --- a/drivers/net/ethernet/alteon/acenic.c +++ b/drivers/net/ethernet/alteon/acenic.c @@ -437,7 +437,7 @@ static const struct ethtool_ops ace_ethtool_ops = { .set_link_ksettings = ace_set_link_ksettings, }; -static void ace_watchdog(struct net_device *dev); +static void ace_watchdog(struct net_device *dev, unsigned int txqueue); static const struct net_device_ops ace_netdev_ops = { .ndo_open = ace_open, @@ -1542,7 +1542,7 @@ static void ace_set_rxtx_parms(struct net_device *dev, int jumbo) } -static void ace_watchdog(struct net_device *data) +static void ace_watchdog(struct net_device *data, unsigned int txqueue) { struct net_device *dev = data; struct ace_private *ap = netdev_priv(dev); diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c index d161537e24e2..26954fde4766 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c @@ -108,7 +108,7 @@ static void ena_unmap_tx_buff(struct ena_ring *tx_ring, static int ena_create_io_tx_queues_in_range(struct ena_adapter *adapter, int first_index, int count); -static void ena_tx_timeout(struct net_device *dev) +static void ena_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct ena_adapter *adapter = netdev_priv(dev); diff --git a/drivers/net/ethernet/amd/7990.c b/drivers/net/ethernet/amd/7990.c index ab30761003da..cf3562e82ca9 100644 --- a/drivers/net/ethernet/amd/7990.c +++ b/drivers/net/ethernet/amd/7990.c @@ -527,7 +527,7 @@ int lance_close(struct net_device *dev) } EXPORT_SYMBOL_GPL(lance_close); -void lance_tx_timeout(struct net_device *dev) +void lance_tx_timeout(struct net_device *dev, unsigned int txqueue) { printk("lance_tx_timeout\n"); lance_reset(dev); diff --git a/drivers/net/ethernet/amd/7990.h b/drivers/net/ethernet/amd/7990.h index 741cdc392c6b..8266b3c1fefc 100644 --- a/drivers/net/ethernet/amd/7990.h +++ b/drivers/net/ethernet/amd/7990.h @@ -243,7 +243,7 @@ int lance_open(struct net_device *dev); int lance_close(struct net_device *dev); int lance_start_xmit(struct sk_buff *skb, struct net_device *dev); void lance_set_multicast(struct net_device *dev); -void lance_tx_timeout(struct net_device *dev); +void lance_tx_timeout(struct net_device *dev, unsigned int txqueue); #ifdef CONFIG_NET_POLL_CONTROLLER void lance_poll(struct net_device *dev); #endif diff --git a/drivers/net/ethernet/amd/a2065.c b/drivers/net/ethernet/amd/a2065.c index 212fe72a190b..a3faf4feb204 100644 --- a/drivers/net/ethernet/amd/a2065.c +++ b/drivers/net/ethernet/amd/a2065.c @@ -522,7 +522,7 @@ static inline int lance_reset(struct net_device *dev) return status; } -static void lance_tx_timeout(struct net_device *dev) +static void lance_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct lance_private *lp = netdev_priv(dev); volatile struct lance_regs *ll = lp->ll; diff --git a/drivers/net/ethernet/amd/am79c961a.c b/drivers/net/ethernet/amd/am79c961a.c index 0842da492a64..1c53408f5d47 100644 --- a/drivers/net/ethernet/amd/am79c961a.c +++ b/drivers/net/ethernet/amd/am79c961a.c @@ -422,7 +422,7 @@ static void am79c961_setmulticastlist (struct net_device *dev) 
spin_unlock_irqrestore(&priv->chip_lock, flags); } -static void am79c961_timeout(struct net_device *dev) +static void am79c961_timeout(struct net_device *dev, unsigned int txqueue) { printk(KERN_WARNING "%s: transmit timed out, network cable problem?\n", dev->name); diff --git a/drivers/net/ethernet/amd/amd8111e.c b/drivers/net/ethernet/amd/amd8111e.c index 573e88fc8ede..0f3b743425e8 100644 --- a/drivers/net/ethernet/amd/amd8111e.c +++ b/drivers/net/ethernet/amd/amd8111e.c @@ -1569,7 +1569,7 @@ static int amd8111e_enable_link_change(struct amd8111e_priv *lp) * failed or the interface is locked up. This function will reinitialize * the hardware. */ -static void amd8111e_tx_timeout(struct net_device *dev) +static void amd8111e_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct amd8111e_priv *lp = netdev_priv(dev); int err; diff --git a/drivers/net/ethernet/amd/ariadne.c b/drivers/net/ethernet/amd/ariadne.c index 4b6a5cb85dd2..5e0f645f5bde 100644 --- a/drivers/net/ethernet/amd/ariadne.c +++ b/drivers/net/ethernet/amd/ariadne.c @@ -530,7 +530,7 @@ static inline void ariadne_reset(struct net_device *dev) netif_start_queue(dev); } -static void ariadne_tx_timeout(struct net_device *dev) +static void ariadne_tx_timeout(struct net_device *dev, unsigned int txqueue) { volatile struct Am79C960 *lance = (struct Am79C960 *)dev->base_addr; diff --git a/drivers/net/ethernet/amd/atarilance.c b/drivers/net/ethernet/amd/atarilance.c index d3d44e07afbc..4e36122609a3 100644 --- a/drivers/net/ethernet/amd/atarilance.c +++ b/drivers/net/ethernet/amd/atarilance.c @@ -346,7 +346,7 @@ static int lance_rx( struct net_device *dev ); static int lance_close( struct net_device *dev ); static void set_multicast_list( struct net_device *dev ); static int lance_set_mac_address( struct net_device *dev, void *addr ); -static void lance_tx_timeout (struct net_device *dev); +static void lance_tx_timeout (struct net_device *dev, unsigned int txqueue); /************************* End of Prototypes **************************/ @@ -727,7 +727,7 @@ static void lance_init_ring( struct net_device *dev ) /* XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX */ -static void lance_tx_timeout (struct net_device *dev) +static void lance_tx_timeout (struct net_device *dev, unsigned int txqueue) { struct lance_private *lp = netdev_priv(dev); struct lance_ioreg *IO = lp->iobase; diff --git a/drivers/net/ethernet/amd/au1000_eth.c b/drivers/net/ethernet/amd/au1000_eth.c index 1793950f0582..d832c9f4d306 100644 --- a/drivers/net/ethernet/amd/au1000_eth.c +++ b/drivers/net/ethernet/amd/au1000_eth.c @@ -1014,7 +1014,7 @@ static netdev_tx_t au1000_tx(struct sk_buff *skb, struct net_device *dev) * The Tx ring has been full longer than the watchdog timeout * value. The transmitter must be hung? 
*/ -static void au1000_tx_timeout(struct net_device *dev) +static void au1000_tx_timeout(struct net_device *dev, unsigned int txqueue) { netdev_err(dev, "au1000_tx_timeout: dev=%p\n", dev); au1000_reset_mac(dev); diff --git a/drivers/net/ethernet/amd/declance.c b/drivers/net/ethernet/amd/declance.c index dac4a2fcad6a..6592a2db9efb 100644 --- a/drivers/net/ethernet/amd/declance.c +++ b/drivers/net/ethernet/amd/declance.c @@ -884,7 +884,7 @@ static inline int lance_reset(struct net_device *dev) return status; } -static void lance_tx_timeout(struct net_device *dev) +static void lance_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct lance_private *lp = netdev_priv(dev); volatile struct lance_regs *ll = lp->ll; diff --git a/drivers/net/ethernet/amd/lance.c b/drivers/net/ethernet/amd/lance.c index f90b454b1642..aff44241988c 100644 --- a/drivers/net/ethernet/amd/lance.c +++ b/drivers/net/ethernet/amd/lance.c @@ -306,7 +306,7 @@ static irqreturn_t lance_interrupt(int irq, void *dev_id); static int lance_close(struct net_device *dev); static struct net_device_stats *lance_get_stats(struct net_device *dev); static void set_multicast_list(struct net_device *dev); -static void lance_tx_timeout (struct net_device *dev); +static void lance_tx_timeout (struct net_device *dev, unsigned int txqueue); @@ -913,7 +913,7 @@ lance_restart(struct net_device *dev, unsigned int csr0_bits, int must_reinit) } -static void lance_tx_timeout (struct net_device *dev) +static void lance_tx_timeout (struct net_device *dev, unsigned int txqueue) { struct lance_private *lp = (struct lance_private *) dev->ml_priv; int ioaddr = dev->base_addr; diff --git a/drivers/net/ethernet/amd/ni65.c b/drivers/net/ethernet/amd/ni65.c index c6c2a54c1121..c38edf6f03a3 100644 --- a/drivers/net/ethernet/amd/ni65.c +++ b/drivers/net/ethernet/amd/ni65.c @@ -254,7 +254,7 @@ static int ni65_lance_reinit(struct net_device *dev); static void ni65_init_lance(struct priv *p,unsigned char*,int,int); static netdev_tx_t ni65_send_packet(struct sk_buff *skb, struct net_device *dev); -static void ni65_timeout(struct net_device *dev); +static void ni65_timeout(struct net_device *dev, unsigned int txqueue); static int ni65_close(struct net_device *dev); static int ni65_alloc_buffer(struct net_device *dev); static void ni65_free_buffer(struct priv *p); @@ -1133,7 +1133,7 @@ static void ni65_recv_intr(struct net_device *dev,int csr0) * kick xmitter .. */ -static void ni65_timeout(struct net_device *dev) +static void ni65_timeout(struct net_device *dev, unsigned int txqueue) { int i; struct priv *p = dev->ml_priv; diff --git a/drivers/net/ethernet/amd/nmclan_cs.c b/drivers/net/ethernet/amd/nmclan_cs.c index 9c152d85840d..023aecf6ab30 100644 --- a/drivers/net/ethernet/amd/nmclan_cs.c +++ b/drivers/net/ethernet/amd/nmclan_cs.c @@ -407,7 +407,7 @@ static int mace_open(struct net_device *dev); static int mace_close(struct net_device *dev); static netdev_tx_t mace_start_xmit(struct sk_buff *skb, struct net_device *dev); -static void mace_tx_timeout(struct net_device *dev); +static void mace_tx_timeout(struct net_device *dev, unsigned int txqueue); static irqreturn_t mace_interrupt(int irq, void *dev_id); static struct net_device_stats *mace_get_stats(struct net_device *dev); static int mace_rx(struct net_device *dev, unsigned char RxCnt); @@ -837,7 +837,7 @@ mace_start_xmit failed, put skb back into a list." 
---------------------------------------------------------------------------- */ -static void mace_tx_timeout(struct net_device *dev) +static void mace_tx_timeout(struct net_device *dev, unsigned int txqueue) { mace_private *lp = netdev_priv(dev); struct pcmcia_device *link = lp->p_dev; diff --git a/drivers/net/ethernet/amd/pcnet32.c b/drivers/net/ethernet/amd/pcnet32.c index f5ad12c10934..dc7d88227e76 100644 --- a/drivers/net/ethernet/amd/pcnet32.c +++ b/drivers/net/ethernet/amd/pcnet32.c @@ -314,7 +314,7 @@ static int pcnet32_open(struct net_device *); static int pcnet32_init_ring(struct net_device *); static netdev_tx_t pcnet32_start_xmit(struct sk_buff *, struct net_device *); -static void pcnet32_tx_timeout(struct net_device *dev); +static void pcnet32_tx_timeout(struct net_device *dev, unsigned int txqueue); static irqreturn_t pcnet32_interrupt(int, void *); static int pcnet32_close(struct net_device *); static struct net_device_stats *pcnet32_get_stats(struct net_device *); @@ -2455,7 +2455,7 @@ static void pcnet32_restart(struct net_device *dev, unsigned int csr0_bits) lp->a->write_csr(ioaddr, CSR0, csr0_bits); } -static void pcnet32_tx_timeout(struct net_device *dev) +static void pcnet32_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct pcnet32_private *lp = netdev_priv(dev); unsigned long ioaddr = dev->base_addr, flags; diff --git a/drivers/net/ethernet/amd/sunlance.c b/drivers/net/ethernet/amd/sunlance.c index ebcbf8ca4829..b00e00881253 100644 --- a/drivers/net/ethernet/amd/sunlance.c +++ b/drivers/net/ethernet/amd/sunlance.c @@ -1097,7 +1097,7 @@ static void lance_piozero(void __iomem *dest, int len) sbus_writeb(0, piobuf); } -static void lance_tx_timeout(struct net_device *dev) +static void lance_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct lance_private *lp = netdev_priv(dev); diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c index 98f8f2033154..b71f9b04a51e 100644 --- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c +++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c @@ -2152,7 +2152,7 @@ static int xgbe_change_mtu(struct net_device *netdev, int mtu) return 0; } -static void xgbe_tx_timeout(struct net_device *netdev) +static void xgbe_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct xgbe_prv_data *pdata = netdev_priv(netdev); diff --git a/drivers/net/ethernet/apm/xgene-v2/main.c b/drivers/net/ethernet/apm/xgene-v2/main.c index 02b4f3af02b5..c48f60996761 100644 --- a/drivers/net/ethernet/apm/xgene-v2/main.c +++ b/drivers/net/ethernet/apm/xgene-v2/main.c @@ -575,7 +575,7 @@ static void xge_free_pending_skb(struct net_device *ndev) } } -static void xge_timeout(struct net_device *ndev) +static void xge_timeout(struct net_device *ndev, unsigned int txqueue) { struct xge_pdata *pdata = netdev_priv(ndev); diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c index d8612131c55e..e284b6753725 100644 --- a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c +++ b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c @@ -859,7 +859,7 @@ static int xgene_enet_napi(struct napi_struct *napi, const int budget) return processed; } -static void xgene_enet_timeout(struct net_device *ndev) +static void xgene_enet_timeout(struct net_device *ndev, unsigned int txqueue) { struct xgene_enet_pdata *pdata = netdev_priv(ndev); struct netdev_queue *txq; diff --git a/drivers/net/ethernet/apple/macmace.c b/drivers/net/ethernet/apple/macmace.c index 
8d03578d5e8c..95d3061c61be 100644 --- a/drivers/net/ethernet/apple/macmace.c +++ b/drivers/net/ethernet/apple/macmace.c @@ -91,7 +91,7 @@ static int mace_set_address(struct net_device *dev, void *addr); static void mace_reset(struct net_device *dev); static irqreturn_t mace_interrupt(int irq, void *dev_id); static irqreturn_t mace_dma_intr(int irq, void *dev_id); -static void mace_tx_timeout(struct net_device *dev); +static void mace_tx_timeout(struct net_device *dev, unsigned int txqueue); static void __mace_set_address(struct net_device *dev, void *addr); /* @@ -600,7 +600,7 @@ static irqreturn_t mace_interrupt(int irq, void *dev_id) return IRQ_HANDLED; } -static void mace_tx_timeout(struct net_device *dev) +static void mace_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct mace_data *mp = netdev_priv(dev); volatile struct mace *mb = mp->mace; diff --git a/drivers/net/ethernet/atheros/ag71xx.c b/drivers/net/ethernet/atheros/ag71xx.c index 8f5021091eee..ad8b0e3fcd2c 100644 --- a/drivers/net/ethernet/atheros/ag71xx.c +++ b/drivers/net/ethernet/atheros/ag71xx.c @@ -1409,7 +1409,7 @@ static void ag71xx_oom_timer_handler(struct timer_list *t) napi_schedule(&ag->napi); } -static void ag71xx_tx_timeout(struct net_device *ndev) +static void ag71xx_tx_timeout(struct net_device *ndev, unsigned int txqueue) { struct ag71xx *ag = netdev_priv(ndev); diff --git a/drivers/net/ethernet/atheros/alx/main.c b/drivers/net/ethernet/atheros/alx/main.c index d4bbcdfd691a..1dcbc486eca9 100644 --- a/drivers/net/ethernet/atheros/alx/main.c +++ b/drivers/net/ethernet/atheros/alx/main.c @@ -1553,7 +1553,7 @@ static netdev_tx_t alx_start_xmit(struct sk_buff *skb, return alx_start_xmit_ring(skb, alx_tx_queue_mapping(alx, skb)); } -static void alx_tx_timeout(struct net_device *dev) +static void alx_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct alx_priv *alx = netdev_priv(dev); diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c index 2b239ecea05f..4c0b1f8551dd 100644 --- a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c +++ b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c @@ -350,7 +350,7 @@ static void atl1c_del_timer(struct atl1c_adapter *adapter) * atl1c_tx_timeout - Respond to a Tx Hang * @netdev: network interface device structure */ -static void atl1c_tx_timeout(struct net_device *netdev) +static void atl1c_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct atl1c_adapter *adapter = netdev_priv(netdev); diff --git a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c index 4f7b65825c15..e0d89942d537 100644 --- a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c +++ b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c @@ -251,7 +251,7 @@ static void atl1e_cancel_work(struct atl1e_adapter *adapter) * atl1e_tx_timeout - Respond to a Tx Hang * @netdev: network interface device structure */ -static void atl1e_tx_timeout(struct net_device *netdev) +static void atl1e_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct atl1e_adapter *adapter = netdev_priv(netdev); diff --git a/drivers/net/ethernet/atheros/atlx/atl2.c b/drivers/net/ethernet/atheros/atlx/atl2.c index 3aba38322717..b81a4e0c5b57 100644 --- a/drivers/net/ethernet/atheros/atlx/atl2.c +++ b/drivers/net/ethernet/atheros/atlx/atl2.c @@ -1001,7 +1001,7 @@ static int atl2_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) * atl2_tx_timeout - Respond to a Tx Hang * @netdev: network 
interface device structure */ -static void atl2_tx_timeout(struct net_device *netdev) +static void atl2_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct atl2_adapter *adapter = netdev_priv(netdev); diff --git a/drivers/net/ethernet/atheros/atlx/atlx.c b/drivers/net/ethernet/atheros/atlx/atlx.c index 505a22c703f7..0941d07d0833 100644 --- a/drivers/net/ethernet/atheros/atlx/atlx.c +++ b/drivers/net/ethernet/atheros/atlx/atlx.c @@ -183,7 +183,7 @@ static void atlx_clear_phy_int(struct atlx_adapter *adapter) * atlx_tx_timeout - Respond to a Tx Hang * @netdev: network interface device structure */ -static void atlx_tx_timeout(struct net_device *netdev) +static void atlx_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct atlx_adapter *adapter = netdev_priv(netdev); /* Do the reset outside of interrupt context */ diff --git a/drivers/net/ethernet/broadcom/b44.c b/drivers/net/ethernet/broadcom/b44.c index 035dbb1b2c98..5b3464c3e8d1 100644 --- a/drivers/net/ethernet/broadcom/b44.c +++ b/drivers/net/ethernet/broadcom/b44.c @@ -948,7 +948,7 @@ irq_ack: return IRQ_RETVAL(handled); } -static void b44_tx_timeout(struct net_device *dev) +static void b44_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct b44 *bp = netdev_priv(dev); diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c index 825af709708e..8e3152779a61 100644 --- a/drivers/net/ethernet/broadcom/bcmsysport.c +++ b/drivers/net/ethernet/broadcom/bcmsysport.c @@ -1354,7 +1354,7 @@ out: return ret; } -static void bcm_sysport_tx_timeout(struct net_device *dev) +static void bcm_sysport_tx_timeout(struct net_device *dev, unsigned int txqueue) { netdev_warn(dev, "transmit timeout!\n"); diff --git a/drivers/net/ethernet/broadcom/bnx2.c b/drivers/net/ethernet/broadcom/bnx2.c index fbc196b480b6..dbb7874607ca 100644 --- a/drivers/net/ethernet/broadcom/bnx2.c +++ b/drivers/net/ethernet/broadcom/bnx2.c @@ -6575,7 +6575,7 @@ bnx2_dump_state(struct bnx2 *bp) } static void -bnx2_tx_timeout(struct net_device *dev) +bnx2_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct bnx2 *bp = netdev_priv(dev); diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c index 5e037a305b83..ee9e9290f112 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c @@ -4970,7 +4970,7 @@ int bnx2x_set_features(struct net_device *dev, netdev_features_t features) return 0; } -void bnx2x_tx_timeout(struct net_device *dev) +void bnx2x_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct bnx2x *bp = netdev_priv(dev); diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h index 8b08cb18e363..e35f48bfdc85 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h @@ -617,7 +617,7 @@ int bnx2x_set_features(struct net_device *dev, netdev_features_t features); * * @dev: net device */ -void bnx2x_tx_timeout(struct net_device *dev); +void bnx2x_tx_timeout(struct net_device *dev, unsigned int txqueue); /** bnx2x_get_c2s_mapping - read inner-to-outer vlan configuration * c2s_map should have BNX2X_MAX_PRIORITY entries. 
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 85983f0e3134..4e34841906c7 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -9976,7 +9976,7 @@ static void bnxt_reset_task(struct bnxt *bp, bool silent) } } -static void bnxt_tx_timeout(struct net_device *dev) +static void bnxt_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct bnxt *bp = netdev_priv(dev); diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c index 120fa05a39ff..32f1245a69e2 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c @@ -3055,7 +3055,7 @@ static void bcmgenet_dump_tx_queue(struct bcmgenet_tx_ring *ring) ring->cb_ptr, ring->end_ptr); } -static void bcmgenet_timeout(struct net_device *dev) +static void bcmgenet_timeout(struct net_device *dev, unsigned int txqueue) { struct bcmgenet_priv *priv = netdev_priv(dev); u32 int0_enable = 0; diff --git a/drivers/net/ethernet/broadcom/sb1250-mac.c b/drivers/net/ethernet/broadcom/sb1250-mac.c index 1604ad32e920..80ff52527233 100644 --- a/drivers/net/ethernet/broadcom/sb1250-mac.c +++ b/drivers/net/ethernet/broadcom/sb1250-mac.c @@ -294,7 +294,7 @@ static int sbmac_set_duplex(struct sbmac_softc *s, enum sbmac_duplex duplex, enum sbmac_fc fc); static int sbmac_open(struct net_device *dev); -static void sbmac_tx_timeout (struct net_device *dev); +static void sbmac_tx_timeout (struct net_device *dev, unsigned int txqueue); static void sbmac_set_rx_mode(struct net_device *dev); static int sbmac_mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd); static int sbmac_close(struct net_device *dev); @@ -2419,7 +2419,7 @@ static void sbmac_mii_poll(struct net_device *dev) } -static void sbmac_tx_timeout (struct net_device *dev) +static void sbmac_tx_timeout (struct net_device *dev, unsigned int txqueue) { struct sbmac_softc *sc = netdev_priv(dev); unsigned long flags; diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c index ca3aa1250dd1..460b4992914a 100644 --- a/drivers/net/ethernet/broadcom/tg3.c +++ b/drivers/net/ethernet/broadcom/tg3.c @@ -7645,7 +7645,7 @@ static void tg3_poll_controller(struct net_device *dev) } #endif -static void tg3_tx_timeout(struct net_device *dev) +static void tg3_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct tg3 *tp = netdev_priv(dev); diff --git a/drivers/net/ethernet/calxeda/xgmac.c b/drivers/net/ethernet/calxeda/xgmac.c index af04a2c81adb..05a3d067c3fc 100644 --- a/drivers/net/ethernet/calxeda/xgmac.c +++ b/drivers/net/ethernet/calxeda/xgmac.c @@ -1251,7 +1251,7 @@ static int xgmac_poll(struct napi_struct *napi, int budget) * netdev structure and arrange for the device to be reset to a sane state * in order to transmit a new packet. 
*/ -static void xgmac_tx_timeout(struct net_device *dev) +static void xgmac_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct xgmac_priv *priv = netdev_priv(dev); schedule_work(&priv->tx_timeout_work); diff --git a/drivers/net/ethernet/cavium/liquidio/lio_main.c b/drivers/net/ethernet/cavium/liquidio/lio_main.c index 7f3b2e3b0868..eab05b5534ea 100644 --- a/drivers/net/ethernet/cavium/liquidio/lio_main.c +++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c @@ -2562,7 +2562,7 @@ lio_xmit_failed: /** \brief Network device Tx timeout * @param netdev pointer to network device */ -static void liquidio_tx_timeout(struct net_device *netdev) +static void liquidio_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct lio *lio; diff --git a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c index 370d76822ee0..7a77544a54f5 100644 --- a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c +++ b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c @@ -1628,7 +1628,7 @@ lio_xmit_failed: /** \brief Network device Tx timeout * @param netdev pointer to network device */ -static void liquidio_tx_timeout(struct net_device *netdev) +static void liquidio_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct lio *lio; diff --git a/drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c b/drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c index f3f2e71431ac..600de587d7a9 100644 --- a/drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c +++ b/drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c @@ -31,7 +31,7 @@ static int lio_vf_rep_open(struct net_device *ndev); static int lio_vf_rep_stop(struct net_device *ndev); static netdev_tx_t lio_vf_rep_pkt_xmit(struct sk_buff *skb, struct net_device *ndev); -static void lio_vf_rep_tx_timeout(struct net_device *netdev); +static void lio_vf_rep_tx_timeout(struct net_device *netdev, unsigned int txqueue); static int lio_vf_rep_phys_port_name(struct net_device *dev, char *buf, size_t len); static void lio_vf_rep_get_stats64(struct net_device *dev, @@ -172,7 +172,7 @@ lio_vf_rep_stop(struct net_device *ndev) } static void -lio_vf_rep_tx_timeout(struct net_device *ndev) +lio_vf_rep_tx_timeout(struct net_device *ndev, unsigned int txqueue) { netif_trans_update(ndev); diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c index f28409279ea4..016957285f99 100644 --- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c +++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c @@ -1741,7 +1741,7 @@ static void nicvf_get_stats64(struct net_device *netdev, } -static void nicvf_tx_timeout(struct net_device *dev) +static void nicvf_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct nicvf *nic = netdev_priv(dev); diff --git a/drivers/net/ethernet/cirrus/cs89x0.c b/drivers/net/ethernet/cirrus/cs89x0.c index c9aebcde403a..33ace3307059 100644 --- a/drivers/net/ethernet/cirrus/cs89x0.c +++ b/drivers/net/ethernet/cirrus/cs89x0.c @@ -1128,7 +1128,7 @@ net_get_stats(struct net_device *dev) return &dev->stats; } -static void net_timeout(struct net_device *dev) +static void net_timeout(struct net_device *dev, unsigned int txqueue) { /* If we get here, some higher level has decided we are broken. There should really be a "kick me" function call instead. 
*/ diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c index acb2856936d2..bbd7b3175f09 100644 --- a/drivers/net/ethernet/cisco/enic/enic_main.c +++ b/drivers/net/ethernet/cisco/enic/enic_main.c @@ -1095,7 +1095,7 @@ static void enic_set_rx_mode(struct net_device *netdev) } /* netif_tx_lock held, BHs disabled */ -static void enic_tx_timeout(struct net_device *netdev) +static void enic_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct enic *enic = netdev_priv(netdev); schedule_work(&enic->tx_hang_reset); diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c index a8f4c69252ff..de0b6e066eef 100644 --- a/drivers/net/ethernet/cortina/gemini.c +++ b/drivers/net/ethernet/cortina/gemini.c @@ -1296,7 +1296,7 @@ out_drop: return NETDEV_TX_OK; } -static void gmac_tx_timeout(struct net_device *netdev) +static void gmac_tx_timeout(struct net_device *netdev, unsigned int txqueue) { netdev_err(netdev, "Tx timeout\n"); gmac_dump_dma_state(netdev); diff --git a/drivers/net/ethernet/davicom/dm9000.c b/drivers/net/ethernet/davicom/dm9000.c index cce90b5925d9..1ea3372775e6 100644 --- a/drivers/net/ethernet/davicom/dm9000.c +++ b/drivers/net/ethernet/davicom/dm9000.c @@ -964,7 +964,7 @@ dm9000_init_dm9000(struct net_device *dev) } /* Our watchdog timed out. Called by the networking layer */ -static void dm9000_timeout(struct net_device *dev) +static void dm9000_timeout(struct net_device *dev, unsigned int txqueue) { struct board_info *db = netdev_priv(dev); u8 reg_save; diff --git a/drivers/net/ethernet/dec/tulip/de2104x.c b/drivers/net/ethernet/dec/tulip/de2104x.c index f1a2da15dd0a..fd3c2abf74b5 100644 --- a/drivers/net/ethernet/dec/tulip/de2104x.c +++ b/drivers/net/ethernet/dec/tulip/de2104x.c @@ -1436,7 +1436,7 @@ static int de_close (struct net_device *dev) return 0; } -static void de_tx_timeout (struct net_device *dev) +static void de_tx_timeout (struct net_device *dev, unsigned int txqueue) { struct de_private *de = netdev_priv(dev); const int irq = de->pdev->irq; diff --git a/drivers/net/ethernet/dec/tulip/tulip_core.c b/drivers/net/ethernet/dec/tulip/tulip_core.c index 3e3e08698876..9e9d9eee29d9 100644 --- a/drivers/net/ethernet/dec/tulip/tulip_core.c +++ b/drivers/net/ethernet/dec/tulip/tulip_core.c @@ -255,7 +255,7 @@ MODULE_DEVICE_TABLE(pci, tulip_pci_tbl); const char tulip_media_cap[32] = {0,0,0,16, 3,19,16,24, 27,4,7,5, 0,20,23,20, 28,31,0,0, }; -static void tulip_tx_timeout(struct net_device *dev); +static void tulip_tx_timeout(struct net_device *dev, unsigned int txqueue); static void tulip_init_ring(struct net_device *dev); static void tulip_free_ring(struct net_device *dev); static netdev_tx_t tulip_start_xmit(struct sk_buff *skb, @@ -534,7 +534,7 @@ free_ring: } -static void tulip_tx_timeout(struct net_device *dev) +static void tulip_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct tulip_private *tp = netdev_priv(dev); void __iomem *ioaddr = tp->base_addr; diff --git a/drivers/net/ethernet/dec/tulip/winbond-840.c b/drivers/net/ethernet/dec/tulip/winbond-840.c index 70cb2d689c2c..7f136488e67c 100644 --- a/drivers/net/ethernet/dec/tulip/winbond-840.c +++ b/drivers/net/ethernet/dec/tulip/winbond-840.c @@ -331,7 +331,7 @@ static void netdev_timer(struct timer_list *t); static void init_rxtx_rings(struct net_device *dev); static void free_rxtx_rings(struct netdev_private *np); static void init_registers(struct net_device *dev); -static void tx_timeout(struct net_device 
*dev); +static void tx_timeout(struct net_device *dev, unsigned int txqueue); static int alloc_ringdesc(struct net_device *dev); static void free_ringdesc(struct netdev_private *np); static netdev_tx_t start_tx(struct sk_buff *skb, struct net_device *dev); @@ -921,7 +921,7 @@ static void init_registers(struct net_device *dev) iowrite32(0, ioaddr + RxStartDemand); } -static void tx_timeout(struct net_device *dev) +static void tx_timeout(struct net_device *dev, unsigned int txqueue) { struct netdev_private *np = netdev_priv(dev); void __iomem *ioaddr = np->base_addr; diff --git a/drivers/net/ethernet/dlink/dl2k.c b/drivers/net/ethernet/dlink/dl2k.c index 55e720d2ea0c..26c5da032b1e 100644 --- a/drivers/net/ethernet/dlink/dl2k.c +++ b/drivers/net/ethernet/dlink/dl2k.c @@ -66,7 +66,7 @@ static const int multicast_filter_limit = 0x40; static int rio_open (struct net_device *dev); static void rio_timer (struct timer_list *t); -static void rio_tx_timeout (struct net_device *dev); +static void rio_tx_timeout (struct net_device *dev, unsigned int txqueue); static netdev_tx_t start_xmit (struct sk_buff *skb, struct net_device *dev); static irqreturn_t rio_interrupt (int irq, void *dev_instance); static void rio_free_tx (struct net_device *dev, int irq); @@ -696,7 +696,7 @@ rio_timer (struct timer_list *t) } static void -rio_tx_timeout (struct net_device *dev) +rio_tx_timeout (struct net_device *dev, unsigned int txqueue) { struct netdev_private *np = netdev_priv(dev); void __iomem *ioaddr = np->ioaddr; diff --git a/drivers/net/ethernet/dlink/sundance.c b/drivers/net/ethernet/dlink/sundance.c index 4a37a69764ce..b91387c456ba 100644 --- a/drivers/net/ethernet/dlink/sundance.c +++ b/drivers/net/ethernet/dlink/sundance.c @@ -432,7 +432,7 @@ static int mdio_wait_link(struct net_device *dev, int wait); static int netdev_open(struct net_device *dev); static void check_duplex(struct net_device *dev); static void netdev_timer(struct timer_list *t); -static void tx_timeout(struct net_device *dev); +static void tx_timeout(struct net_device *dev, unsigned int txqueue); static void init_ring(struct net_device *dev); static netdev_tx_t start_tx(struct sk_buff *skb, struct net_device *dev); static int reset_tx (struct net_device *dev); @@ -969,7 +969,7 @@ static void netdev_timer(struct timer_list *t) add_timer(&np->timer); } -static void tx_timeout(struct net_device *dev) +static void tx_timeout(struct net_device *dev, unsigned int txqueue) { struct netdev_private *np = netdev_priv(dev); void __iomem *ioaddr = np->base; diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c index 39eb7d525043..56f59db6ebf2 100644 --- a/drivers/net/ethernet/emulex/benet/be_main.c +++ b/drivers/net/ethernet/emulex/benet/be_main.c @@ -1417,7 +1417,7 @@ drop: return NETDEV_TX_OK; } -static void be_tx_timeout(struct net_device *netdev) +static void be_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct be_adapter *adapter = netdev_priv(netdev); struct device *dev = &adapter->pdev->dev; diff --git a/drivers/net/ethernet/ethoc.c b/drivers/net/ethernet/ethoc.c index ea4f17f5cce7..66406da16b60 100644 --- a/drivers/net/ethernet/ethoc.c +++ b/drivers/net/ethernet/ethoc.c @@ -869,7 +869,7 @@ static int ethoc_change_mtu(struct net_device *dev, int new_mtu) return -ENOSYS; } -static void ethoc_tx_timeout(struct net_device *dev) +static void ethoc_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct ethoc *priv = netdev_priv(dev); u32 pending = ethoc_read(priv, 
INT_SOURCE); diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c index 8ed85037f021..48b3b72fe02e 100644 --- a/drivers/net/ethernet/faraday/ftgmac100.c +++ b/drivers/net/ethernet/faraday/ftgmac100.c @@ -1545,7 +1545,7 @@ static int ftgmac100_do_ioctl(struct net_device *netdev, struct ifreq *ifr, int return phy_mii_ioctl(netdev->phydev, ifr, cmd); } -static void ftgmac100_tx_timeout(struct net_device *netdev) +static void ftgmac100_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct ftgmac100 *priv = netdev_priv(netdev); diff --git a/drivers/net/ethernet/fealnx.c b/drivers/net/ethernet/fealnx.c index c24fd56a2c71..84f10970299a 100644 --- a/drivers/net/ethernet/fealnx.c +++ b/drivers/net/ethernet/fealnx.c @@ -428,7 +428,7 @@ static void getlinktype(struct net_device *dev); static void getlinkstatus(struct net_device *dev); static void netdev_timer(struct timer_list *t); static void reset_timer(struct timer_list *t); -static void fealnx_tx_timeout(struct net_device *dev); +static void fealnx_tx_timeout(struct net_device *dev, unsigned int txqueue); static void init_ring(struct net_device *dev); static netdev_tx_t start_tx(struct sk_buff *skb, struct net_device *dev); static irqreturn_t intr_handler(int irq, void *dev_instance); @@ -1191,7 +1191,7 @@ static void reset_timer(struct timer_list *t) } -static void fealnx_tx_timeout(struct net_device *dev) +static void fealnx_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct netdev_private *np = netdev_priv(dev); void __iomem *ioaddr = np->mem; diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c index 6a9d12dad5d9..a60fc3cfc06e 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c @@ -288,7 +288,7 @@ static int dpaa_stop(struct net_device *net_dev) return err; } -static void dpaa_tx_timeout(struct net_device *net_dev) +static void dpaa_tx_timeout(struct net_device *net_dev, unsigned int txqueue) { struct dpaa_percpu_priv *percpu_priv; const struct dpaa_priv *priv; diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c index 05c1899f6628..798fed37be46 100644 --- a/drivers/net/ethernet/freescale/fec_main.c +++ b/drivers/net/ethernet/freescale/fec_main.c @@ -1141,7 +1141,7 @@ fec_stop(struct net_device *ndev) static void -fec_timeout(struct net_device *ndev) +fec_timeout(struct net_device *ndev, unsigned int txqueue) { struct fec_enet_private *fep = netdev_priv(ndev); diff --git a/drivers/net/ethernet/freescale/fec_mpc52xx.c b/drivers/net/ethernet/freescale/fec_mpc52xx.c index 30cdb246d020..de5278485062 100644 --- a/drivers/net/ethernet/freescale/fec_mpc52xx.c +++ b/drivers/net/ethernet/freescale/fec_mpc52xx.c @@ -84,7 +84,7 @@ static int debug = -1; /* the above default */ module_param(debug, int, 0); MODULE_PARM_DESC(debug, "debugging messages level"); -static void mpc52xx_fec_tx_timeout(struct net_device *dev) +static void mpc52xx_fec_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct mpc52xx_fec_priv *priv = netdev_priv(dev); unsigned long flags; diff --git a/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c b/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c index 3981c06f082f..80903cd58468 100644 --- a/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c +++ b/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c @@ -641,7 +641,7 @@ static void fs_timeout_work(struct work_struct *work) 
netif_wake_queue(dev); } -static void fs_timeout(struct net_device *dev) +static void fs_timeout(struct net_device *dev, unsigned int txqueue) { struct fs_enet_private *fep = netdev_priv(dev); diff --git a/drivers/net/ethernet/freescale/gianfar.c b/drivers/net/ethernet/freescale/gianfar.c index 72868a28b621..b636d83a7ee9 100644 --- a/drivers/net/ethernet/freescale/gianfar.c +++ b/drivers/net/ethernet/freescale/gianfar.c @@ -2093,7 +2093,7 @@ static void gfar_reset_task(struct work_struct *work) reset_gfar(priv->ndev); } -static void gfar_timeout(struct net_device *dev) +static void gfar_timeout(struct net_device *dev, unsigned int txqueue) { struct gfar_private *priv = netdev_priv(dev); diff --git a/drivers/net/ethernet/freescale/ucc_geth.c b/drivers/net/ethernet/freescale/ucc_geth.c index f839fa94ebdd..0d101c00286f 100644 --- a/drivers/net/ethernet/freescale/ucc_geth.c +++ b/drivers/net/ethernet/freescale/ucc_geth.c @@ -3545,7 +3545,7 @@ static void ucc_geth_timeout_work(struct work_struct *work) * ucc_geth_timeout gets called when a packet has not been * transmitted after a set amount of time. */ -static void ucc_geth_timeout(struct net_device *dev) +static void ucc_geth_timeout(struct net_device *dev, unsigned int txqueue) { struct ucc_geth_private *ugeth = netdev_priv(dev); diff --git a/drivers/net/ethernet/fujitsu/fmvj18x_cs.c b/drivers/net/ethernet/fujitsu/fmvj18x_cs.c index 1eca0fdb9933..a7b7a4aace79 100644 --- a/drivers/net/ethernet/fujitsu/fmvj18x_cs.c +++ b/drivers/net/ethernet/fujitsu/fmvj18x_cs.c @@ -93,7 +93,7 @@ static irqreturn_t fjn_interrupt(int irq, void *dev_id); static void fjn_rx(struct net_device *dev); static void fjn_reset(struct net_device *dev); static void set_rx_mode(struct net_device *dev); -static void fjn_tx_timeout(struct net_device *dev); +static void fjn_tx_timeout(struct net_device *dev, unsigned int txqueue); static const struct ethtool_ops netdev_ethtool_ops; /* @@ -774,7 +774,7 @@ static irqreturn_t fjn_interrupt(int dummy, void *dev_id) /*====================================================================*/ -static void fjn_tx_timeout(struct net_device *dev) +static void fjn_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct local_info *lp = netdev_priv(dev); unsigned int ioaddr = dev->base_addr; diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 9b7a8db9860f..e032563ceefd 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -845,7 +845,7 @@ static void gve_turnup(struct gve_priv *priv) gve_set_napi_enabled(priv); } -static void gve_tx_timeout(struct net_device *dev) +static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct gve_priv *priv = netdev_priv(dev); diff --git a/drivers/net/ethernet/hisilicon/hip04_eth.c b/drivers/net/ethernet/hisilicon/hip04_eth.c index 3e9b6d543c77..dc8dd5fc1559 100644 --- a/drivers/net/ethernet/hisilicon/hip04_eth.c +++ b/drivers/net/ethernet/hisilicon/hip04_eth.c @@ -779,7 +779,7 @@ static int hip04_mac_stop(struct net_device *ndev) return 0; } -static void hip04_timeout(struct net_device *ndev) +static void hip04_timeout(struct net_device *ndev, unsigned int txqueue) { struct hip04_priv *priv = netdev_priv(ndev); diff --git a/drivers/net/ethernet/hisilicon/hix5hd2_gmac.c b/drivers/net/ethernet/hisilicon/hix5hd2_gmac.c index 247de9105d10..4fb776920a93 100644 --- a/drivers/net/ethernet/hisilicon/hix5hd2_gmac.c +++ b/drivers/net/ethernet/hisilicon/hix5hd2_gmac.c @@ -893,7 
+893,7 @@ static void hix5hd2_tx_timeout_task(struct work_struct *work) hix5hd2_net_open(priv->netdev); } -static void hix5hd2_net_timeout(struct net_device *dev) +static void hix5hd2_net_timeout(struct net_device *dev, unsigned int txqueue) { struct hix5hd2_priv *priv = netdev_priv(dev); diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c index 14ab20491fd0..e45553ec114a 100644 --- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c +++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c @@ -1485,7 +1485,7 @@ static int hns_nic_net_stop(struct net_device *ndev) static void hns_tx_timeout_reset(struct hns_nic_priv *priv); #define HNS_TX_TIMEO_LIMIT (40 * HZ) -static void hns_nic_net_timeout(struct net_device *ndev) +static void hns_nic_net_timeout(struct net_device *ndev, unsigned int txqueue) { struct hns_nic_priv *priv = netdev_priv(ndev); diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c index 69545dd6c938..b113b895dab7 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c @@ -1869,7 +1869,7 @@ static bool hns3_get_tx_timeo_queue_info(struct net_device *ndev) return true; } -static void hns3_nic_net_timeout(struct net_device *ndev) +static void hns3_nic_net_timeout(struct net_device *ndev, unsigned int txqueue) { struct hns3_nic_priv *priv = netdev_priv(ndev); struct hnae3_handle *h = priv->ae_handle; diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c index 2411ad270c98..02a14f5e7fe3 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_main.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c @@ -766,7 +766,7 @@ static void hinic_set_rx_mode(struct net_device *netdev) queue_work(nic_dev->workq, &rx_mode_work->work); } -static void hinic_tx_timeout(struct net_device *netdev) +static void hinic_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct hinic_dev *nic_dev = netdev_priv(netdev); diff --git a/drivers/net/ethernet/i825xx/82596.c b/drivers/net/ethernet/i825xx/82596.c index 92929750f832..bef676d93339 100644 --- a/drivers/net/ethernet/i825xx/82596.c +++ b/drivers/net/ethernet/i825xx/82596.c @@ -363,7 +363,7 @@ static netdev_tx_t i596_start_xmit(struct sk_buff *skb, struct net_device *dev); static irqreturn_t i596_interrupt(int irq, void *dev_id); static int i596_close(struct net_device *dev); static void i596_add_cmd(struct net_device *dev, struct i596_cmd *cmd); -static void i596_tx_timeout (struct net_device *dev); +static void i596_tx_timeout (struct net_device *dev, unsigned int txqueue); static void print_eth(unsigned char *buf, char *str); static void set_multicast_list(struct net_device *dev); @@ -1019,7 +1019,7 @@ err_irq_dev: return res; } -static void i596_tx_timeout (struct net_device *dev) +static void i596_tx_timeout (struct net_device *dev, unsigned int txqueue) { struct i596_private *lp = dev->ml_priv; int ioaddr = dev->base_addr; diff --git a/drivers/net/ethernet/i825xx/ether1.c b/drivers/net/ethernet/i825xx/ether1.c index bb3b8adbe4f0..a0bfb509e002 100644 --- a/drivers/net/ethernet/i825xx/ether1.c +++ b/drivers/net/ethernet/i825xx/ether1.c @@ -66,7 +66,7 @@ static netdev_tx_t ether1_sendpacket(struct sk_buff *skb, static irqreturn_t ether1_interrupt(int irq, void *dev_id); static int ether1_close(struct net_device *dev); static void ether1_setmulticastlist(struct net_device *dev); -static void ether1_timeout(struct net_device 
*dev); +static void ether1_timeout(struct net_device *dev, unsigned int txqueue); /* ------------------------------------------------------------------------- */ @@ -650,7 +650,7 @@ ether1_open (struct net_device *dev) } static void -ether1_timeout(struct net_device *dev) +ether1_timeout(struct net_device *dev, unsigned int txqueue) { printk(KERN_WARNING "%s: transmit timeout, network cable problem?\n", dev->name); diff --git a/drivers/net/ethernet/i825xx/lib82596.c b/drivers/net/ethernet/i825xx/lib82596.c index f9742af7f142..b03757e169e4 100644 --- a/drivers/net/ethernet/i825xx/lib82596.c +++ b/drivers/net/ethernet/i825xx/lib82596.c @@ -351,7 +351,7 @@ static netdev_tx_t i596_start_xmit(struct sk_buff *skb, struct net_device *dev); static irqreturn_t i596_interrupt(int irq, void *dev_id); static int i596_close(struct net_device *dev); static void i596_add_cmd(struct net_device *dev, struct i596_cmd *cmd); -static void i596_tx_timeout (struct net_device *dev); +static void i596_tx_timeout (struct net_device *dev, unsigned int txqueue); static void print_eth(unsigned char *buf, char *str); static void set_multicast_list(struct net_device *dev); static inline void ca(struct net_device *dev); @@ -936,7 +936,7 @@ out_remove_rx_bufs: return -EAGAIN; } -static void i596_tx_timeout (struct net_device *dev) +static void i596_tx_timeout (struct net_device *dev, unsigned int txqueue) { struct i596_private *lp = netdev_priv(dev); diff --git a/drivers/net/ethernet/i825xx/sun3_82586.c b/drivers/net/ethernet/i825xx/sun3_82586.c index 1a86184d44c0..4564ee02c95f 100644 --- a/drivers/net/ethernet/i825xx/sun3_82586.c +++ b/drivers/net/ethernet/i825xx/sun3_82586.c @@ -125,7 +125,7 @@ static netdev_tx_t sun3_82586_send_packet(struct sk_buff *, struct net_device *); static struct net_device_stats *sun3_82586_get_stats(struct net_device *dev); static void set_multicast_list(struct net_device *dev); -static void sun3_82586_timeout(struct net_device *dev); +static void sun3_82586_timeout(struct net_device *dev, unsigned int txqueue); #if 0 static void sun3_82586_dump(struct net_device *,void *); #endif @@ -965,7 +965,7 @@ static void startrecv586(struct net_device *dev) WAIT_4_SCB_CMD_RUC(); /* wait for accept cmd. (no timeout!!) 
*/ } -static void sun3_82586_timeout(struct net_device *dev) +static void sun3_82586_timeout(struct net_device *dev, unsigned int txqueue) { struct priv *p = netdev_priv(dev); #ifndef NO_NOPCOMMANDS diff --git a/drivers/net/ethernet/ibm/ehea/ehea_main.c b/drivers/net/ethernet/ibm/ehea/ehea_main.c index 13e30eba5349..0273fb7a9d01 100644 --- a/drivers/net/ethernet/ibm/ehea/ehea_main.c +++ b/drivers/net/ethernet/ibm/ehea/ehea_main.c @@ -2786,7 +2786,7 @@ out: return; } -static void ehea_tx_watchdog(struct net_device *dev) +static void ehea_tx_watchdog(struct net_device *dev, unsigned int txqueue) { struct ehea_port *port = netdev_priv(dev); diff --git a/drivers/net/ethernet/ibm/emac/core.c b/drivers/net/ethernet/ibm/emac/core.c index 2e40425d8a34..b7fc17756c51 100644 --- a/drivers/net/ethernet/ibm/emac/core.c +++ b/drivers/net/ethernet/ibm/emac/core.c @@ -776,7 +776,7 @@ static void emac_reset_work(struct work_struct *work) mutex_unlock(&dev->link_lock); } -static void emac_tx_timeout(struct net_device *ndev) +static void emac_tx_timeout(struct net_device *ndev, unsigned int txqueue) { struct emac_instance *dev = netdev_priv(ndev); diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c index c90080781924..94b9d8913b66 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.c +++ b/drivers/net/ethernet/ibm/ibmvnic.c @@ -2282,7 +2282,7 @@ err: return -ret; } -static void ibmvnic_tx_timeout(struct net_device *dev) +static void ibmvnic_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct ibmvnic_adapter *adapter = netdev_priv(dev); diff --git a/drivers/net/ethernet/intel/e100.c b/drivers/net/ethernet/intel/e100.c index a65d5a9ba7db..1b8d015ebfb0 100644 --- a/drivers/net/ethernet/intel/e100.c +++ b/drivers/net/ethernet/intel/e100.c @@ -2316,7 +2316,7 @@ static void e100_down(struct nic *nic) e100_rx_clean_list(nic); } -static void e100_tx_timeout(struct net_device *netdev) +static void e100_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct nic *nic = netdev_priv(netdev); diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c index aca97b084003..2bced34c19ba 100644 --- a/drivers/net/ethernet/intel/e1000/e1000_main.c +++ b/drivers/net/ethernet/intel/e1000/e1000_main.c @@ -134,7 +134,7 @@ static int e1000_mii_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd); static void e1000_enter_82542_rst(struct e1000_adapter *adapter); static void e1000_leave_82542_rst(struct e1000_adapter *adapter); -static void e1000_tx_timeout(struct net_device *dev); +static void e1000_tx_timeout(struct net_device *dev, unsigned int txqueue); static void e1000_reset_task(struct work_struct *work); static void e1000_smartspeed(struct e1000_adapter *adapter); static int e1000_82547_fifo_workaround(struct e1000_adapter *adapter, @@ -3488,7 +3488,7 @@ exit: * e1000_tx_timeout - Respond to a Tx Hang * @netdev: network interface device structure **/ -static void e1000_tx_timeout(struct net_device *netdev) +static void e1000_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct e1000_adapter *adapter = netdev_priv(netdev); diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c index fe7997c18a10..4c220600ea9a 100644 --- a/drivers/net/ethernet/intel/e1000e/netdev.c +++ b/drivers/net/ethernet/intel/e1000e/netdev.c @@ -5929,7 +5929,7 @@ static netdev_tx_t e1000_xmit_frame(struct sk_buff *skb, * e1000_tx_timeout - Respond to a Tx Hang * @netdev: network interface 
device structure **/ -static void e1000_tx_timeout(struct net_device *netdev) +static void e1000_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct e1000_adapter *adapter = netdev_priv(netdev); diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c b/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c index 68baee04dc58..ba2566e2123d 100644 --- a/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c +++ b/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c @@ -697,7 +697,7 @@ static netdev_tx_t fm10k_xmit_frame(struct sk_buff *skb, struct net_device *dev) * fm10k_tx_timeout - Respond to a Tx Hang * @netdev: network interface device structure **/ -static void fm10k_tx_timeout(struct net_device *netdev) +static void fm10k_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct fm10k_intfc *interface = netdev_priv(netdev); bool real_tx_hang = false; diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index 1ccabeafa44c..4c9ac6c80eb8 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -301,7 +301,7 @@ void i40e_service_event_schedule(struct i40e_pf *pf) * device is munged, not just the one netdev port, so go for the full * reset. **/ -static void i40e_tx_timeout(struct net_device *netdev) +static void i40e_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct i40e_netdev_priv *np = netdev_priv(netdev); struct i40e_vsi *vsi = np->vsi; diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index 821987da5698..0a8824871618 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -159,7 +159,7 @@ void iavf_schedule_reset(struct iavf_adapter *adapter) * iavf_tx_timeout - Respond to a Tx Hang * @netdev: network interface device structure **/ -static void iavf_tx_timeout(struct net_device *netdev) +static void iavf_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct iavf_adapter *adapter = netdev_priv(netdev); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 69bff085acf7..4d5220c9c721 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -5060,7 +5060,7 @@ ice_bridge_setlink(struct net_device *dev, struct nlmsghdr *nlh, * ice_tx_timeout - Respond to a Tx Hang * @netdev: network interface device structure */ -static void ice_tx_timeout(struct net_device *netdev) +static void ice_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_ring *tx_ring = NULL; diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index 98346eb064d5..d11e64a58ed1 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -146,7 +146,7 @@ static int igb_poll(struct napi_struct *, int); static bool igb_clean_tx_irq(struct igb_q_vector *, int); static int igb_clean_rx_irq(struct igb_q_vector *, int); static int igb_ioctl(struct net_device *, struct ifreq *, int cmd); -static void igb_tx_timeout(struct net_device *); +static void igb_tx_timeout(struct net_device *, unsigned int txqueue); static void igb_reset_task(struct work_struct *); static void igb_vlan_mode(struct net_device *netdev, netdev_features_t features); @@ -6184,7 +6184,7 @@ static netdev_tx_t igb_xmit_frame(struct sk_buff *skb, * 
igb_tx_timeout - Respond to a Tx Hang * @netdev: network interface device structure **/ -static void igb_tx_timeout(struct net_device *netdev) +static void igb_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct igb_adapter *adapter = netdev_priv(netdev); struct e1000_hw *hw = &adapter->hw; diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c index 6003dc3ff5fd..5b1800c3ba82 100644 --- a/drivers/net/ethernet/intel/igbvf/netdev.c +++ b/drivers/net/ethernet/intel/igbvf/netdev.c @@ -2375,7 +2375,7 @@ static netdev_tx_t igbvf_xmit_frame(struct sk_buff *skb, * igbvf_tx_timeout - Respond to a Tx Hang * @netdev: network interface device structure **/ -static void igbvf_tx_timeout(struct net_device *netdev) +static void igbvf_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct igbvf_adapter *adapter = netdev_priv(netdev); diff --git a/drivers/net/ethernet/intel/ixgb/ixgb_main.c b/drivers/net/ethernet/intel/ixgb/ixgb_main.c index 3d8c051dd327..b64e91ea3465 100644 --- a/drivers/net/ethernet/intel/ixgb/ixgb_main.c +++ b/drivers/net/ethernet/intel/ixgb/ixgb_main.c @@ -70,7 +70,7 @@ static int ixgb_clean(struct napi_struct *, int); static bool ixgb_clean_rx_irq(struct ixgb_adapter *, int *, int); static void ixgb_alloc_rx_buffers(struct ixgb_adapter *, int); -static void ixgb_tx_timeout(struct net_device *dev); +static void ixgb_tx_timeout(struct net_device *dev, unsigned int txqueue); static void ixgb_tx_timeout_task(struct work_struct *work); static void ixgb_vlan_strip_enable(struct ixgb_adapter *adapter); @@ -1538,7 +1538,7 @@ ixgb_xmit_frame(struct sk_buff *skb, struct net_device *netdev) **/ static void -ixgb_tx_timeout(struct net_device *netdev) +ixgb_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct ixgb_adapter *adapter = netdev_priv(netdev); diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_debugfs.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_debugfs.c index 171cdc552961..5b1cf49df3d3 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_debugfs.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_debugfs.c @@ -166,7 +166,9 @@ static ssize_t ixgbe_dbg_netdev_ops_write(struct file *filp, ixgbe_dbg_netdev_ops_buf[len] = '\0'; if (strncmp(ixgbe_dbg_netdev_ops_buf, "tx_timeout", 10) == 0) { - adapter->netdev->netdev_ops->ndo_tx_timeout(adapter->netdev); + /* TX Queue number below is wrong, but ixgbe does not use it */ + adapter->netdev->netdev_ops->ndo_tx_timeout(adapter->netdev, + UINT_MAX); e_dev_info("tx_timeout called\n"); } else { e_dev_info("Unknown command: %s\n", ixgbe_dbg_netdev_ops_buf); diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c index 25c097cd8100..8129ea2e94a8 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c @@ -6158,7 +6158,7 @@ static void ixgbe_set_eee_capable(struct ixgbe_adapter *adapter) * ixgbe_tx_timeout - Respond to a Tx Hang * @netdev: network interface device structure **/ -static void ixgbe_tx_timeout(struct net_device *netdev) +static void ixgbe_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct ixgbe_adapter *adapter = netdev_priv(netdev); diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index 076f2da36f27..fa286694ac2c 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -250,7 +250,7 @@ static void 
ixgbevf_tx_timeout_reset(struct ixgbevf_adapter *adapter) * ixgbevf_tx_timeout - Respond to a Tx Hang * @netdev: network interface device structure **/ -static void ixgbevf_tx_timeout(struct net_device *netdev) +static void ixgbevf_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct ixgbevf_adapter *adapter = netdev_priv(netdev); diff --git a/drivers/net/ethernet/jme.c b/drivers/net/ethernet/jme.c index 25aa400e2e3c..2e4975572e9f 100644 --- a/drivers/net/ethernet/jme.c +++ b/drivers/net/ethernet/jme.c @@ -2337,7 +2337,7 @@ jme_change_mtu(struct net_device *netdev, int new_mtu) } static void -jme_tx_timeout(struct net_device *netdev) +jme_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct jme_adapter *jme = netdev_priv(netdev); diff --git a/drivers/net/ethernet/korina.c b/drivers/net/ethernet/korina.c index ae195f8adff5..f98d9d627c71 100644 --- a/drivers/net/ethernet/korina.c +++ b/drivers/net/ethernet/korina.c @@ -917,7 +917,7 @@ static void korina_restart_task(struct work_struct *work) enable_irq(lp->rx_irq); } -static void korina_tx_timeout(struct net_device *dev) +static void korina_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct korina_private *lp = netdev_priv(dev); diff --git a/drivers/net/ethernet/lantiq_etop.c b/drivers/net/ethernet/lantiq_etop.c index 6e73ffe6f928..028e3e6222e9 100644 --- a/drivers/net/ethernet/lantiq_etop.c +++ b/drivers/net/ethernet/lantiq_etop.c @@ -594,7 +594,7 @@ err_hw: } static void -ltq_etop_tx_timeout(struct net_device *dev) +ltq_etop_tx_timeout(struct net_device *dev, unsigned int txqueue) { int err; diff --git a/drivers/net/ethernet/marvell/mv643xx_eth.c b/drivers/net/ethernet/marvell/mv643xx_eth.c index d5b644131cff..c43820597218 100644 --- a/drivers/net/ethernet/marvell/mv643xx_eth.c +++ b/drivers/net/ethernet/marvell/mv643xx_eth.c @@ -2590,7 +2590,7 @@ static void tx_timeout_task(struct work_struct *ugly) } } -static void mv643xx_eth_tx_timeout(struct net_device *dev) +static void mv643xx_eth_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct mv643xx_eth_private *mp = netdev_priv(dev); diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c index 3fb7ee3d4d13..1a6877902dd6 100644 --- a/drivers/net/ethernet/marvell/pxa168_eth.c +++ b/drivers/net/ethernet/marvell/pxa168_eth.c @@ -742,7 +742,7 @@ txq_reclaim_end: return released; } -static void pxa168_eth_tx_timeout(struct net_device *dev) +static void pxa168_eth_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct pxa168_eth_private *pep = netdev_priv(dev); diff --git a/drivers/net/ethernet/marvell/skge.c b/drivers/net/ethernet/marvell/skge.c index 095f6c71b4fa..8ca15958e752 100644 --- a/drivers/net/ethernet/marvell/skge.c +++ b/drivers/net/ethernet/marvell/skge.c @@ -2884,7 +2884,7 @@ static void skge_tx_clean(struct net_device *dev) skge->tx_ring.to_clean = e; } -static void skge_tx_timeout(struct net_device *dev) +static void skge_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct skge_port *skge = netdev_priv(dev); diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c index 5f56ee83e3b1..acd1cba987fb 100644 --- a/drivers/net/ethernet/marvell/sky2.c +++ b/drivers/net/ethernet/marvell/sky2.c @@ -2358,7 +2358,7 @@ static void sky2_qlink_intr(struct sky2_hw *hw) /* Transmit timeout is only called if we are running, carrier is up * and tx queue is full (stopped). 
*/ -static void sky2_tx_timeout(struct net_device *dev) +static void sky2_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct sky2_port *sky2 = netdev_priv(dev); struct sky2_hw *hw = sky2->hw; diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 527ad2aadcca..8c6cfd15481c 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -2081,7 +2081,7 @@ static void mtk_dma_free(struct mtk_eth *eth) kfree(eth->scratch_head); } -static void mtk_tx_timeout(struct net_device *dev) +static void mtk_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct mtk_mac *mac = netdev_priv(dev); struct mtk_eth *eth = mac->hw; diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c index 7af75b63245f..71c083960a87 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c @@ -1363,7 +1363,7 @@ static void mlx4_en_delete_rss_steer_rules(struct mlx4_en_priv *priv) } } -static void mlx4_en_tx_timeout(struct net_device *dev) +static void mlx4_en_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct mlx4_en_priv *priv = netdev_priv(dev); struct mlx4_en_dev *mdev = priv->mdev; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 4980e80a5e85..68f1c8cb302b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -4332,7 +4332,7 @@ unlock: rtnl_unlock(); } -static void mlx5e_tx_timeout(struct net_device *dev) +static void mlx5e_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct mlx5e_priv *priv = netdev_priv(dev); diff --git a/drivers/net/ethernet/micrel/ks8842.c b/drivers/net/ethernet/micrel/ks8842.c index da329ca115cc..f3f6dfe3eddc 100644 --- a/drivers/net/ethernet/micrel/ks8842.c +++ b/drivers/net/ethernet/micrel/ks8842.c @@ -1103,7 +1103,7 @@ static void ks8842_tx_timeout_work(struct work_struct *work) __ks8842_start_new_rx_dma(netdev); } -static void ks8842_tx_timeout(struct net_device *netdev) +static void ks8842_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct ks8842_adapter *adapter = netdev_priv(netdev); diff --git a/drivers/net/ethernet/micrel/ksz884x.c b/drivers/net/ethernet/micrel/ksz884x.c index e102e1560ac7..d1444ba36e10 100644 --- a/drivers/net/ethernet/micrel/ksz884x.c +++ b/drivers/net/ethernet/micrel/ksz884x.c @@ -4896,7 +4896,7 @@ unlock: * triggered to free up resources so that the transmit routine can continue * sending out packets. The hardware is reset to correct the problem. 
*/ -static void netdev_tx_timeout(struct net_device *dev) +static void netdev_tx_timeout(struct net_device *dev, unsigned int txqueue) { static unsigned long last_reset; diff --git a/drivers/net/ethernet/microchip/enc28j60.c b/drivers/net/ethernet/microchip/enc28j60.c index 0567e4f387a5..09cdc2f2e7ff 100644 --- a/drivers/net/ethernet/microchip/enc28j60.c +++ b/drivers/net/ethernet/microchip/enc28j60.c @@ -1325,7 +1325,7 @@ static irqreturn_t enc28j60_irq(int irq, void *dev_id) return IRQ_HANDLED; } -static void enc28j60_tx_timeout(struct net_device *ndev) +static void enc28j60_tx_timeout(struct net_device *ndev, unsigned int txqueue) { struct enc28j60_net *priv = netdev_priv(ndev); diff --git a/drivers/net/ethernet/microchip/encx24j600.c b/drivers/net/ethernet/microchip/encx24j600.c index 52c41d11f565..39925e4bf2ec 100644 --- a/drivers/net/ethernet/microchip/encx24j600.c +++ b/drivers/net/ethernet/microchip/encx24j600.c @@ -892,7 +892,7 @@ static netdev_tx_t encx24j600_tx(struct sk_buff *skb, struct net_device *dev) } /* Deal with a transmit timeout */ -static void encx24j600_tx_timeout(struct net_device *dev) +static void encx24j600_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct encx24j600_priv *priv = netdev_priv(dev); diff --git a/drivers/net/ethernet/natsemi/natsemi.c b/drivers/net/ethernet/natsemi/natsemi.c index 1a2634cbbb69..d21d706b83a7 100644 --- a/drivers/net/ethernet/natsemi/natsemi.c +++ b/drivers/net/ethernet/natsemi/natsemi.c @@ -612,7 +612,7 @@ static void undo_cable_magic(struct net_device *dev); static void check_link(struct net_device *dev); static void netdev_timer(struct timer_list *t); static void dump_ring(struct net_device *dev); -static void ns_tx_timeout(struct net_device *dev); +static void ns_tx_timeout(struct net_device *dev, unsigned int txqueue); static int alloc_ring(struct net_device *dev); static void refill_rx(struct net_device *dev); static void init_ring(struct net_device *dev); @@ -1881,7 +1881,7 @@ static void dump_ring(struct net_device *dev) } } -static void ns_tx_timeout(struct net_device *dev) +static void ns_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct netdev_private *np = netdev_priv(dev); void __iomem * ioaddr = ns_ioaddr(dev); diff --git a/drivers/net/ethernet/natsemi/ns83820.c b/drivers/net/ethernet/natsemi/ns83820.c index 6af9a7eee114..be5f62f06785 100644 --- a/drivers/net/ethernet/natsemi/ns83820.c +++ b/drivers/net/ethernet/natsemi/ns83820.c @@ -1549,7 +1549,7 @@ static int ns83820_stop(struct net_device *ndev) return 0; } -static void ns83820_tx_timeout(struct net_device *ndev) +static void ns83820_tx_timeout(struct net_device *ndev, unsigned int txqueue) { struct ns83820 *dev = PRIV(ndev); u32 tx_done_idx; @@ -1603,7 +1603,7 @@ static void ns83820_tx_watch(struct timer_list *t) ndev->name, dev->tx_done_idx, dev->tx_free_idx, atomic_read(&dev->nr_tx_skbs)); - ns83820_tx_timeout(ndev); + ns83820_tx_timeout(ndev, UINT_MAX); } mod_timer(&dev->tx_watchdog, jiffies + 2*HZ); diff --git a/drivers/net/ethernet/natsemi/sonic.c b/drivers/net/ethernet/natsemi/sonic.c index b339125b2f09..fdebc8598b22 100644 --- a/drivers/net/ethernet/natsemi/sonic.c +++ b/drivers/net/ethernet/natsemi/sonic.c @@ -161,7 +161,7 @@ static int sonic_close(struct net_device *dev) return 0; } -static void sonic_tx_timeout(struct net_device *dev) +static void sonic_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct sonic_local *lp = netdev_priv(dev); int i; diff --git a/drivers/net/ethernet/natsemi/sonic.h 
b/drivers/net/ethernet/natsemi/sonic.h index 2b27f7049acb..f1544481aac1 100644 --- a/drivers/net/ethernet/natsemi/sonic.h +++ b/drivers/net/ethernet/natsemi/sonic.h @@ -336,7 +336,7 @@ static int sonic_close(struct net_device *dev); static struct net_device_stats *sonic_get_stats(struct net_device *dev); static void sonic_multicast_list(struct net_device *dev); static int sonic_init(struct net_device *dev); -static void sonic_tx_timeout(struct net_device *dev); +static void sonic_tx_timeout(struct net_device *dev, unsigned int txqueue); static void sonic_msg_init(struct net_device *dev); /* Internal inlines for reading/writing DMA buffers. Note that bus diff --git a/drivers/net/ethernet/neterion/s2io.c b/drivers/net/ethernet/neterion/s2io.c index e0b2bf327905..0ec6b8e8b549 100644 --- a/drivers/net/ethernet/neterion/s2io.c +++ b/drivers/net/ethernet/neterion/s2io.c @@ -7238,7 +7238,7 @@ out_unlock: * void */ -static void s2io_tx_watchdog(struct net_device *dev) +static void s2io_tx_watchdog(struct net_device *dev, unsigned int txqueue) { struct s2io_nic *sp = netdev_priv(dev); struct swStat *swstats = &sp->mac_control.stats_info->sw_stat; diff --git a/drivers/net/ethernet/neterion/s2io.h b/drivers/net/ethernet/neterion/s2io.h index 0a921f30f98f..6fa3159a977f 100644 --- a/drivers/net/ethernet/neterion/s2io.h +++ b/drivers/net/ethernet/neterion/s2io.h @@ -1065,7 +1065,7 @@ static void s2io_txpic_intr_handle(struct s2io_nic *sp); static void tx_intr_handler(struct fifo_info *fifo_data); static void s2io_handle_errors(void * dev_id); -static void s2io_tx_watchdog(struct net_device *dev); +static void s2io_tx_watchdog(struct net_device *dev, unsigned int txqueue); static void s2io_set_multicast(struct net_device *dev); static int rx_osm_handler(struct ring_info *ring_data, struct RxD_t * rxdp); static void s2io_link(struct s2io_nic * sp, int link); diff --git a/drivers/net/ethernet/neterion/vxge/vxge-main.c b/drivers/net/ethernet/neterion/vxge/vxge-main.c index 1d334f2e0a56..9b63574b6202 100644 --- a/drivers/net/ethernet/neterion/vxge/vxge-main.c +++ b/drivers/net/ethernet/neterion/vxge/vxge-main.c @@ -3273,7 +3273,7 @@ static int vxge_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) * This function is triggered if the Tx Queue is stopped * for a pre-defined amount of time when the Interface is still up. */ -static void vxge_tx_watchdog(struct net_device *dev) +static void vxge_tx_watchdog(struct net_device *dev, unsigned int txqueue) { struct vxgedev *vdev; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c index bcdcd6de7dea..bd305fc6ed5a 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c @@ -1321,7 +1321,7 @@ nfp_net_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) netdev_tx_reset_queue(nd_q); } -static void nfp_net_tx_timeout(struct net_device *netdev) +static void nfp_net_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct nfp_net *nn = netdev_priv(netdev); int i; diff --git a/drivers/net/ethernet/nvidia/forcedeth.c b/drivers/net/ethernet/nvidia/forcedeth.c index 6b54cb3b681d..2fc10a36afa4 100644 --- a/drivers/net/ethernet/nvidia/forcedeth.c +++ b/drivers/net/ethernet/nvidia/forcedeth.c @@ -2739,7 +2739,7 @@ static int nv_tx_done_optimized(struct net_device *dev, int limit) * nv_tx_timeout: dev->tx_timeout function * Called with netif_tx_lock held. 
*/ -static void nv_tx_timeout(struct net_device *dev) +static void nv_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct fe_priv *np = netdev_priv(dev); u8 __iomem *base = get_hwbase(dev); diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c index 18e6d87c607b..73ec195fbc30 100644 --- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c +++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c @@ -2271,7 +2271,7 @@ static int pch_gbe_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) * pch_gbe_tx_timeout - Respond to a Tx Hang * @netdev: Network interface device structure */ -static void pch_gbe_tx_timeout(struct net_device *netdev) +static void pch_gbe_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct pch_gbe_adapter *adapter = netdev_priv(netdev); diff --git a/drivers/net/ethernet/packetengines/hamachi.c b/drivers/net/ethernet/packetengines/hamachi.c index eee883a2aa8d..70816d2e2990 100644 --- a/drivers/net/ethernet/packetengines/hamachi.c +++ b/drivers/net/ethernet/packetengines/hamachi.c @@ -548,7 +548,7 @@ static void mdio_write(struct net_device *dev, int phy_id, int location, int val static int hamachi_open(struct net_device *dev); static int netdev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd); static void hamachi_timer(struct timer_list *t); -static void hamachi_tx_timeout(struct net_device *dev); +static void hamachi_tx_timeout(struct net_device *dev, unsigned int txqueue); static void hamachi_init_ring(struct net_device *dev); static netdev_tx_t hamachi_start_xmit(struct sk_buff *skb, struct net_device *dev); @@ -1042,7 +1042,7 @@ static void hamachi_timer(struct timer_list *t) add_timer(&hmp->timer); } -static void hamachi_tx_timeout(struct net_device *dev) +static void hamachi_tx_timeout(struct net_device *dev, unsigned int txqueue) { int i; struct hamachi_private *hmp = netdev_priv(dev); diff --git a/drivers/net/ethernet/packetengines/yellowfin.c b/drivers/net/ethernet/packetengines/yellowfin.c index 5113ee647090..520779f05e1a 100644 --- a/drivers/net/ethernet/packetengines/yellowfin.c +++ b/drivers/net/ethernet/packetengines/yellowfin.c @@ -344,7 +344,7 @@ static void mdio_write(void __iomem *ioaddr, int phy_id, int location, int value static int netdev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd); static int yellowfin_open(struct net_device *dev); static void yellowfin_timer(struct timer_list *t); -static void yellowfin_tx_timeout(struct net_device *dev); +static void yellowfin_tx_timeout(struct net_device *dev, unsigned int txqueue); static int yellowfin_init_ring(struct net_device *dev); static netdev_tx_t yellowfin_start_xmit(struct sk_buff *skb, struct net_device *dev); @@ -677,7 +677,7 @@ static void yellowfin_timer(struct timer_list *t) add_timer(&yp->timer); } -static void yellowfin_tx_timeout(struct net_device *dev) +static void yellowfin_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct yellowfin_private *yp = netdev_priv(dev); void __iomem *ioaddr = yp->base; diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c index ef8258713369..a76108a9c7db 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c @@ -1285,7 +1285,7 @@ static void ionic_tx_timeout_work(struct work_struct *ws) rtnl_unlock(); } -static void ionic_tx_timeout(struct net_device *netdev) +static void ionic_tx_timeout(struct net_device 
*netdev, unsigned int txqueue) { struct ionic_lif *lif = netdev_priv(netdev); diff --git a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c index c692a41e4548..8067ea04d455 100644 --- a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c +++ b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c @@ -49,7 +49,7 @@ static int netxen_nic_open(struct net_device *netdev); static int netxen_nic_close(struct net_device *netdev); static netdev_tx_t netxen_nic_xmit_frame(struct sk_buff *, struct net_device *); -static void netxen_tx_timeout(struct net_device *netdev); +static void netxen_tx_timeout(struct net_device *netdev, unsigned int txqueue); static void netxen_tx_timeout_task(struct work_struct *work); static void netxen_fw_poll_work(struct work_struct *work); static void netxen_schedule_work(struct netxen_adapter *adapter, @@ -2222,7 +2222,7 @@ static void netxen_nic_handle_phy_intr(struct netxen_adapter *adapter) netxen_advert_link_change(adapter, linkup); } -static void netxen_tx_timeout(struct net_device *netdev) +static void netxen_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct netxen_adapter *adapter = netdev_priv(netdev); diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c index b4b8ba00ee01..bb864765c761 100644 --- a/drivers/net/ethernet/qlogic/qla3xxx.c +++ b/drivers/net/ethernet/qlogic/qla3xxx.c @@ -3602,7 +3602,7 @@ static int ql3xxx_set_mac_address(struct net_device *ndev, void *p) return 0; } -static void ql3xxx_tx_timeout(struct net_device *ndev) +static void ql3xxx_tx_timeout(struct net_device *ndev, unsigned int txqueue) { struct ql3_adapter *qdev = netdev_priv(ndev); diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c index c07438db30ba..9dd6cb36f366 100644 --- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c +++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c @@ -56,7 +56,7 @@ static int qlcnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent); static void qlcnic_remove(struct pci_dev *pdev); static int qlcnic_open(struct net_device *netdev); static int qlcnic_close(struct net_device *netdev); -static void qlcnic_tx_timeout(struct net_device *netdev); +static void qlcnic_tx_timeout(struct net_device *netdev, unsigned int txqueue); static void qlcnic_attach_work(struct work_struct *work); static void qlcnic_fwinit_work(struct work_struct *work); @@ -3068,7 +3068,7 @@ static void qlcnic_dump_rings(struct qlcnic_adapter *adapter) } -static void qlcnic_tx_timeout(struct net_device *netdev) +static void qlcnic_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct qlcnic_adapter *adapter = netdev_priv(netdev); diff --git a/drivers/net/ethernet/qualcomm/emac/emac.c b/drivers/net/ethernet/qualcomm/emac/emac.c index 98f92268cbaa..522fad4cb2cd 100644 --- a/drivers/net/ethernet/qualcomm/emac/emac.c +++ b/drivers/net/ethernet/qualcomm/emac/emac.c @@ -282,7 +282,7 @@ static int emac_close(struct net_device *netdev) } /* Respond to a TX hang */ -static void emac_tx_timeout(struct net_device *netdev) +static void emac_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct emac_adapter *adpt = netdev_priv(netdev); diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c index baac016f3ec0..5a3b65a6eb4f 100644 --- a/drivers/net/ethernet/qualcomm/qca_spi.c +++ b/drivers/net/ethernet/qualcomm/qca_spi.c @@ -785,7 +785,7 @@ 
qcaspi_netdev_xmit(struct sk_buff *skb, struct net_device *dev) } static void -qcaspi_netdev_tx_timeout(struct net_device *dev) +qcaspi_netdev_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct qcaspi *qca = netdev_priv(dev); diff --git a/drivers/net/ethernet/qualcomm/qca_uart.c b/drivers/net/ethernet/qualcomm/qca_uart.c index 0981068504fa..375a844cd27c 100644 --- a/drivers/net/ethernet/qualcomm/qca_uart.c +++ b/drivers/net/ethernet/qualcomm/qca_uart.c @@ -248,7 +248,7 @@ out: return NETDEV_TX_OK; } -static void qcauart_netdev_tx_timeout(struct net_device *dev) +static void qcauart_netdev_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct qcauart *qca = netdev_priv(dev); diff --git a/drivers/net/ethernet/rdc/r6040.c b/drivers/net/ethernet/rdc/r6040.c index 274e5b4bc4ac..c23cb61bbd30 100644 --- a/drivers/net/ethernet/rdc/r6040.c +++ b/drivers/net/ethernet/rdc/r6040.c @@ -410,7 +410,7 @@ static void r6040_init_mac_regs(struct net_device *dev) iowrite16(TM2TX, ioaddr + MTPR); } -static void r6040_tx_timeout(struct net_device *dev) +static void r6040_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct r6040_private *priv = netdev_priv(dev); void __iomem *ioaddr = priv->base; diff --git a/drivers/net/ethernet/realtek/8139cp.c b/drivers/net/ethernet/realtek/8139cp.c index 4f910c4f67b0..60d342f82fb3 100644 --- a/drivers/net/ethernet/realtek/8139cp.c +++ b/drivers/net/ethernet/realtek/8139cp.c @@ -1235,7 +1235,7 @@ static int cp_close (struct net_device *dev) return 0; } -static void cp_tx_timeout(struct net_device *dev) +static void cp_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct cp_private *cp = netdev_priv(dev); unsigned long flags; diff --git a/drivers/net/ethernet/realtek/8139too.c b/drivers/net/ethernet/realtek/8139too.c index 55d01266e615..5caeb8368eab 100644 --- a/drivers/net/ethernet/realtek/8139too.c +++ b/drivers/net/ethernet/realtek/8139too.c @@ -642,7 +642,7 @@ static int mdio_read (struct net_device *dev, int phy_id, int location); static void mdio_write (struct net_device *dev, int phy_id, int location, int val); static void rtl8139_start_thread(struct rtl8139_private *tp); -static void rtl8139_tx_timeout (struct net_device *dev); +static void rtl8139_tx_timeout (struct net_device *dev, unsigned int txqueue); static void rtl8139_init_ring (struct net_device *dev); static netdev_tx_t rtl8139_start_xmit (struct sk_buff *skb, struct net_device *dev); @@ -1700,7 +1700,7 @@ static void rtl8139_tx_timeout_task (struct work_struct *work) spin_unlock_bh(&tp->rx_lock); } -static void rtl8139_tx_timeout (struct net_device *dev) +static void rtl8139_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct rtl8139_private *tp = netdev_priv(dev); diff --git a/drivers/net/ethernet/realtek/atp.c b/drivers/net/ethernet/realtek/atp.c index 58e0ca9093d3..9e3b35c97e63 100644 --- a/drivers/net/ethernet/realtek/atp.c +++ b/drivers/net/ethernet/realtek/atp.c @@ -204,7 +204,7 @@ static void net_rx(struct net_device *dev); static void read_block(long ioaddr, int length, unsigned char *buffer, int data_mode); static int net_close(struct net_device *dev); static void set_rx_mode(struct net_device *dev); -static void tx_timeout(struct net_device *dev); +static void tx_timeout(struct net_device *dev, unsigned int txqueue); /* A list of all installed ATP devices, for removing the driver module. 
*/ @@ -533,7 +533,7 @@ static void write_packet(long ioaddr, int length, unsigned char *packet, int pad outb(Ctrl_HNibWrite | Ctrl_SelData | Ctrl_IRQEN, ioaddr + PAR_CONTROL); } -static void tx_timeout(struct net_device *dev) +static void tx_timeout(struct net_device *dev, unsigned int txqueue) { long ioaddr = dev->base_addr; diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c index 67a4d5d45e3a..2cfaa2270319 100644 --- a/drivers/net/ethernet/realtek/r8169_main.c +++ b/drivers/net/ethernet/realtek/r8169_main.c @@ -5435,7 +5435,7 @@ static void rtl_reset_work(struct rtl8169_private *tp) netif_wake_queue(dev); } -static void rtl8169_tx_timeout(struct net_device *dev) +static void rtl8169_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct rtl8169_private *tp = netdev_priv(dev); diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c index 4b13a184bfc7..067ad25553b9 100644 --- a/drivers/net/ethernet/renesas/ravb_main.c +++ b/drivers/net/ethernet/renesas/ravb_main.c @@ -1425,7 +1425,7 @@ out_napi_off: } /* Timeout function for Ethernet AVB */ -static void ravb_tx_timeout(struct net_device *ndev) +static void ravb_tx_timeout(struct net_device *ndev, unsigned int txqueue) { struct ravb_private *priv = netdev_priv(ndev); diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c index e19b49c4013e..cdd8ab2eb910 100644 --- a/drivers/net/ethernet/renesas/sh_eth.c +++ b/drivers/net/ethernet/renesas/sh_eth.c @@ -2478,7 +2478,7 @@ out_napi_off: } /* Timeout function */ -static void sh_eth_tx_timeout(struct net_device *ndev) +static void sh_eth_tx_timeout(struct net_device *ndev, unsigned int txqueue) { struct sh_eth_private *mdp = netdev_priv(ndev); struct sh_eth_rxdesc *rxdesc; diff --git a/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c b/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c index c56fcbb37066..cd6e0de48248 100644 --- a/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c +++ b/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c @@ -1572,7 +1572,7 @@ static int sxgbe_poll(struct napi_struct *napi, int budget) * netdev structure and arrange for the device to be reset to a sane state * in order to transmit a new packet. 
*/ -static void sxgbe_tx_timeout(struct net_device *dev) +static void sxgbe_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct sxgbe_priv_data *priv = netdev_priv(dev); diff --git a/drivers/net/ethernet/seeq/ether3.c b/drivers/net/ethernet/seeq/ether3.c index 632a7c85964d..128ee7cda1ed 100644 --- a/drivers/net/ethernet/seeq/ether3.c +++ b/drivers/net/ethernet/seeq/ether3.c @@ -79,7 +79,7 @@ static netdev_tx_t ether3_sendpacket(struct sk_buff *skb, static irqreturn_t ether3_interrupt (int irq, void *dev_id); static int ether3_close (struct net_device *dev); static void ether3_setmulticastlist (struct net_device *dev); -static void ether3_timeout(struct net_device *dev); +static void ether3_timeout(struct net_device *dev, unsigned int txqueue); #define BUS_16 2 #define BUS_8 1 @@ -450,7 +450,7 @@ static void ether3_setmulticastlist(struct net_device *dev) ether3_outw(priv(dev)->regs.config1 | CFG1_LOCBUFMEM, REG_CONFIG1); } -static void ether3_timeout(struct net_device *dev) +static void ether3_timeout(struct net_device *dev, unsigned int txqueue) { unsigned long flags; diff --git a/drivers/net/ethernet/seeq/sgiseeq.c b/drivers/net/ethernet/seeq/sgiseeq.c index 276c7cae7cee..8507ff242014 100644 --- a/drivers/net/ethernet/seeq/sgiseeq.c +++ b/drivers/net/ethernet/seeq/sgiseeq.c @@ -645,7 +645,7 @@ sgiseeq_start_xmit(struct sk_buff *skb, struct net_device *dev) return NETDEV_TX_OK; } -static void timeout(struct net_device *dev) +static void timeout(struct net_device *dev, unsigned int txqueue) { printk(KERN_NOTICE "%s: transmit timed out, resetting\n", dev->name); sgiseeq_reset(dev); diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c index 992c773620ec..f358709fab67 100644 --- a/drivers/net/ethernet/sfc/efx.c +++ b/drivers/net/ethernet/sfc/efx.c @@ -2395,7 +2395,7 @@ static void efx_net_stats(struct net_device *net_dev, } /* Context: netif_tx_lock held, BHs disabled. */ -static void efx_watchdog(struct net_device *net_dev) +static void efx_watchdog(struct net_device *net_dev, unsigned int txqueue) { struct efx_nic *efx = netdev_priv(net_dev); diff --git a/drivers/net/ethernet/sfc/falcon/efx.c b/drivers/net/ethernet/sfc/falcon/efx.c index eecc348b1c32..bee4cd9d7135 100644 --- a/drivers/net/ethernet/sfc/falcon/efx.c +++ b/drivers/net/ethernet/sfc/falcon/efx.c @@ -2108,7 +2108,7 @@ static void ef4_net_stats(struct net_device *net_dev, } /* Context: netif_tx_lock held, BHs disabled. 
*/ -static void ef4_watchdog(struct net_device *net_dev) +static void ef4_watchdog(struct net_device *net_dev, unsigned int txqueue) { struct ef4_nic *efx = netdev_priv(net_dev); diff --git a/drivers/net/ethernet/sgi/ioc3-eth.c b/drivers/net/ethernet/sgi/ioc3-eth.c index d242906ae233..06637b03deed 100644 --- a/drivers/net/ethernet/sgi/ioc3-eth.c +++ b/drivers/net/ethernet/sgi/ioc3-eth.c @@ -114,7 +114,7 @@ struct ioc3_private { static int ioc3_ioctl(struct net_device *dev, struct ifreq *rq, int cmd); static void ioc3_set_multicast_list(struct net_device *dev); static netdev_tx_t ioc3_start_xmit(struct sk_buff *skb, struct net_device *dev); -static void ioc3_timeout(struct net_device *dev); +static void ioc3_timeout(struct net_device *dev, unsigned int txqueue); static inline unsigned int ioc3_hash(const unsigned char *addr); static void ioc3_start(struct ioc3_private *ip); static inline void ioc3_stop(struct ioc3_private *ip); @@ -1479,7 +1479,7 @@ drop_packet: return NETDEV_TX_OK; } -static void ioc3_timeout(struct net_device *dev) +static void ioc3_timeout(struct net_device *dev, unsigned int txqueue) { struct ioc3_private *ip = netdev_priv(dev); diff --git a/drivers/net/ethernet/sgi/meth.c b/drivers/net/ethernet/sgi/meth.c index 539bc5db989c..0c396ecd3389 100644 --- a/drivers/net/ethernet/sgi/meth.c +++ b/drivers/net/ethernet/sgi/meth.c @@ -90,7 +90,7 @@ struct meth_private { spinlock_t meth_lock; }; -static void meth_tx_timeout(struct net_device *dev); +static void meth_tx_timeout(struct net_device *dev, unsigned int txqueue); static irqreturn_t meth_interrupt(int irq, void *dev_id); /* global, initialized in ip32-setup.c */ @@ -727,7 +727,7 @@ static netdev_tx_t meth_tx(struct sk_buff *skb, struct net_device *dev) /* * Deal with a transmit timeout. 
*/ -static void meth_tx_timeout(struct net_device *dev) +static void meth_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct meth_private *priv = netdev_priv(dev); unsigned long flags; diff --git a/drivers/net/ethernet/silan/sc92031.c b/drivers/net/ethernet/silan/sc92031.c index c7641a236eb8..cb043eb1bdc1 100644 --- a/drivers/net/ethernet/silan/sc92031.c +++ b/drivers/net/ethernet/silan/sc92031.c @@ -1078,7 +1078,7 @@ static void sc92031_set_multicast_list(struct net_device *dev) spin_unlock_bh(&priv->lock); } -static void sc92031_tx_timeout(struct net_device *dev) +static void sc92031_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct sc92031_priv *priv = netdev_priv(dev); diff --git a/drivers/net/ethernet/sis/sis190.c b/drivers/net/ethernet/sis/sis190.c index 5b351beb78cb..5a4b6e3ab38f 100644 --- a/drivers/net/ethernet/sis/sis190.c +++ b/drivers/net/ethernet/sis/sis190.c @@ -1538,7 +1538,7 @@ err_out_0: goto out; } -static void sis190_tx_timeout(struct net_device *dev) +static void sis190_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct sis190_private *tp = netdev_priv(dev); void __iomem *ioaddr = tp->mmio_addr; diff --git a/drivers/net/ethernet/sis/sis900.c b/drivers/net/ethernet/sis/sis900.c index 85eaccbbbac1..81ed7589e33c 100644 --- a/drivers/net/ethernet/sis/sis900.c +++ b/drivers/net/ethernet/sis/sis900.c @@ -222,7 +222,7 @@ static int mdio_read(struct net_device *net_dev, int phy_id, int location); static void mdio_write(struct net_device *net_dev, int phy_id, int location, int val); static void sis900_timer(struct timer_list *t); static void sis900_check_mode (struct net_device *net_dev, struct mii_phy *mii_phy); -static void sis900_tx_timeout(struct net_device *net_dev); +static void sis900_tx_timeout(struct net_device *net_dev, unsigned int txqueue); static void sis900_init_tx_ring(struct net_device *net_dev); static void sis900_init_rx_ring(struct net_device *net_dev); static netdev_tx_t sis900_start_xmit(struct sk_buff *skb, @@ -1537,7 +1537,7 @@ static void sis900_read_mode(struct net_device *net_dev, int *speed, int *duplex * disable interrupts and do some tasks */ -static void sis900_tx_timeout(struct net_device *net_dev) +static void sis900_tx_timeout(struct net_device *net_dev, unsigned int txqueue) { struct sis900_private *sis_priv = netdev_priv(net_dev); void __iomem *ioaddr = sis_priv->ioaddr; diff --git a/drivers/net/ethernet/smsc/epic100.c b/drivers/net/ethernet/smsc/epic100.c index be47d864f8b9..912760e8514c 100644 --- a/drivers/net/ethernet/smsc/epic100.c +++ b/drivers/net/ethernet/smsc/epic100.c @@ -291,7 +291,7 @@ static int mdio_read(struct net_device *dev, int phy_id, int location); static void mdio_write(struct net_device *dev, int phy_id, int loc, int val); static void epic_restart(struct net_device *dev); static void epic_timer(struct timer_list *t); -static void epic_tx_timeout(struct net_device *dev); +static void epic_tx_timeout(struct net_device *dev, unsigned int txqueue); static void epic_init_ring(struct net_device *dev); static netdev_tx_t epic_start_xmit(struct sk_buff *skb, struct net_device *dev); @@ -861,7 +861,7 @@ static void epic_timer(struct timer_list *t) add_timer(&ep->timer); } -static void epic_tx_timeout(struct net_device *dev) +static void epic_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct epic_private *ep = netdev_priv(dev); void __iomem *ioaddr = ep->ioaddr; diff --git a/drivers/net/ethernet/smsc/smc911x.c b/drivers/net/ethernet/smsc/smc911x.c index 
7b65e79d6ae9..186c0bddbe5f 100644 --- a/drivers/net/ethernet/smsc/smc911x.c +++ b/drivers/net/ethernet/smsc/smc911x.c @@ -1245,7 +1245,7 @@ static void smc911x_poll_controller(struct net_device *dev) #endif /* Our watchdog timed out. Called by the networking layer */ -static void smc911x_timeout(struct net_device *dev) +static void smc911x_timeout(struct net_device *dev, unsigned int txqueue) { struct smc911x_local *lp = netdev_priv(dev); int status, mask; diff --git a/drivers/net/ethernet/smsc/smc9194.c b/drivers/net/ethernet/smsc/smc9194.c index d3bb2ba51f40..4b2330deed47 100644 --- a/drivers/net/ethernet/smsc/smc9194.c +++ b/drivers/net/ethernet/smsc/smc9194.c @@ -216,7 +216,7 @@ static int smc_open(struct net_device *dev); /* . Our watchdog timed out. Called by the networking layer */ -static void smc_timeout(struct net_device *dev); +static void smc_timeout(struct net_device *dev, unsigned int txqueue); /* . This is called by the kernel in response to 'ifconfig ethX down'. It @@ -1094,7 +1094,7 @@ static int smc_open(struct net_device *dev) .-------------------------------------------------------- */ -static void smc_timeout(struct net_device *dev) +static void smc_timeout(struct net_device *dev, unsigned int txqueue) { /* If we get here, some higher level has decided we are broken. There should really be a "kick me" function call instead. */ diff --git a/drivers/net/ethernet/smsc/smc91c92_cs.c b/drivers/net/ethernet/smsc/smc91c92_cs.c index a55f430f6a7b..f2a50eb3c1e0 100644 --- a/drivers/net/ethernet/smsc/smc91c92_cs.c +++ b/drivers/net/ethernet/smsc/smc91c92_cs.c @@ -271,7 +271,7 @@ static void smc91c92_release(struct pcmcia_device *link); static int smc_open(struct net_device *dev); static int smc_close(struct net_device *dev); static int smc_ioctl(struct net_device *dev, struct ifreq *rq, int cmd); -static void smc_tx_timeout(struct net_device *dev); +static void smc_tx_timeout(struct net_device *dev, unsigned int txqueue); static netdev_tx_t smc_start_xmit(struct sk_buff *skb, struct net_device *dev); static irqreturn_t smc_interrupt(int irq, void *dev_id); @@ -1178,7 +1178,7 @@ static void smc_hardware_send_packet(struct net_device * dev) /*====================================================================*/ -static void smc_tx_timeout(struct net_device *dev) +static void smc_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct smc_private *smc = netdev_priv(dev); unsigned int ioaddr = dev->base_addr; diff --git a/drivers/net/ethernet/smsc/smc91x.c b/drivers/net/ethernet/smsc/smc91x.c index 3a6761131f4c..90410f9d3b1a 100644 --- a/drivers/net/ethernet/smsc/smc91x.c +++ b/drivers/net/ethernet/smsc/smc91x.c @@ -1321,7 +1321,7 @@ static void smc_poll_controller(struct net_device *dev) #endif /* Our watchdog timed out. Called by the networking layer */ -static void smc_timeout(struct net_device *dev) +static void smc_timeout(struct net_device *dev, unsigned int txqueue) { struct smc_local *lp = netdev_priv(dev); void __iomem *ioaddr = lp->base; diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index bbc65bd332a8..da80866d0371 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -3792,7 +3792,7 @@ static int stmmac_napi_poll_tx(struct napi_struct *napi, int budget) * netdev structure and arrange for the device to be reset to a sane state * in order to transmit a new packet. 
*/ -static void stmmac_tx_timeout(struct net_device *dev) +static void stmmac_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct stmmac_priv *priv = netdev_priv(dev); diff --git a/drivers/net/ethernet/sun/cassini.c b/drivers/net/ethernet/sun/cassini.c index c91876f8c536..6ec9163e232c 100644 --- a/drivers/net/ethernet/sun/cassini.c +++ b/drivers/net/ethernet/sun/cassini.c @@ -2666,7 +2666,7 @@ static void cas_netpoll(struct net_device *dev) } #endif -static void cas_tx_timeout(struct net_device *dev) +static void cas_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct cas *cp = netdev_priv(dev); diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c index f5fd1f3c07cc..9a5004f674c7 100644 --- a/drivers/net/ethernet/sun/niu.c +++ b/drivers/net/ethernet/sun/niu.c @@ -6517,7 +6517,7 @@ static void niu_reset_task(struct work_struct *work) spin_unlock_irqrestore(&np->lock, flags); } -static void niu_tx_timeout(struct net_device *dev) +static void niu_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct niu *np = netdev_priv(dev); diff --git a/drivers/net/ethernet/sun/sunbmac.c b/drivers/net/ethernet/sun/sunbmac.c index e9b757b03b56..c5add0b45eed 100644 --- a/drivers/net/ethernet/sun/sunbmac.c +++ b/drivers/net/ethernet/sun/sunbmac.c @@ -941,7 +941,7 @@ static int bigmac_close(struct net_device *dev) return 0; } -static void bigmac_tx_timeout(struct net_device *dev) +static void bigmac_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct bigmac *bp = netdev_priv(dev); diff --git a/drivers/net/ethernet/sun/sungem.c b/drivers/net/ethernet/sun/sungem.c index 3e7631160384..8358064fbd48 100644 --- a/drivers/net/ethernet/sun/sungem.c +++ b/drivers/net/ethernet/sun/sungem.c @@ -970,7 +970,7 @@ static void gem_poll_controller(struct net_device *dev) } #endif -static void gem_tx_timeout(struct net_device *dev) +static void gem_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct gem *gp = netdev_priv(dev); diff --git a/drivers/net/ethernet/sun/sunhme.c b/drivers/net/ethernet/sun/sunhme.c index d007dfeba5c3..f0fe7bb2a750 100644 --- a/drivers/net/ethernet/sun/sunhme.c +++ b/drivers/net/ethernet/sun/sunhme.c @@ -2246,7 +2246,7 @@ static int happy_meal_close(struct net_device *dev) #define SXD(x) #endif -static void happy_meal_tx_timeout(struct net_device *dev) +static void happy_meal_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct happy_meal *hp = netdev_priv(dev); diff --git a/drivers/net/ethernet/sun/sunqe.c b/drivers/net/ethernet/sun/sunqe.c index 1468fa0a54e9..2102b95ec347 100644 --- a/drivers/net/ethernet/sun/sunqe.c +++ b/drivers/net/ethernet/sun/sunqe.c @@ -544,7 +544,7 @@ static void qe_tx_reclaim(struct sunqe *qep) qep->tx_old = elem; } -static void qe_tx_timeout(struct net_device *dev) +static void qe_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct sunqe *qep = netdev_priv(dev); int tx_full; diff --git a/drivers/net/ethernet/sun/sunvnet_common.c b/drivers/net/ethernet/sun/sunvnet_common.c index 8b94d9ad9e2b..a601a306f9a5 100644 --- a/drivers/net/ethernet/sun/sunvnet_common.c +++ b/drivers/net/ethernet/sun/sunvnet_common.c @@ -1539,7 +1539,7 @@ out_dropped: } EXPORT_SYMBOL_GPL(sunvnet_start_xmit_common); -void sunvnet_tx_timeout_common(struct net_device *dev) +void sunvnet_tx_timeout_common(struct net_device *dev, unsigned int txqueue) { /* XXX Implement me XXX */ } diff --git a/drivers/net/ethernet/sun/sunvnet_common.h b/drivers/net/ethernet/sun/sunvnet_common.h index 
2b808d2482d6..5416a3cb9e7d 100644 --- a/drivers/net/ethernet/sun/sunvnet_common.h +++ b/drivers/net/ethernet/sun/sunvnet_common.h @@ -135,7 +135,7 @@ int sunvnet_open_common(struct net_device *dev); int sunvnet_close_common(struct net_device *dev); void sunvnet_set_rx_mode_common(struct net_device *dev, struct vnet *vp); int sunvnet_set_mac_addr_common(struct net_device *dev, void *p); -void sunvnet_tx_timeout_common(struct net_device *dev); +void sunvnet_tx_timeout_common(struct net_device *dev, unsigned int txqueue); netdev_tx_t sunvnet_start_xmit_common(struct sk_buff *skb, struct net_device *dev, struct vnet_port *(*vnet_tx_port) diff --git a/drivers/net/ethernet/synopsys/dwc-xlgmac-net.c b/drivers/net/ethernet/synopsys/dwc-xlgmac-net.c index a1f5a1e61040..07046a2370b3 100644 --- a/drivers/net/ethernet/synopsys/dwc-xlgmac-net.c +++ b/drivers/net/ethernet/synopsys/dwc-xlgmac-net.c @@ -689,7 +689,7 @@ static int xlgmac_close(struct net_device *netdev) return 0; } -static void xlgmac_tx_timeout(struct net_device *netdev) +static void xlgmac_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct xlgmac_pdata *pdata = netdev_priv(netdev); diff --git a/drivers/net/ethernet/ti/cpmac.c b/drivers/net/ethernet/ti/cpmac.c index 3a655a4dc10e..5e1b8292cd3f 100644 --- a/drivers/net/ethernet/ti/cpmac.c +++ b/drivers/net/ethernet/ti/cpmac.c @@ -797,7 +797,7 @@ static irqreturn_t cpmac_irq(int irq, void *dev_id) return IRQ_HANDLED; } -static void cpmac_tx_timeout(struct net_device *dev) +static void cpmac_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct cpmac_priv *priv = netdev_priv(dev); diff --git a/drivers/net/ethernet/ti/cpsw_priv.c b/drivers/net/ethernet/ti/cpsw_priv.c index 707d5eb480ce..97a058ca60ac 100644 --- a/drivers/net/ethernet/ti/cpsw_priv.c +++ b/drivers/net/ethernet/ti/cpsw_priv.c @@ -272,7 +272,7 @@ void soft_reset(const char *module, void __iomem *reg) WARN(readl_relaxed(reg) & 1, "failed to soft-reset %s\n", module); } -void cpsw_ndo_tx_timeout(struct net_device *ndev) +void cpsw_ndo_tx_timeout(struct net_device *ndev, unsigned int txqueue) { struct cpsw_priv *priv = netdev_priv(ndev); struct cpsw_common *cpsw = priv->cpsw; diff --git a/drivers/net/ethernet/ti/cpsw_priv.h b/drivers/net/ethernet/ti/cpsw_priv.h index bc726356a72c..b8d7b924ee3d 100644 --- a/drivers/net/ethernet/ti/cpsw_priv.h +++ b/drivers/net/ethernet/ti/cpsw_priv.h @@ -449,7 +449,7 @@ int cpsw_rx_poll(struct napi_struct *napi_rx, int budget); void cpsw_rx_vlan_encap(struct sk_buff *skb); void soft_reset(const char *module, void __iomem *reg); void cpsw_set_slave_mac(struct cpsw_slave *slave, struct cpsw_priv *priv); -void cpsw_ndo_tx_timeout(struct net_device *ndev); +void cpsw_ndo_tx_timeout(struct net_device *ndev, unsigned int txqueue); int cpsw_need_resplit(struct cpsw_common *cpsw); int cpsw_ndo_ioctl(struct net_device *dev, struct ifreq *req, int cmd); int cpsw_ndo_set_tx_maxrate(struct net_device *ndev, int queue, u32 rate); diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c index ae27be85e363..75d4e16c692b 100644 --- a/drivers/net/ethernet/ti/davinci_emac.c +++ b/drivers/net/ethernet/ti/davinci_emac.c @@ -983,7 +983,7 @@ fail_tx: * error and re-initialize the TX channel for hardware operation * */ -static void emac_dev_tx_timeout(struct net_device *ndev) +static void emac_dev_tx_timeout(struct net_device *ndev, unsigned int txqueue) { struct emac_priv *priv = netdev_priv(ndev); struct device *emac_dev = &ndev->dev; diff --git 
a/drivers/net/ethernet/ti/netcp_core.c b/drivers/net/ethernet/ti/netcp_core.c index 1b2702f74455..432645e86495 100644 --- a/drivers/net/ethernet/ti/netcp_core.c +++ b/drivers/net/ethernet/ti/netcp_core.c @@ -1811,7 +1811,7 @@ out: return (ret == 0) ? 0 : err; } -static void netcp_ndo_tx_timeout(struct net_device *ndev) +static void netcp_ndo_tx_timeout(struct net_device *ndev, unsigned int txqueue) { struct netcp_intf *netcp = netdev_priv(ndev); unsigned int descs = knav_pool_count(netcp->tx_pool); diff --git a/drivers/net/ethernet/ti/tlan.c b/drivers/net/ethernet/ti/tlan.c index 78f0f2d59e22..ad465202980a 100644 --- a/drivers/net/ethernet/ti/tlan.c +++ b/drivers/net/ethernet/ti/tlan.c @@ -161,7 +161,7 @@ static void tlan_set_multicast_list(struct net_device *); static int tlan_ioctl(struct net_device *dev, struct ifreq *rq, int cmd); static int tlan_probe1(struct pci_dev *pdev, long ioaddr, int irq, int rev, const struct pci_device_id *ent); -static void tlan_tx_timeout(struct net_device *dev); +static void tlan_tx_timeout(struct net_device *dev, unsigned int txqueue); static void tlan_tx_timeout_work(struct work_struct *work); static int tlan_init_one(struct pci_dev *pdev, const struct pci_device_id *ent); @@ -997,7 +997,7 @@ static int tlan_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) * **************************************************************/ -static void tlan_tx_timeout(struct net_device *dev) +static void tlan_tx_timeout(struct net_device *dev, unsigned int txqueue) { TLAN_DBG(TLAN_DEBUG_GNRL, "%s: Transmit timed out.\n", dev->name); @@ -1028,7 +1028,7 @@ static void tlan_tx_timeout_work(struct work_struct *work) struct tlan_priv *priv = container_of(work, struct tlan_priv, tlan_tqueue); - tlan_tx_timeout(priv->dev); + tlan_tx_timeout(priv->dev, UINT_MAX); } diff --git a/drivers/net/ethernet/toshiba/ps3_gelic_net.c b/drivers/net/ethernet/toshiba/ps3_gelic_net.c index 9d9f8acb7ee3..070dd6fa9401 100644 --- a/drivers/net/ethernet/toshiba/ps3_gelic_net.c +++ b/drivers/net/ethernet/toshiba/ps3_gelic_net.c @@ -1405,7 +1405,7 @@ out: * * called, if tx hangs. Schedules a task that resets the interface */ -void gelic_net_tx_timeout(struct net_device *netdev) +void gelic_net_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct gelic_card *card; diff --git a/drivers/net/ethernet/toshiba/ps3_gelic_net.h b/drivers/net/ethernet/toshiba/ps3_gelic_net.h index 051033580f0a..805903dbddcc 100644 --- a/drivers/net/ethernet/toshiba/ps3_gelic_net.h +++ b/drivers/net/ethernet/toshiba/ps3_gelic_net.h @@ -359,7 +359,7 @@ int gelic_net_open(struct net_device *netdev); int gelic_net_stop(struct net_device *netdev); netdev_tx_t gelic_net_xmit(struct sk_buff *skb, struct net_device *netdev); void gelic_net_set_multi(struct net_device *netdev); -void gelic_net_tx_timeout(struct net_device *netdev); +void gelic_net_tx_timeout(struct net_device *netdev, unsigned int txqueue); int gelic_net_setup_netdev(struct net_device *netdev, struct gelic_card *card); /* shared ethtool ops */ diff --git a/drivers/net/ethernet/toshiba/spider_net.c b/drivers/net/ethernet/toshiba/spider_net.c index 538e70810d3d..6576271642c1 100644 --- a/drivers/net/ethernet/toshiba/spider_net.c +++ b/drivers/net/ethernet/toshiba/spider_net.c @@ -2180,7 +2180,7 @@ out: * called, if tx hangs. 
Schedules a task that resets the interface */ static void -spider_net_tx_timeout(struct net_device *netdev) +spider_net_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct spider_net_card *card; diff --git a/drivers/net/ethernet/toshiba/tc35815.c b/drivers/net/ethernet/toshiba/tc35815.c index 12466a72cefc..708de826200e 100644 --- a/drivers/net/ethernet/toshiba/tc35815.c +++ b/drivers/net/ethernet/toshiba/tc35815.c @@ -483,7 +483,7 @@ static void tc35815_txdone(struct net_device *dev); static int tc35815_close(struct net_device *dev); static struct net_device_stats *tc35815_get_stats(struct net_device *dev); static void tc35815_set_multicast_list(struct net_device *dev); -static void tc35815_tx_timeout(struct net_device *dev); +static void tc35815_tx_timeout(struct net_device *dev, unsigned int txqueue); static int tc35815_ioctl(struct net_device *dev, struct ifreq *rq, int cmd); #ifdef CONFIG_NET_POLL_CONTROLLER static void tc35815_poll_controller(struct net_device *dev); @@ -1189,7 +1189,7 @@ static void tc35815_schedule_restart(struct net_device *dev) spin_unlock_irqrestore(&lp->lock, flags); } -static void tc35815_tx_timeout(struct net_device *dev) +static void tc35815_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct tc35815_regs __iomem *tr = (struct tc35815_regs __iomem *)dev->base_addr; diff --git a/drivers/net/ethernet/via/via-rhine.c b/drivers/net/ethernet/via/via-rhine.c index ed12dbd156f0..803247d51fe9 100644 --- a/drivers/net/ethernet/via/via-rhine.c +++ b/drivers/net/ethernet/via/via-rhine.c @@ -506,7 +506,7 @@ static void mdio_write(struct net_device *dev, int phy_id, int location, int val static int rhine_open(struct net_device *dev); static void rhine_reset_task(struct work_struct *work); static void rhine_slow_event_task(struct work_struct *work); -static void rhine_tx_timeout(struct net_device *dev); +static void rhine_tx_timeout(struct net_device *dev, unsigned int txqueue); static netdev_tx_t rhine_start_tx(struct sk_buff *skb, struct net_device *dev); static irqreturn_t rhine_interrupt(int irq, void *dev_instance); @@ -1761,7 +1761,7 @@ out_unlock: mutex_unlock(&rp->task_lock); } -static void rhine_tx_timeout(struct net_device *dev) +static void rhine_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct rhine_private *rp = netdev_priv(dev); void __iomem *ioaddr = rp->base; diff --git a/drivers/net/ethernet/wiznet/w5100.c b/drivers/net/ethernet/wiznet/w5100.c index bede1ff289c5..c0d181a7f83a 100644 --- a/drivers/net/ethernet/wiznet/w5100.c +++ b/drivers/net/ethernet/wiznet/w5100.c @@ -790,7 +790,7 @@ static void w5100_restart_work(struct work_struct *work) w5100_restart(priv->ndev); } -static void w5100_tx_timeout(struct net_device *ndev) +static void w5100_tx_timeout(struct net_device *ndev, unsigned int txqueue) { struct w5100_priv *priv = netdev_priv(ndev); diff --git a/drivers/net/ethernet/wiznet/w5300.c b/drivers/net/ethernet/wiznet/w5300.c index 6ba2747779ce..46aae30c4636 100644 --- a/drivers/net/ethernet/wiznet/w5300.c +++ b/drivers/net/ethernet/wiznet/w5300.c @@ -341,7 +341,7 @@ static void w5300_get_regs(struct net_device *ndev, } } -static void w5300_tx_timeout(struct net_device *ndev) +static void w5300_tx_timeout(struct net_device *ndev, unsigned int txqueue) { struct w5300_priv *priv = netdev_priv(ndev); diff --git a/drivers/net/ethernet/xilinx/xilinx_emaclite.c b/drivers/net/ethernet/xilinx/xilinx_emaclite.c index 0de52e70abcc..0c26f5bcc523 100644 --- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c +++ 
b/drivers/net/ethernet/xilinx/xilinx_emaclite.c @@ -521,7 +521,7 @@ static int xemaclite_set_mac_address(struct net_device *dev, void *address) * * This function is called when Tx time out occurs for Emaclite device. */ -static void xemaclite_tx_timeout(struct net_device *dev) +static void xemaclite_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct net_local *lp = netdev_priv(dev); unsigned long flags; diff --git a/drivers/net/ethernet/xircom/xirc2ps_cs.c b/drivers/net/ethernet/xircom/xirc2ps_cs.c index fd5288ff53b5..480ab7251515 100644 --- a/drivers/net/ethernet/xircom/xirc2ps_cs.c +++ b/drivers/net/ethernet/xircom/xirc2ps_cs.c @@ -288,7 +288,7 @@ struct local_info { */ static netdev_tx_t do_start_xmit(struct sk_buff *skb, struct net_device *dev); -static void xirc_tx_timeout(struct net_device *dev); +static void xirc_tx_timeout(struct net_device *dev, unsigned int txqueue); static void xirc2ps_tx_timeout_task(struct work_struct *work); static void set_addresses(struct net_device *dev); static void set_multicast_list(struct net_device *dev); @@ -1203,7 +1203,7 @@ xirc2ps_tx_timeout_task(struct work_struct *work) } static void -xirc_tx_timeout(struct net_device *dev) +xirc_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct local_info *lp = netdev_priv(dev); dev->stats.tx_errors++; diff --git a/drivers/net/fjes/fjes_main.c b/drivers/net/fjes/fjes_main.c index b517c1af9de0..309a74da2ec3 100644 --- a/drivers/net/fjes/fjes_main.c +++ b/drivers/net/fjes/fjes_main.c @@ -48,7 +48,7 @@ static void fjes_get_stats64(struct net_device *, struct rtnl_link_stats64 *); static int fjes_change_mtu(struct net_device *, int); static int fjes_vlan_rx_add_vid(struct net_device *, __be16 proto, u16); static int fjes_vlan_rx_kill_vid(struct net_device *, __be16 proto, u16); -static void fjes_tx_retry(struct net_device *); +static void fjes_tx_retry(struct net_device *, unsigned int txqueue); static int fjes_acpi_add(struct acpi_device *); static int fjes_acpi_remove(struct acpi_device *); @@ -792,7 +792,7 @@ fjes_xmit_frame(struct sk_buff *skb, struct net_device *netdev) return ret; } -static void fjes_tx_retry(struct net_device *netdev) +static void fjes_tx_retry(struct net_device *netdev, unsigned int txqueue) { struct netdev_queue *queue = netdev_get_tx_queue(netdev, 0); diff --git a/drivers/net/slip/slip.c b/drivers/net/slip/slip.c index 2a91c192659f..317d3a8df316 100644 --- a/drivers/net/slip/slip.c +++ b/drivers/net/slip/slip.c @@ -457,7 +457,7 @@ static void slip_write_wakeup(struct tty_struct *tty) schedule_work(&sl->tx_work); } -static void sl_tx_timeout(struct net_device *dev) +static void sl_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct slip *sl = netdev_priv(dev); diff --git a/drivers/net/usb/catc.c b/drivers/net/usb/catc.c index 1e58702c737f..d387bc7ac1b6 100644 --- a/drivers/net/usb/catc.c +++ b/drivers/net/usb/catc.c @@ -447,7 +447,7 @@ static netdev_tx_t catc_start_xmit(struct sk_buff *skb, return NETDEV_TX_OK; } -static void catc_tx_timeout(struct net_device *netdev) +static void catc_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct catc *catc = netdev_priv(netdev); diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c index ca827802f291..417e42c9fd03 100644 --- a/drivers/net/usb/hso.c +++ b/drivers/net/usb/hso.c @@ -820,7 +820,7 @@ static const struct ethtool_ops ops = { }; /* called when a packet did not ack after watchdogtimeout */ -static void hso_net_tx_timeout(struct net_device *net) +static void 
hso_net_tx_timeout(struct net_device *net, unsigned int txqueue) { struct hso_net *odev = netdev_priv(net); diff --git a/drivers/net/usb/ipheth.c b/drivers/net/usb/ipheth.c index 8c01fbf68a89..c792d65dd7b4 100644 --- a/drivers/net/usb/ipheth.c +++ b/drivers/net/usb/ipheth.c @@ -400,7 +400,7 @@ static int ipheth_tx(struct sk_buff *skb, struct net_device *net) return NETDEV_TX_OK; } -static void ipheth_tx_timeout(struct net_device *net) +static void ipheth_tx_timeout(struct net_device *net, unsigned int txqueue) { struct ipheth_device *dev = netdev_priv(net); diff --git a/drivers/net/usb/kaweth.c b/drivers/net/usb/kaweth.c index 8e210ba4a313..ed01dc964c99 100644 --- a/drivers/net/usb/kaweth.c +++ b/drivers/net/usb/kaweth.c @@ -894,7 +894,7 @@ static void kaweth_async_set_rx_mode(struct kaweth_device *kaweth) /**************************************************************** * kaweth_tx_timeout ****************************************************************/ -static void kaweth_tx_timeout(struct net_device *net) +static void kaweth_tx_timeout(struct net_device *net, unsigned int txqueue) { struct kaweth_device *kaweth = netdev_priv(net); diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c index cf1f3f0a4b9b..0c8b9363366b 100644 --- a/drivers/net/usb/lan78xx.c +++ b/drivers/net/usb/lan78xx.c @@ -3662,7 +3662,7 @@ static void lan78xx_disconnect(struct usb_interface *intf) usb_put_dev(udev); } -static void lan78xx_tx_timeout(struct net_device *net) +static void lan78xx_tx_timeout(struct net_device *net, unsigned int txqueue) { struct lan78xx_net *dev = netdev_priv(net); diff --git a/drivers/net/usb/pegasus.c b/drivers/net/usb/pegasus.c index f7d117d80cfb..8783e2ab3ec0 100644 --- a/drivers/net/usb/pegasus.c +++ b/drivers/net/usb/pegasus.c @@ -693,7 +693,7 @@ static void intr_callback(struct urb *urb) "can't resubmit interrupt urb, %d\n", res); } -static void pegasus_tx_timeout(struct net_device *net) +static void pegasus_tx_timeout(struct net_device *net, unsigned int txqueue) { pegasus_t *pegasus = netdev_priv(net); netif_warn(pegasus, timer, net, "tx timeout\n"); diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c index c5ebf35d2488..9ec1da429514 100644 --- a/drivers/net/usb/r8152.c +++ b/drivers/net/usb/r8152.c @@ -2507,7 +2507,7 @@ static void rtl_drop_queued_tx(struct r8152 *tp) } } -static void rtl8152_tx_timeout(struct net_device *netdev) +static void rtl8152_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct r8152 *tp = netdev_priv(netdev); diff --git a/drivers/net/usb/rtl8150.c b/drivers/net/usb/rtl8150.c index 13e51ccf0214..e7c630d37589 100644 --- a/drivers/net/usb/rtl8150.c +++ b/drivers/net/usb/rtl8150.c @@ -655,7 +655,7 @@ static void disable_net_traffic(rtl8150_t * dev) set_registers(dev, CR, 1, &cr); } -static void rtl8150_tx_timeout(struct net_device *netdev) +static void rtl8150_tx_timeout(struct net_device *netdev, unsigned int txqueue) { rtl8150_t *dev = netdev_priv(netdev); dev_warn(&netdev->dev, "Tx timeout.\n"); diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c index 30e511c2c8d0..bc88923db369 100644 --- a/drivers/net/usb/usbnet.c +++ b/drivers/net/usb/usbnet.c @@ -1293,7 +1293,7 @@ static void tx_complete (struct urb *urb) /*-------------------------------------------------------------------------*/ -void usbnet_tx_timeout (struct net_device *net) +void usbnet_tx_timeout (struct net_device *net, unsigned int txqueue) { struct usbnet *dev = netdev_priv(net); diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c 
b/drivers/net/vmxnet3/vmxnet3_drv.c index 216acf37ca7c..18f152fa0068 100644 --- a/drivers/net/vmxnet3/vmxnet3_drv.c +++ b/drivers/net/vmxnet3/vmxnet3_drv.c @@ -3198,7 +3198,7 @@ vmxnet3_free_intr_resources(struct vmxnet3_adapter *adapter) static void -vmxnet3_tx_timeout(struct net_device *netdev) +vmxnet3_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct vmxnet3_adapter *adapter = netdev_priv(netdev); adapter->tx_timeout_count++; diff --git a/drivers/net/wan/cosa.c b/drivers/net/wan/cosa.c index af539151d663..5d6532ad6b78 100644 --- a/drivers/net/wan/cosa.c +++ b/drivers/net/wan/cosa.c @@ -268,7 +268,7 @@ static int cosa_net_attach(struct net_device *dev, unsigned short encoding, unsigned short parity); static int cosa_net_open(struct net_device *d); static int cosa_net_close(struct net_device *d); -static void cosa_net_timeout(struct net_device *d); +static void cosa_net_timeout(struct net_device *d, unsigned int txqueue); static netdev_tx_t cosa_net_tx(struct sk_buff *skb, struct net_device *d); static char *cosa_net_setup_rx(struct channel_data *channel, int size); static int cosa_net_rx_done(struct channel_data *channel); @@ -670,7 +670,7 @@ static netdev_tx_t cosa_net_tx(struct sk_buff *skb, return NETDEV_TX_OK; } -static void cosa_net_timeout(struct net_device *dev) +static void cosa_net_timeout(struct net_device *dev, unsigned int txqueue) { struct channel_data *chan = dev_to_chan(dev); diff --git a/drivers/net/wan/farsync.c b/drivers/net/wan/farsync.c index 1901ec7948d8..7916efce7188 100644 --- a/drivers/net/wan/farsync.c +++ b/drivers/net/wan/farsync.c @@ -2239,7 +2239,7 @@ fst_attach(struct net_device *dev, unsigned short encoding, unsigned short parit } static void -fst_tx_timeout(struct net_device *dev) +fst_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct fst_port_info *port; struct fst_card_info *card; diff --git a/drivers/net/wan/fsl_ucc_hdlc.c b/drivers/net/wan/fsl_ucc_hdlc.c index ca0f3be2b6bf..308384756e6f 100644 --- a/drivers/net/wan/fsl_ucc_hdlc.c +++ b/drivers/net/wan/fsl_ucc_hdlc.c @@ -1039,7 +1039,7 @@ static const struct dev_pm_ops uhdlc_pm_ops = { #define HDLC_PM_OPS NULL #endif -static void uhdlc_tx_timeout(struct net_device *ndev) +static void uhdlc_tx_timeout(struct net_device *ndev, unsigned int txqueue) { netdev_err(ndev, "%s\n", __func__); } diff --git a/drivers/net/wan/lmc/lmc_main.c b/drivers/net/wan/lmc/lmc_main.c index 0e6a51525d91..a20f467ca48a 100644 --- a/drivers/net/wan/lmc/lmc_main.c +++ b/drivers/net/wan/lmc/lmc_main.c @@ -99,7 +99,7 @@ static int lmc_ifdown(struct net_device * const); static void lmc_watchdog(struct timer_list *t); static void lmc_reset(lmc_softc_t * const sc); static void lmc_dec_reset(lmc_softc_t * const sc); -static void lmc_driver_timeout(struct net_device *dev); +static void lmc_driver_timeout(struct net_device *dev, unsigned int txqueue); /* * linux reserves 16 device specific IOCTLs. 
We call them @@ -2044,7 +2044,7 @@ static void lmc_initcsrs(lmc_softc_t * const sc, lmc_csrptr_t csr_base, /*fold00 lmc_trace(sc->lmc_device, "lmc_initcsrs out"); } -static void lmc_driver_timeout(struct net_device *dev) +static void lmc_driver_timeout(struct net_device *dev, unsigned int txqueue) { lmc_softc_t *sc = dev_to_sc(dev); u32 csr6; diff --git a/drivers/net/wan/x25_asy.c b/drivers/net/wan/x25_asy.c index 914be5847386..69773d228ec1 100644 --- a/drivers/net/wan/x25_asy.c +++ b/drivers/net/wan/x25_asy.c @@ -276,7 +276,7 @@ static void x25_asy_write_wakeup(struct tty_struct *tty) sl->xhead += actual; } -static void x25_asy_timeout(struct net_device *dev) +static void x25_asy_timeout(struct net_device *dev, unsigned int txqueue) { struct x25_asy *sl = netdev_priv(dev); diff --git a/drivers/net/wimax/i2400m/netdev.c b/drivers/net/wimax/i2400m/netdev.c index a5db3c06b646..a7fcbceb6e6b 100644 --- a/drivers/net/wimax/i2400m/netdev.c +++ b/drivers/net/wimax/i2400m/netdev.c @@ -380,7 +380,7 @@ drop: static -void i2400m_tx_timeout(struct net_device *net_dev) +void i2400m_tx_timeout(struct net_device *net_dev, unsigned int txqueue) { /* * We might want to kick the device diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2100.c b/drivers/net/wireless/intel/ipw2x00/ipw2100.c index c4c83ab60cbc..25d7fd6e54bf 100644 --- a/drivers/net/wireless/intel/ipw2x00/ipw2100.c +++ b/drivers/net/wireless/intel/ipw2x00/ipw2100.c @@ -5833,7 +5833,7 @@ static int ipw2100_close(struct net_device *dev) /* * TODO: Fix this function... its just wrong */ -static void ipw2100_tx_timeout(struct net_device *dev) +static void ipw2100_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct ipw2100_priv *priv = libipw_priv(dev); diff --git a/drivers/net/wireless/intersil/hostap/hostap_main.c b/drivers/net/wireless/intersil/hostap/hostap_main.c index 05466281afb6..de97b3304115 100644 --- a/drivers/net/wireless/intersil/hostap/hostap_main.c +++ b/drivers/net/wireless/intersil/hostap/hostap_main.c @@ -761,7 +761,7 @@ static void hostap_set_multicast_list(struct net_device *dev) } -static void prism2_tx_timeout(struct net_device *dev) +static void prism2_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct hostap_interface *iface; local_info_t *local; diff --git a/drivers/net/wireless/intersil/orinoco/main.c b/drivers/net/wireless/intersil/orinoco/main.c index 28dac36d7c4c..00264a14e52c 100644 --- a/drivers/net/wireless/intersil/orinoco/main.c +++ b/drivers/net/wireless/intersil/orinoco/main.c @@ -647,7 +647,7 @@ static void __orinoco_ev_txexc(struct net_device *dev, struct hermes *hw) netif_wake_queue(dev); } -void orinoco_tx_timeout(struct net_device *dev) +void orinoco_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct orinoco_private *priv = ndev_priv(dev); struct net_device_stats *stats = &dev->stats; diff --git a/drivers/net/wireless/intersil/orinoco/orinoco.h b/drivers/net/wireless/intersil/orinoco/orinoco.h index 430862a6a24b..cdd026af100b 100644 --- a/drivers/net/wireless/intersil/orinoco/orinoco.h +++ b/drivers/net/wireless/intersil/orinoco/orinoco.h @@ -207,7 +207,7 @@ int orinoco_open(struct net_device *dev); int orinoco_stop(struct net_device *dev); void orinoco_set_multicast_list(struct net_device *dev); int orinoco_change_mtu(struct net_device *dev, int new_mtu); -void orinoco_tx_timeout(struct net_device *dev); +void orinoco_tx_timeout(struct net_device *dev, unsigned int txqueue); /********************************************************************/ /* Locking and 
synchronization functions */ diff --git a/drivers/net/wireless/intersil/prism54/islpci_eth.c b/drivers/net/wireless/intersil/prism54/islpci_eth.c index 2b8fb07d07e7..8d680250a281 100644 --- a/drivers/net/wireless/intersil/prism54/islpci_eth.c +++ b/drivers/net/wireless/intersil/prism54/islpci_eth.c @@ -473,7 +473,7 @@ islpci_do_reset_and_wake(struct work_struct *work) } void -islpci_eth_tx_timeout(struct net_device *ndev) +islpci_eth_tx_timeout(struct net_device *ndev, unsigned int txqueue) { islpci_private *priv = netdev_priv(ndev); diff --git a/drivers/net/wireless/intersil/prism54/islpci_eth.h b/drivers/net/wireless/intersil/prism54/islpci_eth.h index 61f4b43c6054..e433ccdc526b 100644 --- a/drivers/net/wireless/intersil/prism54/islpci_eth.h +++ b/drivers/net/wireless/intersil/prism54/islpci_eth.h @@ -53,7 +53,7 @@ struct avs_80211_1_header { void islpci_eth_cleanup_transmit(islpci_private *, isl38xx_control_block *); netdev_tx_t islpci_eth_transmit(struct sk_buff *, struct net_device *); int islpci_eth_receive(islpci_private *); -void islpci_eth_tx_timeout(struct net_device *); +void islpci_eth_tx_timeout(struct net_device *, unsigned int txqueue); void islpci_do_reset_and_wake(struct work_struct *); #endif /* _ISL_GEN_H */ diff --git a/drivers/net/wireless/marvell/mwifiex/main.c b/drivers/net/wireless/marvell/mwifiex/main.c index d14e55e3c9da..7d94695e7961 100644 --- a/drivers/net/wireless/marvell/mwifiex/main.c +++ b/drivers/net/wireless/marvell/mwifiex/main.c @@ -1020,7 +1020,7 @@ static void mwifiex_set_multicast_list(struct net_device *dev) * CFG802.11 network device handler for transmission timeout. */ static void -mwifiex_tx_timeout(struct net_device *dev) +mwifiex_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev); diff --git a/drivers/net/wireless/quantenna/qtnfmac/core.c b/drivers/net/wireless/quantenna/qtnfmac/core.c index 5fb598389487..648dfc38bd70 100644 --- a/drivers/net/wireless/quantenna/qtnfmac/core.c +++ b/drivers/net/wireless/quantenna/qtnfmac/core.c @@ -156,7 +156,7 @@ static void qtnf_netdev_get_stats64(struct net_device *ndev, /* Netdev handler for transmission timeout. 
*/ -static void qtnf_netdev_tx_timeout(struct net_device *ndev) +static void qtnf_netdev_tx_timeout(struct net_device *ndev, unsigned int txqueue) { struct qtnf_vif *vif = qtnf_netdev_get_priv(ndev); struct qtnf_wmac *mac; diff --git a/drivers/net/wireless/wl3501_cs.c b/drivers/net/wireless/wl3501_cs.c index 007bf6803293..686161db8706 100644 --- a/drivers/net/wireless/wl3501_cs.c +++ b/drivers/net/wireless/wl3501_cs.c @@ -1285,7 +1285,7 @@ out: return rc; } -static void wl3501_tx_timeout(struct net_device *dev) +static void wl3501_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct net_device_stats *stats = &dev->stats; int rc; diff --git a/drivers/net/wireless/zydas/zd1201.c b/drivers/net/wireless/zydas/zd1201.c index 0db7362bedb4..41641fc2be74 100644 --- a/drivers/net/wireless/zydas/zd1201.c +++ b/drivers/net/wireless/zydas/zd1201.c @@ -830,7 +830,7 @@ static netdev_tx_t zd1201_hard_start_xmit(struct sk_buff *skb, return NETDEV_TX_OK; } -static void zd1201_tx_timeout(struct net_device *dev) +static void zd1201_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct zd1201 *zd = netdev_priv(dev); diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h index 871d44746f5c..a23875a4fa2b 100644 --- a/drivers/s390/net/qeth_core.h +++ b/drivers/s390/net/qeth_core.h @@ -1076,7 +1076,7 @@ void qeth_clear_working_pool_list(struct qeth_card *); void qeth_drain_output_queues(struct qeth_card *card); void qeth_setadp_promisc_mode(struct qeth_card *card, bool enable); int qeth_setadpparms_change_macaddr(struct qeth_card *); -void qeth_tx_timeout(struct net_device *); +void qeth_tx_timeout(struct net_device *, unsigned int txqueue); void qeth_prepare_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob, u16 cmd_length); int qeth_query_switch_attributes(struct qeth_card *card, diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c index b9a2349e4b90..ce7ff1abbef3 100644 --- a/drivers/s390/net/qeth_core_main.c +++ b/drivers/s390/net/qeth_core_main.c @@ -4325,7 +4325,7 @@ int qeth_set_access_ctrl_online(struct qeth_card *card, int fallback) return rc; } -void qeth_tx_timeout(struct net_device *dev) +void qeth_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct qeth_card *card; diff --git a/drivers/staging/ks7010/ks_wlan_net.c b/drivers/staging/ks7010/ks_wlan_net.c index 3cffc8be6656..211dd4a11cac 100644 --- a/drivers/staging/ks7010/ks_wlan_net.c +++ b/drivers/staging/ks7010/ks_wlan_net.c @@ -45,7 +45,7 @@ struct wep_key { * function prototypes */ static int ks_wlan_open(struct net_device *dev); -static void ks_wlan_tx_timeout(struct net_device *dev); +static void ks_wlan_tx_timeout(struct net_device *dev, unsigned int txqueue); static int ks_wlan_start_xmit(struct sk_buff *skb, struct net_device *dev); static int ks_wlan_close(struct net_device *dev); static void ks_wlan_set_rx_mode(struct net_device *dev); @@ -2498,7 +2498,7 @@ int ks_wlan_set_mac_address(struct net_device *dev, void *addr) } static -void ks_wlan_tx_timeout(struct net_device *dev) +void ks_wlan_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct ks_wlan_private *priv = netdev_priv(dev); diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c index 6ad4515311f7..24d20f000435 100644 --- a/drivers/staging/qlge/qlge_main.c +++ b/drivers/staging/qlge/qlge_main.c @@ -4274,7 +4274,7 @@ static int qlge_set_mac_address(struct net_device *ndev, void *p) return status; } -static void qlge_tx_timeout(struct net_device 
*ndev) +static void qlge_tx_timeout(struct net_device *ndev, unsigned int txqueue) { struct ql_adapter *qdev = netdev_priv(ndev); ql_queue_asic_error(qdev); diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c index dace81a7d1ba..a51d627284d1 100644 --- a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c +++ b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c @@ -267,7 +267,7 @@ static short _rtl92e_check_nic_enough_desc(struct net_device *dev, int prio) return 0; } -static void _rtl92e_tx_timeout(struct net_device *dev) +static void _rtl92e_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct r8192_priv *priv = rtllib_priv(dev); diff --git a/drivers/staging/rtl8192u/r8192U_core.c b/drivers/staging/rtl8192u/r8192U_core.c index 7e2cabd16e88..482382a887f8 100644 --- a/drivers/staging/rtl8192u/r8192U_core.c +++ b/drivers/staging/rtl8192u/r8192U_core.c @@ -640,7 +640,7 @@ short check_nic_enough_desc(struct net_device *dev, int queue_index) return (used < MAX_TX_URB); } -static void tx_timeout(struct net_device *dev) +static void tx_timeout(struct net_device *dev, unsigned int txqueue) { struct r8192_priv *priv = ieee80211_priv(dev); diff --git a/drivers/staging/unisys/visornic/visornic_main.c b/drivers/staging/unisys/visornic/visornic_main.c index 1d1440d43002..0433536930a9 100644 --- a/drivers/staging/unisys/visornic/visornic_main.c +++ b/drivers/staging/unisys/visornic/visornic_main.c @@ -1078,7 +1078,7 @@ out_save_flags: * Queue the work and return. Make sure we have not already been informed that * the IO Partition is gone; if so, we will have already timed-out the xmits. */ -static void visornic_xmit_timeout(struct net_device *netdev) +static void visornic_xmit_timeout(struct net_device *netdev, unsigned int txqueue) { struct visornic_devdata *devdata = netdev_priv(netdev); unsigned long flags; diff --git a/drivers/staging/wlan-ng/p80211netdev.c b/drivers/staging/wlan-ng/p80211netdev.c index a70fb84f38f1..b809c0015c0c 100644 --- a/drivers/staging/wlan-ng/p80211netdev.c +++ b/drivers/staging/wlan-ng/p80211netdev.c @@ -101,7 +101,7 @@ static void p80211knetdev_set_multicast_list(struct net_device *dev); static int p80211knetdev_do_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd); static int p80211knetdev_set_mac_address(struct net_device *dev, void *addr); -static void p80211knetdev_tx_timeout(struct net_device *netdev); +static void p80211knetdev_tx_timeout(struct net_device *netdev, unsigned int txqueue); static int p80211_rx_typedrop(struct wlandevice *wlandev, u16 fc); int wlan_watchdog = 5000; @@ -1074,7 +1074,7 @@ static int p80211_rx_typedrop(struct wlandevice *wlandev, u16 fc) return drop; } -static void p80211knetdev_tx_timeout(struct net_device *netdev) +static void p80211knetdev_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct wlandevice *wlandev = netdev->ml_priv; diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c index 36a3eb4ad4c5..f1c90fa2978e 100644 --- a/drivers/tty/n_gsm.c +++ b/drivers/tty/n_gsm.c @@ -2704,7 +2704,7 @@ static netdev_tx_t gsm_mux_net_start_xmit(struct sk_buff *skb, } /* called when a packet did not ack after watchdogtimeout */ -static void gsm_mux_net_tx_timeout(struct net_device *net) +static void gsm_mux_net_tx_timeout(struct net_device *net, unsigned int txqueue) { /* Tell syslog we are hosed. 
*/ dev_dbg(&net->dev, "Tx timed out.\n"); diff --git a/drivers/tty/synclink.c b/drivers/tty/synclink.c index 84f26e43b229..61dc6b4a43d0 100644 --- a/drivers/tty/synclink.c +++ b/drivers/tty/synclink.c @@ -7837,7 +7837,7 @@ static int hdlcdev_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) * * dev pointer to network device structure */ -static void hdlcdev_tx_timeout(struct net_device *dev) +static void hdlcdev_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct mgsl_struct *info = dev_to_port(dev); unsigned long flags; diff --git a/drivers/tty/synclink_gt.c b/drivers/tty/synclink_gt.c index e8a9047de451..5d59e2369c8a 100644 --- a/drivers/tty/synclink_gt.c +++ b/drivers/tty/synclink_gt.c @@ -1682,7 +1682,7 @@ static int hdlcdev_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) * * dev pointer to network device structure */ -static void hdlcdev_tx_timeout(struct net_device *dev) +static void hdlcdev_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct slgt_info *info = dev_to_port(dev); unsigned long flags; diff --git a/drivers/tty/synclinkmp.c b/drivers/tty/synclinkmp.c index fcb91bf7a15b..33181fa6eb18 100644 --- a/drivers/tty/synclinkmp.c +++ b/drivers/tty/synclinkmp.c @@ -1807,7 +1807,7 @@ static int hdlcdev_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) * * dev pointer to network device structure */ -static void hdlcdev_tx_timeout(struct net_device *dev) +static void hdlcdev_tx_timeout(struct net_device *dev, unsigned int txqueue) { SLMP_INFO *info = dev_to_port(dev); unsigned long flags; diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 9ef20389622d..30745068fb39 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -1014,7 +1014,7 @@ int netdev_name_node_alt_destroy(struct net_device *dev, const char *name); * Called when a user wants to change the Maximum Transfer Unit * of a device. * - * void (*ndo_tx_timeout)(struct net_device *dev); + * void (*ndo_tx_timeout)(struct net_device *dev, unsigned int txqueue); * Callback used when the transmitter has not made any progress * for dev->watchdog ticks. 
 *
@@ -1281,7 +1281,8 @@ struct net_device_ops {
 						  int new_mtu);
 	int			(*ndo_neigh_setup)(struct net_device *dev,
 						   struct neigh_parms *);
-	void			(*ndo_tx_timeout) (struct net_device *dev);
+	void			(*ndo_tx_timeout) (struct net_device *dev,
+						   unsigned int txqueue);
 
 	void			(*ndo_get_stats64)(struct net_device *dev,
 						   struct rtnl_link_stats64 *storage);
diff --git a/include/linux/usb/usbnet.h b/include/linux/usb/usbnet.h
index d8860f2d0976..b0bff3083278 100644
--- a/include/linux/usb/usbnet.h
+++ b/include/linux/usb/usbnet.h
@@ -253,7 +253,7 @@ extern int usbnet_open(struct net_device *net);
 extern int usbnet_stop(struct net_device *net);
 extern netdev_tx_t usbnet_start_xmit(struct sk_buff *skb,
 				     struct net_device *net);
-extern void usbnet_tx_timeout(struct net_device *net);
+extern void usbnet_tx_timeout(struct net_device *net, unsigned int txqueue);
 extern int usbnet_change_mtu(struct net_device *net, int new_mtu);
 extern int usbnet_get_endpoints(struct usbnet *, struct usb_interface *);
diff --git a/net/atm/lec.c b/net/atm/lec.c
index 5a77c235a212..b57368e70aab 100644
--- a/net/atm/lec.c
+++ b/net/atm/lec.c
@@ -194,7 +194,7 @@ lec_send(struct atm_vcc *vcc, struct sk_buff *skb)
 	dev->stats.tx_bytes += skb->len;
 }
 
-static void lec_tx_timeout(struct net_device *dev)
+static void lec_tx_timeout(struct net_device *dev, unsigned int txqueue)
 {
 	pr_info("%s\n", dev->name);
 	netif_trans_update(dev);
diff --git a/net/bluetooth/bnep/netdev.c b/net/bluetooth/bnep/netdev.c
index 1d4d7d415730..cc1cff63194f 100644
--- a/net/bluetooth/bnep/netdev.c
+++ b/net/bluetooth/bnep/netdev.c
@@ -112,7 +112,7 @@ static int bnep_net_set_mac_addr(struct net_device *dev, void *arg)
 	return 0;
 }
 
-static void bnep_net_timeout(struct net_device *dev)
+static void bnep_net_timeout(struct net_device *dev, unsigned int txqueue)
 {
 	BT_DBG("net_timeout");
 	netif_wake_queue(dev);
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 5ab696efca95..6c9595f1048a 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -441,7 +441,7 @@ static void dev_watchdog(struct timer_list *t)
 				trace_net_dev_xmit_timeout(dev, i);
 				WARN_ONCE(1, KERN_INFO "NETDEV WATCHDOG: %s (%s): transmit queue %u timed out\n",
 					  dev->name, netdev_drivername(dev), i);
-				dev->netdev_ops->ndo_tx_timeout(dev);
+				dev->netdev_ops->ndo_tx_timeout(dev, i);
 			}
 			if (!mod_timer(&dev->watchdog_timer,
 				       round_jiffies(jiffies +
-- cgit v1.2.3-71-gd317


From 98e8627efcada18ac043a77b9101b4b4c768090b Mon Sep 17 00:00:00 2001
From: Björn Töpel
Date: Fri, 13 Dec 2019 18:51:07 +0100
Subject: bpf: Move trampoline JIT image allocation to a function
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Refactor the image allocation in the BPF trampoline code into a
separate function, so it can be shared with the BPF dispatcher in
upcoming commits.
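To illustrate the intended sharing, a follow-up user could allocate its
JIT image with the new helper roughly as below. This is only a sketch:
the bpf_dispatcher structure and its image member are hypothetical
names here and are not introduced by this patch.

  /* Hypothetical caller of the new helper. bpf_jit_alloc_exec_page()
   * hands back one page that is already executable and intentionally
   * left writeable, so the JIT can rewrite it without flipping ro/rw.
   */
  static int bpf_dispatcher_alloc_image(struct bpf_dispatcher *d)
  {
  	void *image = bpf_jit_alloc_exec_page();

  	if (!image)
  		return -ENOMEM;

  	d->image = image;
  	return 0;
  }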
Signed-off-by: Björn Töpel Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/20191213175112.30208-2-bjorn.topel@gmail.com --- include/linux/bpf.h | 1 + kernel/bpf/trampoline.c | 24 +++++++++++++++++------- 2 files changed, 18 insertions(+), 7 deletions(-) (limited to 'include') diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 35903f148be5..5d744828b399 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -475,6 +475,7 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key); int bpf_trampoline_link_prog(struct bpf_prog *prog); int bpf_trampoline_unlink_prog(struct bpf_prog *prog); void bpf_trampoline_put(struct bpf_trampoline *tr); +void *bpf_jit_alloc_exec_page(void); #else static inline struct bpf_trampoline *bpf_trampoline_lookup(u64 key) { diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index 7e89f1f49d77..5ee301ddbd00 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -13,6 +13,22 @@ static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE]; /* serializes access to trampoline_table */ static DEFINE_MUTEX(trampoline_mutex); +void *bpf_jit_alloc_exec_page(void) +{ + void *image; + + image = bpf_jit_alloc_exec(PAGE_SIZE); + if (!image) + return NULL; + + set_vm_flush_reset_perms(image); + /* Keep image as writeable. The alternative is to keep flipping ro/rw + * every time a new program is attached or detached. + */ + set_memory_x((long)image, 1); + return image; +} + struct bpf_trampoline *bpf_trampoline_lookup(u64 key) { struct bpf_trampoline *tr; @@ -33,7 +49,7 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key) goto out; /* is_root was checked earlier. No need for bpf_jit_charge_modmem() */ - image = bpf_jit_alloc_exec(PAGE_SIZE); + image = bpf_jit_alloc_exec_page(); if (!image) { kfree(tr); tr = NULL; @@ -47,12 +63,6 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key) goto out; mutex_init(&tr->mutex); for (i = 0; i < BPF_TRAMP_MAX; i++) INIT_HLIST_HEAD(&tr->progs_hlist[i]); - - set_vm_flush_reset_perms(image); - /* Keep image as writeable. The alternative is to keep flipping ro/rw - * everytime new program is attached or detached. - */ - set_memory_x((long)image, 1); tr->image = image; out: mutex_unlock(&trampoline_mutex); -- cgit v1.2.3-71-gd317 From 75ccbef6369e94ecac696a152a998a978d41376b Mon Sep 17 00:00:00 2001 From: Björn Töpel Date: Fri, 13 Dec 2019 18:51:08 +0100 Subject: bpf: Introduce BPF dispatcher MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The BPF dispatcher is a multi-way branch code generator, mainly targeted for XDP programs. When an XDP program is executed via bpf_prog_run_xdp(), it is invoked via an indirect call. The indirect call has a substantial performance impact when retpolines are enabled. The dispatcher transforms indirect calls into direct calls, and therefore avoids the retpoline. The dispatcher is generated using the BPF JIT, and relies on text poking provided by bpf_arch_text_poke(). The dispatcher hijacks a trampoline function via the __fentry__ nop of the trampoline. One dispatcher instance currently supports up to 64 dispatch points. A user creates a dispatcher with its corresponding trampoline with the DEFINE_BPF_DISPATCHER macro.
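A rough usage sketch of the macros and API this patch introduces (the dispatcher name and programs are hypothetical; the real XDP wiring follows in a later commit of this series):

	/* Define one dispatcher instance plus its trampoline function;
	 * DECLARE_BPF_DISPATCHER(my_disp) exposes it from a header.
	 */
	DEFINE_BPF_DISPATCHER(my_disp)

	/* Patch the generated dispatch code: branch to new_prog
	 * instead of old_prog.
	 */
	bpf_dispatcher_change_prog(BPF_DISPATCHER_PTR(my_disp),
				   old_prog, new_prog);

	/* Invoke a program through the trampoline; once programs are
	 * attached, the trampoline's __fentry__ nop is replaced with a
	 * jump into the generated direct-call code.
	 */
	unsigned int ret = BPF_DISPATCHER_FUNC(my_disp)(ctx, prog->insnsi,
							prog->bpf_func);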
Signed-off-by: Björn Töpel Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/20191213175112.30208-3-bjorn.topel@gmail.com --- arch/x86/net/bpf_jit_comp.c | 122 ++++++++++++++++++++++++++++++++++ include/linux/bpf.h | 56 ++++++++++++++++ kernel/bpf/Makefile | 1 + kernel/bpf/dispatcher.c | 158 ++++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 337 insertions(+) create mode 100644 kernel/bpf/dispatcher.c (limited to 'include') diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index b8be18427277..3ce7ad41bd6f 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -10,10 +10,12 @@ #include #include #include +#include #include #include #include #include +#include static u8 *emit_code(u8 *ptr, u32 bytes, unsigned int len) { @@ -1530,6 +1532,126 @@ int arch_prepare_bpf_trampoline(void *image, struct btf_func_model *m, u32 flags return 0; } +static int emit_cond_near_jump(u8 **pprog, void *func, void *ip, u8 jmp_cond) +{ + u8 *prog = *pprog; + int cnt = 0; + s64 offset; + + offset = func - (ip + 2 + 4); + if (!is_simm32(offset)) { + pr_err("Target %p is out of range\n", func); + return -EINVAL; + } + EMIT2_off32(0x0F, jmp_cond + 0x10, offset); + *pprog = prog; + return 0; +} + +static int emit_fallback_jump(u8 **pprog) +{ + u8 *prog = *pprog; + int err = 0; + +#ifdef CONFIG_RETPOLINE + /* Note that this assumes the the compiler uses external + * thunks for indirect calls. Both clang and GCC use the same + * naming convention for external thunks. + */ + err = emit_jump(&prog, __x86_indirect_thunk_rdx, prog); +#else + int cnt = 0; + + EMIT2(0xFF, 0xE2); /* jmp rdx */ +#endif + *pprog = prog; + return err; +} + +static int emit_bpf_dispatcher(u8 **pprog, int a, int b, s64 *progs) +{ + int pivot, err, jg_bytes = 1, cnt = 0; + u8 *jg_reloc, *prog = *pprog; + s64 jg_offset; + + if (a == b) { + /* Leaf node of recursion, i.e. not a range of indices + * anymore. + */ + EMIT1(add_1mod(0x48, BPF_REG_3)); /* cmp rdx,func */ + if (!is_simm32(progs[a])) + return -1; + EMIT2_off32(0x81, add_1reg(0xF8, BPF_REG_3), + progs[a]); + err = emit_cond_near_jump(&prog, /* je func */ + (void *)progs[a], prog, + X86_JE); + if (err) + return err; + + err = emit_fallback_jump(&prog); /* jmp thunk/indirect */ + if (err) + return err; + + *pprog = prog; + return 0; + } + + /* Not a leaf node, so we pivot, and recursively descend into + * the lower and upper ranges. + */ + pivot = (b - a) / 2; + EMIT1(add_1mod(0x48, BPF_REG_3)); /* cmp rdx,func */ + if (!is_simm32(progs[a + pivot])) + return -1; + EMIT2_off32(0x81, add_1reg(0xF8, BPF_REG_3), progs[a + pivot]); + + if (pivot > 2) { /* jg upper_part */ + /* Require near jump. 
*/ + jg_bytes = 4; + EMIT2_off32(0x0F, X86_JG + 0x10, 0); + } else { + EMIT2(X86_JG, 0); + } + jg_reloc = prog; + + err = emit_bpf_dispatcher(&prog, a, a + pivot, /* emit lower_part */ + progs); + if (err) + return err; + + jg_offset = prog - jg_reloc; + emit_code(jg_reloc - jg_bytes, jg_offset, jg_bytes); + + err = emit_bpf_dispatcher(&prog, a + pivot + 1, /* emit upper_part */ + b, progs); + if (err) + return err; + + *pprog = prog; + return 0; +} + +static int cmp_ips(const void *a, const void *b) +{ + const s64 *ipa = a; + const s64 *ipb = b; + + if (*ipa > *ipb) + return 1; + if (*ipa < *ipb) + return -1; + return 0; +} + +int arch_prepare_bpf_dispatcher(void *image, s64 *funcs, int num_funcs) +{ + u8 *prog = image; + + sort(funcs, num_funcs, sizeof(funcs[0]), cmp_ips, NULL); + return emit_bpf_dispatcher(&prog, 0, num_funcs - 1, funcs); +} + struct x64_jit_data { struct bpf_binary_header *header; int *addrs; diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 5d744828b399..53ae4a50abe4 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -470,12 +470,61 @@ struct bpf_trampoline { void *image; u64 selector; }; + +#define BPF_DISPATCHER_MAX 64 /* Fits in 2048B */ + +struct bpf_dispatcher_prog { + struct bpf_prog *prog; + refcount_t users; +}; + +struct bpf_dispatcher { + /* dispatcher mutex */ + struct mutex mutex; + void *func; + struct bpf_dispatcher_prog progs[BPF_DISPATCHER_MAX]; + int num_progs; + void *image; + u32 image_off; +}; + #ifdef CONFIG_BPF_JIT struct bpf_trampoline *bpf_trampoline_lookup(u64 key); int bpf_trampoline_link_prog(struct bpf_prog *prog); int bpf_trampoline_unlink_prog(struct bpf_prog *prog); void bpf_trampoline_put(struct bpf_trampoline *tr); void *bpf_jit_alloc_exec_page(void); +#define BPF_DISPATCHER_INIT(name) { \ + .mutex = __MUTEX_INITIALIZER(name.mutex), \ + .func = &name##func, \ + .progs = {}, \ + .num_progs = 0, \ + .image = NULL, \ + .image_off = 0 \ +} + +#define DEFINE_BPF_DISPATCHER(name) \ + noinline unsigned int name##func( \ + const void *ctx, \ + const struct bpf_insn *insnsi, \ + unsigned int (*bpf_func)(const void *, \ + const struct bpf_insn *)) \ + { \ + return bpf_func(ctx, insnsi); \ + } \ + EXPORT_SYMBOL(name##func); \ + struct bpf_dispatcher name = BPF_DISPATCHER_INIT(name); +#define DECLARE_BPF_DISPATCHER(name) \ + unsigned int name##func( \ + const void *ctx, \ + const struct bpf_insn *insnsi, \ + unsigned int (*bpf_func)(const void *, \ + const struct bpf_insn *)); \ + extern struct bpf_dispatcher name; +#define BPF_DISPATCHER_FUNC(name) name##func +#define BPF_DISPATCHER_PTR(name) (&name) +void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from, + struct bpf_prog *to); #else static inline struct bpf_trampoline *bpf_trampoline_lookup(u64 key) { @@ -490,6 +539,13 @@ static inline int bpf_trampoline_unlink_prog(struct bpf_prog *prog) return -ENOTSUPP; } static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {} +#define DEFINE_BPF_DISPATCHER(name) +#define DECLARE_BPF_DISPATCHER(name) +#define BPF_DISPATCHER_FUNC(name) bpf_dispatcher_nopfunc +#define BPF_DISPATCHER_PTR(name) NULL +static inline void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, + struct bpf_prog *from, + struct bpf_prog *to) {} #endif struct bpf_func_info_aux { diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile index 3f671bf617e8..d4f330351f87 100644 --- a/kernel/bpf/Makefile +++ b/kernel/bpf/Makefile @@ -8,6 +8,7 @@ obj-$(CONFIG_BPF_SYSCALL) += local_storage.o queue_stack_maps.o obj-$(CONFIG_BPF_SYSCALL) 
+= disasm.o obj-$(CONFIG_BPF_JIT) += trampoline.o obj-$(CONFIG_BPF_SYSCALL) += btf.o +obj-$(CONFIG_BPF_JIT) += dispatcher.o ifeq ($(CONFIG_NET),y) obj-$(CONFIG_BPF_SYSCALL) += devmap.o obj-$(CONFIG_BPF_SYSCALL) += cpumap.o diff --git a/kernel/bpf/dispatcher.c b/kernel/bpf/dispatcher.c new file mode 100644 index 000000000000..204ee61a3904 --- /dev/null +++ b/kernel/bpf/dispatcher.c @@ -0,0 +1,158 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2019 Intel Corporation. */ + +#include +#include +#include + +/* The BPF dispatcher is a multiway branch code generator. The + * dispatcher is a mechanism to avoid the performance penalty of an + * indirect call, which is expensive when retpolines are enabled. A + * dispatch client registers a BPF program into the dispatcher, and if + * there is available room in the dispatcher a direct call to the BPF + * program will be generated. All calls to the BPF programs called via + * the dispatcher will then be a direct call, instead of an + * indirect. The dispatcher hijacks a trampoline function it via the + * __fentry__ of the trampoline. The trampoline function has the + * following signature: + * + * unsigned int trampoline(const void *ctx, const struct bpf_insn *insnsi, + * unsigned int (*bpf_func)(const void *, + * const struct bpf_insn *)); + */ + +static struct bpf_dispatcher_prog *bpf_dispatcher_find_prog( + struct bpf_dispatcher *d, struct bpf_prog *prog) +{ + int i; + + for (i = 0; i < BPF_DISPATCHER_MAX; i++) { + if (prog == d->progs[i].prog) + return &d->progs[i]; + } + return NULL; +} + +static struct bpf_dispatcher_prog *bpf_dispatcher_find_free( + struct bpf_dispatcher *d) +{ + return bpf_dispatcher_find_prog(d, NULL); +} + +static bool bpf_dispatcher_add_prog(struct bpf_dispatcher *d, + struct bpf_prog *prog) +{ + struct bpf_dispatcher_prog *entry; + + if (!prog) + return false; + + entry = bpf_dispatcher_find_prog(d, prog); + if (entry) { + refcount_inc(&entry->users); + return false; + } + + entry = bpf_dispatcher_find_free(d); + if (!entry) + return false; + + bpf_prog_inc(prog); + entry->prog = prog; + refcount_set(&entry->users, 1); + d->num_progs++; + return true; +} + +static bool bpf_dispatcher_remove_prog(struct bpf_dispatcher *d, + struct bpf_prog *prog) +{ + struct bpf_dispatcher_prog *entry; + + if (!prog) + return false; + + entry = bpf_dispatcher_find_prog(d, prog); + if (!entry) + return false; + + if (refcount_dec_and_test(&entry->users)) { + entry->prog = NULL; + bpf_prog_put(prog); + d->num_progs--; + return true; + } + return false; +} + +int __weak arch_prepare_bpf_dispatcher(void *image, s64 *funcs, int num_funcs) +{ + return -ENOTSUPP; +} + +static int bpf_dispatcher_prepare(struct bpf_dispatcher *d, void *image) +{ + s64 ips[BPF_DISPATCHER_MAX] = {}, *ipsp = &ips[0]; + int i; + + for (i = 0; i < BPF_DISPATCHER_MAX; i++) { + if (d->progs[i].prog) + *ipsp++ = (s64)(uintptr_t)d->progs[i].prog->bpf_func; + } + return arch_prepare_bpf_dispatcher(image, &ips[0], d->num_progs); +} + +static void bpf_dispatcher_update(struct bpf_dispatcher *d, int prev_num_progs) +{ + void *old, *new; + u32 noff; + int err; + + if (!prev_num_progs) { + old = NULL; + noff = 0; + } else { + old = d->image + d->image_off; + noff = d->image_off ^ (PAGE_SIZE / 2); + } + + new = d->num_progs ? 
d->image + noff : NULL; + if (new) { + if (bpf_dispatcher_prepare(d, new)) + return; + } + + err = bpf_arch_text_poke(d->func, BPF_MOD_JUMP, old, new); + if (err || !new) + return; + + d->image_off = noff; +} + +void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from, + struct bpf_prog *to) +{ + bool changed = false; + int prev_num_progs; + + if (from == to) + return; + + mutex_lock(&d->mutex); + if (!d->image) { + d->image = bpf_jit_alloc_exec_page(); + if (!d->image) + goto out; + } + + prev_num_progs = d->num_progs; + changed |= bpf_dispatcher_remove_prog(d, from); + changed |= bpf_dispatcher_add_prog(d, to); + + if (!changed) + goto out; + + bpf_dispatcher_update(d, prev_num_progs); +out: + mutex_unlock(&d->mutex); +} -- cgit v1.2.3-71-gd317 From 7e6897f95935973c3253fd756135b5ea58043dc8 Mon Sep 17 00:00:00 2001 From: Björn Töpel Date: Fri, 13 Dec 2019 18:51:09 +0100 Subject: bpf, xdp: Start using the BPF dispatcher for XDP MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This commit adds a BPF dispatcher for XDP. The dispatcher is updated from the XDP control-path, dev_xdp_install(), and used when an XDP program is run via bpf_prog_run_xdp(). Signed-off-by: Björn Töpel Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/20191213175112.30208-4-bjorn.topel@gmail.com --- include/linux/bpf.h | 15 +++++++++++++++ include/linux/filter.h | 40 ++++++++++++++++++++++++---------------- kernel/bpf/syscall.c | 26 ++++++++++++++++++-------- net/core/dev.c | 19 ++++++++++++++++++- net/core/filter.c | 8 ++++++++ 5 files changed, 83 insertions(+), 25 deletions(-) (limited to 'include') diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 53ae4a50abe4..5970989b99d1 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -488,6 +488,14 @@ struct bpf_dispatcher { u32 image_off; }; +static __always_inline unsigned int bpf_dispatcher_nopfunc( + const void *ctx, + const struct bpf_insn *insnsi, + unsigned int (*bpf_func)(const void *, + const struct bpf_insn *)) +{ + return bpf_func(ctx, insnsi); +} #ifdef CONFIG_BPF_JIT struct bpf_trampoline *bpf_trampoline_lookup(u64 key); int bpf_trampoline_link_prog(struct bpf_prog *prog); @@ -997,6 +1005,8 @@ int btf_distill_func_proto(struct bpf_verifier_log *log, int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog); +struct bpf_prog *bpf_prog_by_id(u32 id); + #else /* !CONFIG_BPF_SYSCALL */ static inline struct bpf_prog *bpf_prog_get(u32 ufd) { @@ -1128,6 +1138,11 @@ static inline int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog, static inline void bpf_map_put(struct bpf_map *map) { } + +static inline struct bpf_prog *bpf_prog_by_id(u32 id) +{ + return ERR_PTR(-ENOTSUPP); +} #endif /* CONFIG_BPF_SYSCALL */ static inline struct bpf_prog *bpf_prog_get_type(u32 ufd, diff --git a/include/linux/filter.h b/include/linux/filter.h index a141cb07e76a..37ac7025031d 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -559,23 +559,26 @@ struct sk_filter { DECLARE_STATIC_KEY_FALSE(bpf_stats_enabled_key); -#define BPF_PROG_RUN(prog, ctx) ({ \ - u32 ret; \ - cant_sleep(); \ - if (static_branch_unlikely(&bpf_stats_enabled_key)) { \ - struct bpf_prog_stats *stats; \ - u64 start = sched_clock(); \ - ret = (*(prog)->bpf_func)(ctx, (prog)->insnsi); \ - stats = this_cpu_ptr(prog->aux->stats); \ - u64_stats_update_begin(&stats->syncp); \ - stats->cnt++; \ - stats->nsecs += sched_clock() - start; \ - u64_stats_update_end(&stats->syncp); \ - } else { \ 
- ret = (*(prog)->bpf_func)(ctx, (prog)->insnsi); \ - } \ +#define __BPF_PROG_RUN(prog, ctx, dfunc) ({ \ + u32 ret; \ + cant_sleep(); \ + if (static_branch_unlikely(&bpf_stats_enabled_key)) { \ + struct bpf_prog_stats *stats; \ + u64 start = sched_clock(); \ + ret = dfunc(ctx, (prog)->insnsi, (prog)->bpf_func); \ + stats = this_cpu_ptr(prog->aux->stats); \ + u64_stats_update_begin(&stats->syncp); \ + stats->cnt++; \ + stats->nsecs += sched_clock() - start; \ + u64_stats_update_end(&stats->syncp); \ + } else { \ + ret = dfunc(ctx, (prog)->insnsi, (prog)->bpf_func); \ + } \ ret; }) +#define BPF_PROG_RUN(prog, ctx) __BPF_PROG_RUN(prog, ctx, \ + bpf_dispatcher_nopfunc) + #define BPF_SKB_CB_LEN QDISC_CB_PRIV_LEN struct bpf_skb_data_end { @@ -699,6 +702,8 @@ static inline u32 bpf_prog_run_clear_cb(const struct bpf_prog *prog, return res; } +DECLARE_BPF_DISPATCHER(bpf_dispatcher_xdp) + static __always_inline u32 bpf_prog_run_xdp(const struct bpf_prog *prog, struct xdp_buff *xdp) { @@ -708,9 +713,12 @@ static __always_inline u32 bpf_prog_run_xdp(const struct bpf_prog *prog, * already takes rcu_read_lock() when fetching the program, so * it's not necessary here anymore. */ - return BPF_PROG_RUN(prog, xdp); + return __BPF_PROG_RUN(prog, xdp, + BPF_DISPATCHER_FUNC(bpf_dispatcher_xdp)); } +void bpf_prog_change_xdp(struct bpf_prog *prev_prog, struct bpf_prog *prog); + static inline u32 bpf_prog_insn_size(const struct bpf_prog *prog) { return prog->len * sizeof(struct bpf_insn); diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 66b90eaf99fe..b08c362f4e02 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -2338,17 +2338,12 @@ static int bpf_obj_get_next_id(const union bpf_attr *attr, #define BPF_PROG_GET_FD_BY_ID_LAST_FIELD prog_id -static int bpf_prog_get_fd_by_id(const union bpf_attr *attr) +struct bpf_prog *bpf_prog_by_id(u32 id) { struct bpf_prog *prog; - u32 id = attr->prog_id; - int fd; - - if (CHECK_ATTR(BPF_PROG_GET_FD_BY_ID)) - return -EINVAL; - if (!capable(CAP_SYS_ADMIN)) - return -EPERM; + if (!id) + return ERR_PTR(-ENOENT); spin_lock_bh(&prog_idr_lock); prog = idr_find(&prog_idr, id); @@ -2357,7 +2352,22 @@ static int bpf_prog_get_fd_by_id(const union bpf_attr *attr) else prog = ERR_PTR(-ENOENT); spin_unlock_bh(&prog_idr_lock); + return prog; +} + +static int bpf_prog_get_fd_by_id(const union bpf_attr *attr) +{ + struct bpf_prog *prog; + u32 id = attr->prog_id; + int fd; + + if (CHECK_ATTR(BPF_PROG_GET_FD_BY_ID)) + return -EINVAL; + + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + prog = bpf_prog_by_id(id); if (IS_ERR(prog)) return PTR_ERR(prog); diff --git a/net/core/dev.c b/net/core/dev.c index 2c277b8aba38..255d3cf35360 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -8542,7 +8542,17 @@ static int dev_xdp_install(struct net_device *dev, bpf_op_t bpf_op, struct netlink_ext_ack *extack, u32 flags, struct bpf_prog *prog) { + bool non_hw = !(flags & XDP_FLAGS_HW_MODE); + struct bpf_prog *prev_prog = NULL; struct netdev_bpf xdp; + int err; + + if (non_hw) { + prev_prog = bpf_prog_by_id(__dev_xdp_query(dev, bpf_op, + XDP_QUERY_PROG)); + if (IS_ERR(prev_prog)) + prev_prog = NULL; + } memset(&xdp, 0, sizeof(xdp)); if (flags & XDP_FLAGS_HW_MODE) @@ -8553,7 +8563,14 @@ static int dev_xdp_install(struct net_device *dev, bpf_op_t bpf_op, xdp.flags = flags; xdp.prog = prog; - return bpf_op(dev, &xdp); + err = bpf_op(dev, &xdp); + if (!err && non_hw) + bpf_prog_change_xdp(prev_prog, prog); + + if (prev_prog) + bpf_prog_put(prev_prog); + + return err; } static void 
dev_xdp_uninstall(struct net_device *dev) diff --git a/net/core/filter.c b/net/core/filter.c index f1e703eed3d2..a411f7835dee 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -8940,3 +8940,11 @@ const struct bpf_verifier_ops sk_reuseport_verifier_ops = { const struct bpf_prog_ops sk_reuseport_prog_ops = { }; #endif /* CONFIG_INET */ + +DEFINE_BPF_DISPATCHER(bpf_dispatcher_xdp) + +void bpf_prog_change_xdp(struct bpf_prog *prev_prog, struct bpf_prog *prog) +{ + bpf_dispatcher_change_prog(BPF_DISPATCHER_PTR(bpf_dispatcher_xdp), + prev_prog, prog); +} -- cgit v1.2.3-71-gd317 From 116eb788f57c9c35c40b29cfaa2607020de99a84 Mon Sep 17 00:00:00 2001 From: Björn Töpel Date: Fri, 13 Dec 2019 18:51:12 +0100 Subject: bpf, x86: Align dispatcher branch targets to 16B MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit From Intel 64 and IA-32 Architectures Optimization Reference Manual, 3.4.1.4 Code Alignment, Assembly/Compiler Coding Rule 11: All branch targets should be 16-byte aligned. This commit aligns branch targets according to the Intel manual. The nops used to align branch targets make the dispatcher larger, and therefore the number of supported dispatch points/programs is decreased from 64 to 48. Signed-off-by: Björn Töpel Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/20191213175112.30208-7-bjorn.topel@gmail.com --- arch/x86/net/bpf_jit_comp.c | 30 +++++++++++++++++++++++++++++- include/linux/bpf.h | 2 +- 2 files changed, 30 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 3ce7ad41bd6f..4c8a2d1f8470 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -1548,6 +1548,26 @@ static int emit_cond_near_jump(u8 **pprog, void *func, void *ip, u8 jmp_cond) return 0; } +static void emit_nops(u8 **pprog, unsigned int len) +{ + unsigned int i, noplen; + u8 *prog = *pprog; + int cnt = 0; + + while (len > 0) { + noplen = len; + + if (noplen > ASM_NOP_MAX) + noplen = ASM_NOP_MAX; + + for (i = 0; i < noplen; i++) + EMIT1(ideal_nops[noplen][i]); + len -= noplen; + } + + *pprog = prog; +} + static int emit_fallback_jump(u8 **pprog) { u8 *prog = *pprog; @@ -1570,8 +1590,8 @@ static int emit_fallback_jump(u8 **pprog) static int emit_bpf_dispatcher(u8 **pprog, int a, int b, s64 *progs) { + u8 *jg_reloc, *jg_target, *prog = *pprog; int pivot, err, jg_bytes = 1, cnt = 0; - u8 *jg_reloc, *prog = *pprog; s64 jg_offset; if (a == b) { @@ -1620,6 +1640,14 @@ static int emit_bpf_dispatcher(u8 **pprog, int a, int b, s64 *progs) if (err) return err; + /* From Intel 64 and IA-32 Architectures Optimization + * Reference Manual, 3.4.1.4 Code Alignment, Assembly/Compiler + * Coding Rule 11: All branch targets should be 16-byte + * aligned.
*/ + jg_target = PTR_ALIGN(prog, 16); + if (jg_target != prog) + emit_nops(&prog, jg_target - prog); jg_offset = prog - jg_reloc; emit_code(jg_reloc - jg_bytes, jg_offset, jg_bytes); diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 5970989b99d1..d467983e61bb 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -471,7 +471,7 @@ struct bpf_trampoline { u64 selector; }; -#define BPF_DISPATCHER_MAX 64 /* Fits in 2048B */ +#define BPF_DISPATCHER_MAX 48 /* Fits in 2048B */ struct bpf_dispatcher_prog { struct bpf_prog *prog; -- cgit v1.2.3-71-gd317 From 826f66b30c2e3d4df533e4224cb8c734afa0a17c Mon Sep 17 00:00:00 2001 From: Andy Roulin Date: Wed, 11 Dec 2019 14:30:58 -0800 Subject: bonding: move 802.3ad port state flags to uapi The bond slave actor/partner operating state is exported as a bitfield to userspace, which lacks a way to interpret it; e.g., iproute2 only prints the state as a number: ad_actor_oper_port_state 15 For userspace to interpret the bitfield, the bitfield definitions should be part of the uapi. The bitfield itself is defined in the 802.3ad standard. This commit moves the 802.3ad bitfield definitions to uapi. Related iproute2 patches, soon to be posted upstream, use the new uapi headers to pretty-print bond slave state, e.g., with ip -d link show ad_actor_oper_port_state_str Signed-off-by: Andy Roulin Acked-by: Roopa Prabhu Acked-by: Jay Vosburgh Signed-off-by: Jakub Kicinski --- drivers/net/bonding/bond_3ad.c | 10 ---------- include/uapi/linux/if_bonding.h | 10 ++++++++++ 2 files changed, 10 insertions(+), 10 deletions(-) (limited to 'include') diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c index e3b25f310936..34bfe99641a3 100644 --- a/drivers/net/bonding/bond_3ad.c +++ b/drivers/net/bonding/bond_3ad.c @@ -31,16 +31,6 @@ #define AD_CHURN_DETECTION_TIME 60 #define AD_AGGREGATE_WAIT_TIME 2 -/* Port state definitions (43.4.2.2 in the 802.3ad standard) */ -#define AD_STATE_LACP_ACTIVITY 0x1 -#define AD_STATE_LACP_TIMEOUT 0x2 -#define AD_STATE_AGGREGATION 0x4 -#define AD_STATE_SYNCHRONIZATION 0x8 -#define AD_STATE_COLLECTING 0x10 -#define AD_STATE_DISTRIBUTING 0x20 -#define AD_STATE_DEFAULTED 0x40 -#define AD_STATE_EXPIRED 0x80 - /* Port Variables definitions used by the State Machines (43.4.7 in the * 802.3ad standard) */ diff --git a/include/uapi/linux/if_bonding.h b/include/uapi/linux/if_bonding.h index 790585f0e61b..6829213a54c5 100644 --- a/include/uapi/linux/if_bonding.h +++ b/include/uapi/linux/if_bonding.h @@ -95,6 +95,16 @@ #define BOND_XMIT_POLICY_ENCAP23 3 /* encapsulated layer 2+3 */ #define BOND_XMIT_POLICY_ENCAP34 4 /* encapsulated layer 3+4 */ +/* 802.3ad port state definitions (43.4.2.2 in the 802.3ad standard) */ +#define AD_STATE_LACP_ACTIVITY 0x1 +#define AD_STATE_LACP_TIMEOUT 0x2 +#define AD_STATE_AGGREGATION 0x4 +#define AD_STATE_SYNCHRONIZATION 0x8 +#define AD_STATE_COLLECTING 0x10 +#define AD_STATE_DISTRIBUTING 0x20 +#define AD_STATE_DEFAULTED 0x40 +#define AD_STATE_EXPIRED 0x80 + typedef struct ifbond { __s32 bond_mode; __s32 num_slaves; -- cgit v1.2.3-71-gd317 From de1799667b002331609e8ee6b924ccfd51968d99 Mon Sep 17 00:00:00 2001 From: Vivien Didelot Date: Wed, 11 Dec 2019 20:07:10 -0500 Subject: net: bridge: add STP xstats This adds rx_bpdu, tx_bpdu, rx_tcn, tx_tcn, transition_blk, and transition_fwd xstats counters to the bridge ports, copied over via netlink, providing useful information for STP.
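For illustration, once userspace has located the BRIDGE_XSTATS_STP attribute in a link xstats dump, interpreting it is just a matter of reading the struct back. A hedged sketch (the netlink walk that yields nla_payload is omitted and assumed to be done by the caller):

	#include <linux/if_bridge.h>
	#include <stdio.h>
	#include <string.h>

	static void print_stp_xstats(const void *nla_payload)
	{
		struct bridge_stp_xstats s;

		/* payload of BRIDGE_XSTATS_STP is the raw, 64-bit aligned struct */
		memcpy(&s, nla_payload, sizeof(s));
		printf("bpdu: rx %llu tx %llu, tcn: rx %llu tx %llu\n",
		       (unsigned long long)s.rx_bpdu, (unsigned long long)s.tx_bpdu,
		       (unsigned long long)s.rx_tcn, (unsigned long long)s.tx_tcn);
		printf("transitions: blocking %llu forwarding %llu\n",
		       (unsigned long long)s.transition_blk,
		       (unsigned long long)s.transition_fwd);
	}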
Signed-off-by: Vivien Didelot Acked-by: Nikolay Aleksandrov Signed-off-by: Jakub Kicinski --- include/uapi/linux/if_bridge.h | 10 ++++++++++ net/bridge/br_netlink.c | 13 +++++++++++++ net/bridge/br_private.h | 2 ++ net/bridge/br_stp.c | 15 +++++++++++++++ net/bridge/br_stp_bpdu.c | 4 ++++ 5 files changed, 44 insertions(+) (limited to 'include') diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index 1b3c2b643a02..4a58e3d7de46 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -156,6 +156,15 @@ struct bridge_vlan_xstats { __u32 pad2; }; +struct bridge_stp_xstats { + __u64 transition_blk; + __u64 transition_fwd; + __u64 rx_bpdu; + __u64 tx_bpdu; + __u64 rx_tcn; + __u64 tx_tcn; +}; + /* Bridge multicast database attributes * [MDBA_MDB] = { * [MDBA_MDB_ENTRY] = { @@ -262,6 +271,7 @@ enum { BRIDGE_XSTATS_VLAN, BRIDGE_XSTATS_MCAST, BRIDGE_XSTATS_PAD, + BRIDGE_XSTATS_STP, __BRIDGE_XSTATS_MAX }; #define BRIDGE_XSTATS_MAX (__BRIDGE_XSTATS_MAX - 1) diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c index a0a54482aabc..60136575aea4 100644 --- a/net/bridge/br_netlink.c +++ b/net/bridge/br_netlink.c @@ -1607,6 +1607,19 @@ static int br_fill_linkxstats(struct sk_buff *skb, br_multicast_get_stats(br, p, nla_data(nla)); } #endif + + if (p) { + nla = nla_reserve_64bit(skb, BRIDGE_XSTATS_STP, + sizeof(p->stp_xstats), + BRIDGE_XSTATS_PAD); + if (!nla) + goto nla_put_failure; + + spin_lock_bh(&br->lock); + memcpy(nla_data(nla), &p->stp_xstats, sizeof(p->stp_xstats)); + spin_unlock_bh(&br->lock); + } + nla_nest_end(skb, nest); *prividx = 0; diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 36b0367ca1e0..f540f3bdf294 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -283,6 +283,8 @@ struct net_bridge_port { #endif u16 group_fwd_mask; u16 backup_redirected_cnt; + + struct bridge_stp_xstats stp_xstats; }; #define kobj_to_brport(obj) container_of(obj, struct net_bridge_port, kobj) diff --git a/net/bridge/br_stp.c b/net/bridge/br_stp.c index 1f1410f8d312..6856a6d9282b 100644 --- a/net/bridge/br_stp.c +++ b/net/bridge/br_stp.c @@ -45,6 +45,17 @@ void br_set_state(struct net_bridge_port *p, unsigned int state) br_info(p->br, "port %u(%s) entered %s state\n", (unsigned int) p->port_no, p->dev->name, br_port_state_names[p->state]); + + if (p->br->stp_enabled == BR_KERNEL_STP) { + switch (p->state) { + case BR_STATE_BLOCKING: + p->stp_xstats.transition_blk++; + break; + case BR_STATE_FORWARDING: + p->stp_xstats.transition_fwd++; + break; + } + } } /* called under bridge lock */ @@ -484,6 +495,8 @@ void br_received_config_bpdu(struct net_bridge_port *p, struct net_bridge *br; int was_root; + p->stp_xstats.rx_bpdu++; + br = p->br; was_root = br_is_root_bridge(br); @@ -517,6 +530,8 @@ void br_received_config_bpdu(struct net_bridge_port *p, /* called under bridge lock */ void br_received_tcn_bpdu(struct net_bridge_port *p) { + p->stp_xstats.rx_tcn++; + if (br_is_designated_port(p)) { br_info(p->br, "port %u(%s) received tcn bpdu\n", (unsigned int) p->port_no, p->dev->name); diff --git a/net/bridge/br_stp_bpdu.c b/net/bridge/br_stp_bpdu.c index 7796dd9d42d7..0e4572f31330 100644 --- a/net/bridge/br_stp_bpdu.c +++ b/net/bridge/br_stp_bpdu.c @@ -118,6 +118,8 @@ void br_send_config_bpdu(struct net_bridge_port *p, struct br_config_bpdu *bpdu) br_set_ticks(buf+33, bpdu->forward_delay); br_send_bpdu(p, buf, 35); + + p->stp_xstats.tx_bpdu++; } /* called under bridge lock */ @@ -133,6 +135,8 @@ void br_send_tcn_bpdu(struct 
net_bridge_port *p) buf[2] = 0; buf[3] = BPDU_TYPE_TCN; br_send_bpdu(p, buf, 4); + + p->stp_xstats.tx_tcn++; } /* -- cgit v1.2.3-71-gd317 From 166750bc1dd256b2184b22588fb9fe6d3fbb93ae Mon Sep 17 00:00:00 2001 From: Andrii Nakryiko Date: Fri, 13 Dec 2019 17:47:08 -0800 Subject: libbpf: Support libbpf-provided extern variables Add support for extern variables, provided to the BPF program by libbpf. Currently the following extern variables are supported: - LINUX_KERNEL_VERSION; the version of the kernel in which the BPF program is executing, follows the KERNEL_VERSION() macro convention, can be 4 or 8 bytes long; - CONFIG_xxx values; a set of values of the actual kernel config. Tristate, boolean, string, and integer values are supported. The set of possible values is determined by the declared type of the extern variable. Supported types of variables are: - Tristate values. Represented as `enum libbpf_tristate`. Accepted values are **strictly** 'y', 'n', or 'm', which are represented as TRI_YES, TRI_NO, or TRI_MODULE, respectively. - Boolean values. Represented as bool (_Bool) types. Accepted values are 'y' and 'n' only, turning into true/false values, respectively. - Single-character values. Can be used either as a substitute for bool/tristate, or as a small-range integer: - 'y'/'n'/'m' are represented as is, as characters 'y', 'n', or 'm'; - integers in the range [-128, 127] or [0, 255] (depending on the signedness of char on the target architecture) are recognized and represented with the respective values of char type. - Strings. String values are declared as fixed-length char arrays. A string of up to that length will be accepted and put in the first N bytes of the char array, with the rest of the bytes zeroed out. If the config string value is longer than the space allotted, it will be truncated and a warning message emitted. The char array is always zero-terminated. String literals in config have to be enclosed in double quotes, just like C-style string literals. - Integers. 8-, 16-, 32-, and 64-bit integers are supported, both signed and unsigned variants. Libbpf enforces that the parsed config value fits in the supported range of the corresponding integer type. Integer values in config can be: - decimal integers, with optional + and - signs; - hexadecimal integers, prefixed with 0x or 0X; - octal integers, starting with 0. The config file itself is searched for at /boot/config-$(uname -r), with a fallback to /proc/config.gz, unless a config path is specified explicitly through bpf_object_open_opts' kconfig_path option. Both gzipped and plain text formats are supported. Libbpf adds an explicit dependency on zlib because of this, but that shouldn't be a problem, given libelf already depends on zlib. All detected extern variables are put into a separate .extern internal map. Like the .rodata map, it is marked as read-only from the BPF program side and is frozen on load. This allows the BPF verifier to track extern values as constants and perform enhanced branch prediction and dead code elimination. This can be relied upon for doing kernel version/feature detection and using potentially unsupported field relocations or BPF helpers in a CO-RE-based BPF program, while still having a single version of the BPF program running on old and new kernels. Selftests validate this explicitly for a nonexistent BPF helper.
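A sketch of what a BPF program consuming these externs could look like (the CONFIG_xxx names and section are only examples; a strong extern that stays unresolved fails the load, while a __weak one defaults to zero):

	#include "bpf_helpers.h"

	extern int LINUX_KERNEL_VERSION;		/* KERNEL_VERSION()-style value */
	extern enum libbpf_tristate CONFIG_BPF_JIT __weak;
	extern char CONFIG_LOCALVERSION[32] __weak;	/* fixed-length string extern */

	SEC("raw_tracepoint/sys_enter")
	int handler(const void *ctx)
	{
		/* 5.5.0 encodes as (5 << 16) + (5 << 8) + 0 */
		int newish = LINUX_KERNEL_VERSION >= ((5 << 16) + (5 << 8));

		if (newish && CONFIG_BPF_JIT == TRI_YES) {
			/* safe to take a path that needs a newer kernel + JIT */
		}
		if (CONFIG_LOCALVERSION[0] != '\0') {
			/* kernel built with a custom local version suffix */
		}
		return 0;
	}

	char _license[] SEC("license") = "GPL";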
Signed-off-by: Andrii Nakryiko Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/20191214014710.3449601-3-andriin@fb.com --- include/uapi/linux/btf.h | 3 +- tools/include/uapi/linux/btf.h | 7 +- tools/lib/bpf/Makefile | 15 +- tools/lib/bpf/bpf_helpers.h | 9 + tools/lib/bpf/btf.c | 9 +- tools/lib/bpf/libbpf.c | 738 ++++++++++++++++++++++++++++++++--- tools/lib/bpf/libbpf.h | 12 +- tools/testing/selftests/bpf/Makefile | 2 +- 8 files changed, 729 insertions(+), 66 deletions(-) (limited to 'include') diff --git a/include/uapi/linux/btf.h b/include/uapi/linux/btf.h index c02dec97e1ce..1a2898c482ee 100644 --- a/include/uapi/linux/btf.h +++ b/include/uapi/linux/btf.h @@ -142,7 +142,8 @@ struct btf_param { enum { BTF_VAR_STATIC = 0, - BTF_VAR_GLOBAL_ALLOCATED, + BTF_VAR_GLOBAL_ALLOCATED = 1, + BTF_VAR_GLOBAL_EXTERN = 2, }; /* BTF_KIND_VAR is followed by a single "struct btf_var" to describe diff --git a/tools/include/uapi/linux/btf.h b/tools/include/uapi/linux/btf.h index 63ae4a39e58b..1a2898c482ee 100644 --- a/tools/include/uapi/linux/btf.h +++ b/tools/include/uapi/linux/btf.h @@ -22,9 +22,9 @@ struct btf_header { }; /* Max # of type identifier */ -#define BTF_MAX_TYPE 0x0000ffff +#define BTF_MAX_TYPE 0x000fffff /* Max offset into the string section */ -#define BTF_MAX_NAME_OFFSET 0x0000ffff +#define BTF_MAX_NAME_OFFSET 0x00ffffff /* Max # of struct/union/enum members or func args */ #define BTF_MAX_VLEN 0xffff @@ -142,7 +142,8 @@ struct btf_param { enum { BTF_VAR_STATIC = 0, - BTF_VAR_GLOBAL_ALLOCATED, + BTF_VAR_GLOBAL_ALLOCATED = 1, + BTF_VAR_GLOBAL_EXTERN = 2, }; /* BTF_KIND_VAR is followed by a single "struct btf_var" to describe diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile index 23ae06c43d08..a3718cb275f2 100644 --- a/tools/lib/bpf/Makefile +++ b/tools/lib/bpf/Makefile @@ -56,8 +56,8 @@ ifndef VERBOSE endif FEATURE_USER = .libbpf -FEATURE_TESTS = libelf libelf-mmap bpf reallocarray -FEATURE_DISPLAY = libelf bpf +FEATURE_TESTS = libelf libelf-mmap zlib bpf reallocarray +FEATURE_DISPLAY = libelf zlib bpf INCLUDES = -I. 
-I$(srctree)/tools/include -I$(srctree)/tools/arch/$(ARCH)/include/uapi -I$(srctree)/tools/include/uapi FEATURE_CHECK_CFLAGS-bpf = $(INCLUDES) @@ -160,7 +160,7 @@ all: fixdep all_cmd: $(CMD_TARGETS) check -$(BPF_IN_SHARED): force elfdep bpfdep bpf_helper_defs.h +$(BPF_IN_SHARED): force elfdep zdep bpfdep bpf_helper_defs.h @(test -f ../../include/uapi/linux/bpf.h -a -f ../../../include/uapi/linux/bpf.h && ( \ (diff -B ../../include/uapi/linux/bpf.h ../../../include/uapi/linux/bpf.h >/dev/null) || \ echo "Warning: Kernel ABI header at 'tools/include/uapi/linux/bpf.h' differs from latest version at 'include/uapi/linux/bpf.h'" >&2 )) || true @@ -178,7 +178,7 @@ $(BPF_IN_SHARED): force elfdep bpfdep bpf_helper_defs.h echo "Warning: Kernel ABI header at 'tools/include/uapi/linux/if_xdp.h' differs from latest version at 'include/uapi/linux/if_xdp.h'" >&2 )) || true $(Q)$(MAKE) $(build)=libbpf OUTPUT=$(SHARED_OBJDIR) CFLAGS="$(CFLAGS) $(SHLIB_FLAGS)" -$(BPF_IN_STATIC): force elfdep bpfdep bpf_helper_defs.h +$(BPF_IN_STATIC): force elfdep zdep bpfdep bpf_helper_defs.h $(Q)$(MAKE) $(build)=libbpf OUTPUT=$(STATIC_OBJDIR) bpf_helper_defs.h: $(srctree)/tools/include/uapi/linux/bpf.h @@ -190,7 +190,7 @@ $(OUTPUT)libbpf.so: $(OUTPUT)libbpf.so.$(LIBBPF_VERSION) $(OUTPUT)libbpf.so.$(LIBBPF_VERSION): $(BPF_IN_SHARED) $(QUIET_LINK)$(CC) $(LDFLAGS) \ --shared -Wl,-soname,libbpf.so.$(LIBBPF_MAJOR_VERSION) \ - -Wl,--version-script=$(VERSION_SCRIPT) $^ -lelf -o $@ + -Wl,--version-script=$(VERSION_SCRIPT) $^ -lelf -lz -o $@ @ln -sf $(@F) $(OUTPUT)libbpf.so @ln -sf $(@F) $(OUTPUT)libbpf.so.$(LIBBPF_MAJOR_VERSION) @@ -279,12 +279,15 @@ clean: -PHONY += force elfdep bpfdep cscope tags +PHONY += force elfdep zdep bpfdep cscope tags force: elfdep: @if [ "$(feature-libelf)" != "1" ]; then echo "No libelf found"; exit 1 ; fi +zdep: + @if [ "$(feature-zlib)" != "1" ]; then echo "No zlib found"; exit 1 ; fi + bpfdep: @if [ "$(feature-bpf)" != "1" ]; then echo "BPF API too old"; exit 1 ; fi diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h index 0c7d28292898..aa46700075e1 100644 --- a/tools/lib/bpf/bpf_helpers.h +++ b/tools/lib/bpf/bpf_helpers.h @@ -25,6 +25,9 @@ #ifndef __always_inline #define __always_inline __attribute__((always_inline)) #endif +#ifndef __weak +#define __weak __attribute__((weak)) +#endif /* * Helper structure used by eBPF C program @@ -44,4 +47,10 @@ enum libbpf_pin_type { LIBBPF_PIN_BY_NAME, }; +enum libbpf_tristate { + TRI_NO = 0, + TRI_YES = 1, + TRI_MODULE = 2, +}; + #endif diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c index 84fe82f27bef..520021939d81 100644 --- a/tools/lib/bpf/btf.c +++ b/tools/lib/bpf/btf.c @@ -578,6 +578,12 @@ static int btf_fixup_datasec(struct bpf_object *obj, struct btf *btf, return -ENOENT; } + /* .extern datasec size and var offsets were set correctly during + * extern collection step, so just skip straight to sorting variables + */ + if (t->size) + goto sort_vars; + ret = bpf_object__section_size(obj, name, &size); if (ret || !size || (t->size && t->size != size)) { pr_debug("Invalid size for section %s: %u bytes\n", name, size); @@ -614,7 +620,8 @@ static int btf_fixup_datasec(struct bpf_object *obj, struct btf *btf, vsi->offset = off; } - qsort(t + 1, vars, sizeof(*vsi), compare_vsi_off); +sort_vars: + qsort(btf_var_secinfos(t), vars, sizeof(*vsi), compare_vsi_off); return 0; } diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index f6c135869701..26144706ba0b 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ 
-44,6 +44,7 @@ #include #include #include +#include #include "libbpf.h" #include "bpf.h" @@ -139,6 +140,20 @@ struct bpf_capabilities { __u32 array_mmap:1; }; +enum reloc_type { + RELO_LD64, + RELO_CALL, + RELO_DATA, + RELO_EXTERN, +}; + +struct reloc_desc { + enum reloc_type type; + int insn_idx; + int map_idx; + int sym_off; +}; + /* * bpf_prog should be a better name but it has been used in * linux/filter.h. @@ -157,16 +172,7 @@ struct bpf_program { size_t insns_cnt, main_prog_cnt; enum bpf_prog_type type; - struct reloc_desc { - enum { - RELO_LD64, - RELO_CALL, - RELO_DATA, - } type; - int insn_idx; - int map_idx; - int sym_off; - } *reloc_desc; + struct reloc_desc *reloc_desc; int nr_reloc; int log_level; @@ -198,18 +204,21 @@ struct bpf_program { #define DATA_SEC ".data" #define BSS_SEC ".bss" #define RODATA_SEC ".rodata" +#define EXTERN_SEC ".extern" enum libbpf_map_type { LIBBPF_MAP_UNSPEC, LIBBPF_MAP_DATA, LIBBPF_MAP_BSS, LIBBPF_MAP_RODATA, + LIBBPF_MAP_EXTERN, }; static const char * const libbpf_type_to_btf_name[] = { [LIBBPF_MAP_DATA] = DATA_SEC, [LIBBPF_MAP_BSS] = BSS_SEC, [LIBBPF_MAP_RODATA] = RODATA_SEC, + [LIBBPF_MAP_EXTERN] = EXTERN_SEC, }; struct bpf_map { @@ -231,6 +240,28 @@ struct bpf_map { bool reused; }; +enum extern_type { + EXT_UNKNOWN, + EXT_CHAR, + EXT_BOOL, + EXT_INT, + EXT_TRISTATE, + EXT_CHAR_ARR, +}; + +struct extern_desc { + const char *name; + int sym_idx; + int btf_id; + enum extern_type type; + int sz; + int align; + int data_off; + bool is_signed; + bool is_weak; + bool is_set; +}; + static LIST_HEAD(bpf_objects_list); struct bpf_object { @@ -244,6 +275,11 @@ struct bpf_object { size_t nr_maps; size_t maps_cap; + char *kconfig_path; + struct extern_desc *externs; + int nr_extern; + int extern_map_idx; + bool loaded; bool has_pseudo_calls; bool relaxed_core_relocs; @@ -271,6 +307,7 @@ struct bpf_object { int maps_shndx; int btf_maps_shndx; int text_shndx; + int symbols_shndx; int data_shndx; int rodata_shndx; int bss_shndx; @@ -542,6 +579,7 @@ static struct bpf_object *bpf_object__new(const char *path, obj->efile.data_shndx = -1; obj->efile.rodata_shndx = -1; obj->efile.bss_shndx = -1; + obj->extern_map_idx = -1; obj->kern_version = get_kernel_version(); obj->loaded = false; @@ -866,7 +904,8 @@ bpf_object__init_internal_map(struct bpf_object *obj, enum libbpf_map_type type, def->key_size = sizeof(int); def->value_size = data_sz; def->max_entries = 1; - def->map_flags = type == LIBBPF_MAP_RODATA ? BPF_F_RDONLY_PROG : 0; + def->map_flags = type == LIBBPF_MAP_RODATA || type == LIBBPF_MAP_EXTERN + ? 
BPF_F_RDONLY_PROG : 0; def->map_flags |= BPF_F_MMAPABLE; pr_debug("map '%s' (global data): at sec_idx %d, offset %zu, flags %x.\n", @@ -883,7 +922,7 @@ bpf_object__init_internal_map(struct bpf_object *obj, enum libbpf_map_type type, return err; } - if (type != LIBBPF_MAP_BSS) + if (data) memcpy(map->mmaped, data, data_sz); pr_debug("map %td is \"%s\"\n", map - obj->maps, map->name); @@ -924,6 +963,271 @@ static int bpf_object__init_global_data_maps(struct bpf_object *obj) return 0; } + +static struct extern_desc *find_extern_by_name(const struct bpf_object *obj, + const void *name) +{ + int i; + + for (i = 0; i < obj->nr_extern; i++) { + if (strcmp(obj->externs[i].name, name) == 0) + return &obj->externs[i]; + } + return NULL; +} + +static int set_ext_value_tri(struct extern_desc *ext, void *ext_val, + char value) +{ + switch (ext->type) { + case EXT_BOOL: + if (value == 'm') { + pr_warn("extern %s=%c should be tristate or char\n", + ext->name, value); + return -EINVAL; + } + *(bool *)ext_val = value == 'y' ? true : false; + break; + case EXT_TRISTATE: + if (value == 'y') + *(enum libbpf_tristate *)ext_val = TRI_YES; + else if (value == 'm') + *(enum libbpf_tristate *)ext_val = TRI_MODULE; + else /* value == 'n' */ + *(enum libbpf_tristate *)ext_val = TRI_NO; + break; + case EXT_CHAR: + *(char *)ext_val = value; + break; + case EXT_UNKNOWN: + case EXT_INT: + case EXT_CHAR_ARR: + default: + pr_warn("extern %s=%c should be bool, tristate, or char\n", + ext->name, value); + return -EINVAL; + } + ext->is_set = true; + return 0; +} + +static int set_ext_value_str(struct extern_desc *ext, char *ext_val, + const char *value) +{ + size_t len; + + if (ext->type != EXT_CHAR_ARR) { + pr_warn("extern %s=%s should char array\n", ext->name, value); + return -EINVAL; + } + + len = strlen(value); + if (value[len - 1] != '"') { + pr_warn("extern '%s': invalid string config '%s'\n", + ext->name, value); + return -EINVAL; + } + + /* strip quotes */ + len -= 2; + if (len >= ext->sz) { + pr_warn("extern '%s': long string config %s of (%zu bytes) truncated to %d bytes\n", + ext->name, value, len, ext->sz - 1); + len = ext->sz - 1; + } + memcpy(ext_val, value + 1, len); + ext_val[len] = '\0'; + ext->is_set = true; + return 0; +} + +static int parse_u64(const char *value, __u64 *res) +{ + char *value_end; + int err; + + errno = 0; + *res = strtoull(value, &value_end, 0); + if (errno) { + err = -errno; + pr_warn("failed to parse '%s' as integer: %d\n", value, err); + return err; + } + if (*value_end) { + pr_warn("failed to parse '%s' as integer completely\n", value); + return -EINVAL; + } + return 0; +} + +static bool is_ext_value_in_range(const struct extern_desc *ext, __u64 v) +{ + int bit_sz = ext->sz * 8; + + if (ext->sz == 8) + return true; + + /* Validate that value stored in u64 fits in integer of `ext->sz` + * bytes size without any loss of information. If the target integer + * is signed, we rely on the following limits of integer type of + * Y bits and subsequent transformation: + * + * -2^(Y-1) <= X <= 2^(Y-1) - 1 + * 0 <= X + 2^(Y-1) <= 2^Y - 1 + * 0 <= X + 2^(Y-1) < 2^Y + * + * For unsigned target integer, check that all the (64 - Y) bits are + * zero. 
+ */ + if (ext->is_signed) + return v + (1ULL << (bit_sz - 1)) < (1ULL << bit_sz); + else + return (v >> bit_sz) == 0; +} + +static int set_ext_value_num(struct extern_desc *ext, void *ext_val, + __u64 value) +{ + if (ext->type != EXT_INT && ext->type != EXT_CHAR) { + pr_warn("extern %s=%llu should be integer\n", + ext->name, value); + return -EINVAL; + } + if (!is_ext_value_in_range(ext, value)) { + pr_warn("extern %s=%llu value doesn't fit in %d bytes\n", + ext->name, value, ext->sz); + return -ERANGE; + } + switch (ext->sz) { + case 1: *(__u8 *)ext_val = value; break; + case 2: *(__u16 *)ext_val = value; break; + case 4: *(__u32 *)ext_val = value; break; + case 8: *(__u64 *)ext_val = value; break; + default: + return -EINVAL; + } + ext->is_set = true; + return 0; +} + +static int bpf_object__read_kernel_config(struct bpf_object *obj, + const char *config_path, + void *data) +{ + char buf[PATH_MAX], *sep, *value; + struct extern_desc *ext; + int len, err = 0; + void *ext_val; + __u64 num; + gzFile file; + + if (config_path) { + file = gzopen(config_path, "r"); + } else { + struct utsname uts; + + uname(&uts); + len = snprintf(buf, PATH_MAX, "/boot/config-%s", uts.release); + if (len < 0) + return -EINVAL; + else if (len >= PATH_MAX) + return -ENAMETOOLONG; + /* gzopen also accepts uncompressed files. */ + file = gzopen(buf, "r"); + if (!file) + file = gzopen("/proc/config.gz", "r"); + } + if (!file) { + pr_warn("failed to read kernel config at '%s'\n", config_path); + return -ENOENT; + } + + while (gzgets(file, buf, sizeof(buf))) { + if (strncmp(buf, "CONFIG_", 7)) + continue; + + sep = strchr(buf, '='); + if (!sep) { + err = -EINVAL; + pr_warn("failed to parse '%s': no separator\n", buf); + goto out; + } + /* Trim ending '\n' */ + len = strlen(buf); + if (buf[len - 1] == '\n') + buf[len - 1] = '\0'; + /* Split on '=' and ensure that a value is present. 
*/ + *sep = '\0'; + if (!sep[1]) { + err = -EINVAL; + *sep = '='; + pr_warn("failed to parse '%s': no value\n", buf); + goto out; + } + + ext = find_extern_by_name(obj, buf); + if (!ext) + continue; + if (ext->is_set) { + err = -EINVAL; + pr_warn("re-defining extern '%s' not allowed\n", buf); + goto out; + } + + ext_val = data + ext->data_off; + value = sep + 1; + + switch (*value) { + case 'y': case 'n': case 'm': + err = set_ext_value_tri(ext, ext_val, *value); + break; + case '"': + err = set_ext_value_str(ext, ext_val, value); + break; + default: + /* assume integer */ + err = parse_u64(value, &num); + if (err) { + pr_warn("extern %s=%s should be integer\n", + ext->name, value); + goto out; + } + err = set_ext_value_num(ext, ext_val, num); + break; + } + if (err) + goto out; + pr_debug("extern %s=%s\n", ext->name, value); + } + +out: + gzclose(file); + return err; +} + +static int bpf_object__init_extern_map(struct bpf_object *obj) +{ + struct extern_desc *last_ext; + size_t map_sz; + int err; + + if (obj->nr_extern == 0) + return 0; + + last_ext = &obj->externs[obj->nr_extern - 1]; + map_sz = last_ext->data_off + last_ext->sz; + + err = bpf_object__init_internal_map(obj, LIBBPF_MAP_EXTERN, + obj->efile.symbols_shndx, + NULL, map_sz); + if (err) + return err; + + obj->extern_map_idx = obj->nr_maps - 1; + + return 0; +} + static int bpf_object__init_user_maps(struct bpf_object *obj, bool strict) { Elf_Data *symbols = obj->efile.symbols; @@ -1401,19 +1705,17 @@ static int bpf_object__init_user_btf_maps(struct bpf_object *obj, bool strict, static int bpf_object__init_maps(struct bpf_object *obj, const struct bpf_object_open_opts *opts) { - const char *pin_root_path = OPTS_GET(opts, pin_root_path, NULL); - bool strict = !OPTS_GET(opts, relaxed_maps, false); + const char *pin_root_path; + bool strict; int err; - err = bpf_object__init_user_maps(obj, strict); - if (err) - return err; + strict = !OPTS_GET(opts, relaxed_maps, false); + pin_root_path = OPTS_GET(opts, pin_root_path, NULL); - err = bpf_object__init_user_btf_maps(obj, strict, pin_root_path); - if (err) - return err; - - err = bpf_object__init_global_data_maps(obj); + err = bpf_object__init_user_maps(obj, strict); + err = err ?: bpf_object__init_user_btf_maps(obj, strict, pin_root_path); + err = err ?: bpf_object__init_global_data_maps(obj); + err = err ?: bpf_object__init_extern_map(obj); if (err) return err; @@ -1532,11 +1834,6 @@ static int bpf_object__init_btf(struct bpf_object *obj, BTF_ELF_SEC, err); goto out; } - err = btf__finalize_data(obj, obj->btf); - if (err) { - pr_warn("Error finalizing %s: %d.\n", BTF_ELF_SEC, err); - goto out; - } } if (btf_ext_data) { if (!obj->btf) { @@ -1570,6 +1867,30 @@ out: return 0; } +static int bpf_object__finalize_btf(struct bpf_object *obj) +{ + int err; + + if (!obj->btf) + return 0; + + err = btf__finalize_data(obj, obj->btf); + if (!err) + return 0; + + pr_warn("Error finalizing %s: %d.\n", BTF_ELF_SEC, err); + btf__free(obj->btf); + obj->btf = NULL; + btf_ext__free(obj->btf_ext); + obj->btf_ext = NULL; + + if (bpf_object__is_btf_mandatory(obj)) { + pr_warn("BTF is required, but is missing or corrupted.\n"); + return -ENOENT; + } + return 0; +} + static int bpf_object__sanitize_and_load_btf(struct bpf_object *obj) { int err = 0; @@ -1670,6 +1991,7 @@ static int bpf_object__elf_collect(struct bpf_object *obj) return -LIBBPF_ERRNO__FORMAT; } obj->efile.symbols = data; + obj->efile.symbols_shndx = idx; obj->efile.strtabidx = sh.sh_link; } else if (sh.sh_type == SHT_PROGBITS && 
data->d_size > 0) { if (sh.sh_flags & SHF_EXECINSTR) { @@ -1737,6 +2059,216 @@ static int bpf_object__elf_collect(struct bpf_object *obj) return bpf_object__init_btf(obj, btf_data, btf_ext_data); } +static bool sym_is_extern(const GElf_Sym *sym) +{ + int bind = GELF_ST_BIND(sym->st_info); + /* externs are symbols w/ type=NOTYPE, bind=GLOBAL|WEAK, section=UND */ + return sym->st_shndx == SHN_UNDEF && + (bind == STB_GLOBAL || bind == STB_WEAK) && + GELF_ST_TYPE(sym->st_info) == STT_NOTYPE; +} + +static int find_extern_btf_id(const struct btf *btf, const char *ext_name) +{ + const struct btf_type *t; + const char *var_name; + int i, n; + + if (!btf) + return -ESRCH; + + n = btf__get_nr_types(btf); + for (i = 1; i <= n; i++) { + t = btf__type_by_id(btf, i); + + if (!btf_is_var(t)) + continue; + + var_name = btf__name_by_offset(btf, t->name_off); + if (strcmp(var_name, ext_name)) + continue; + + if (btf_var(t)->linkage != BTF_VAR_GLOBAL_EXTERN) + return -EINVAL; + + return i; + } + + return -ENOENT; +} + +static enum extern_type find_extern_type(const struct btf *btf, int id, + bool *is_signed) +{ + const struct btf_type *t; + const char *name; + + t = skip_mods_and_typedefs(btf, id, NULL); + name = btf__name_by_offset(btf, t->name_off); + + if (is_signed) + *is_signed = false; + switch (btf_kind(t)) { + case BTF_KIND_INT: { + int enc = btf_int_encoding(t); + + if (enc & BTF_INT_BOOL) + return t->size == 1 ? EXT_BOOL : EXT_UNKNOWN; + if (is_signed) + *is_signed = enc & BTF_INT_SIGNED; + if (t->size == 1) + return EXT_CHAR; + if (t->size < 1 || t->size > 8 || (t->size & (t->size - 1))) + return EXT_UNKNOWN; + return EXT_INT; + } + case BTF_KIND_ENUM: + if (t->size != 4) + return EXT_UNKNOWN; + if (strcmp(name, "libbpf_tristate")) + return EXT_UNKNOWN; + return EXT_TRISTATE; + case BTF_KIND_ARRAY: + if (btf_array(t)->nelems == 0) + return EXT_UNKNOWN; + if (find_extern_type(btf, btf_array(t)->type, NULL) != EXT_CHAR) + return EXT_UNKNOWN; + return EXT_CHAR_ARR; + default: + return EXT_UNKNOWN; + } +} + +static int cmp_externs(const void *_a, const void *_b) +{ + const struct extern_desc *a = _a; + const struct extern_desc *b = _b; + + /* descending order by alignment requirements */ + if (a->align != b->align) + return a->align > b->align ? -1 : 1; + /* ascending order by size, within same alignment class */ + if (a->sz != b->sz) + return a->sz < b->sz ? 
-1 : 1; + /* resolve ties by name */ + return strcmp(a->name, b->name); +} + +static int bpf_object__collect_externs(struct bpf_object *obj) +{ + const struct btf_type *t; + struct extern_desc *ext; + int i, n, off, btf_id; + struct btf_type *sec; + const char *ext_name; + Elf_Scn *scn; + GElf_Shdr sh; + + if (!obj->efile.symbols) + return 0; + + scn = elf_getscn(obj->efile.elf, obj->efile.symbols_shndx); + if (!scn) + return -LIBBPF_ERRNO__FORMAT; + if (gelf_getshdr(scn, &sh) != &sh) + return -LIBBPF_ERRNO__FORMAT; + n = sh.sh_size / sh.sh_entsize; + + pr_debug("looking for externs among %d symbols...\n", n); + for (i = 0; i < n; i++) { + GElf_Sym sym; + + if (!gelf_getsym(obj->efile.symbols, i, &sym)) + return -LIBBPF_ERRNO__FORMAT; + if (!sym_is_extern(&sym)) + continue; + ext_name = elf_strptr(obj->efile.elf, obj->efile.strtabidx, + sym.st_name); + if (!ext_name || !ext_name[0]) + continue; + + ext = obj->externs; + ext = reallocarray(ext, obj->nr_extern + 1, sizeof(*ext)); + if (!ext) + return -ENOMEM; + obj->externs = ext; + ext = &ext[obj->nr_extern]; + memset(ext, 0, sizeof(*ext)); + obj->nr_extern++; + + ext->btf_id = find_extern_btf_id(obj->btf, ext_name); + if (ext->btf_id <= 0) { + pr_warn("failed to find BTF for extern '%s': %d\n", + ext_name, ext->btf_id); + return ext->btf_id; + } + t = btf__type_by_id(obj->btf, ext->btf_id); + ext->name = btf__name_by_offset(obj->btf, t->name_off); + ext->sym_idx = i; + ext->is_weak = GELF_ST_BIND(sym.st_info) == STB_WEAK; + ext->sz = btf__resolve_size(obj->btf, t->type); + if (ext->sz <= 0) { + pr_warn("failed to resolve size of extern '%s': %d\n", + ext_name, ext->sz); + return ext->sz; + } + ext->align = btf__align_of(obj->btf, t->type); + if (ext->align <= 0) { + pr_warn("failed to determine alignment of extern '%s': %d\n", + ext_name, ext->align); + return -EINVAL; + } + ext->type = find_extern_type(obj->btf, t->type, + &ext->is_signed); + if (ext->type == EXT_UNKNOWN) { + pr_warn("extern '%s' type is unsupported\n", ext_name); + return -ENOTSUP; + } + } + pr_debug("collected %d externs total\n", obj->nr_extern); + + if (!obj->nr_extern) + return 0; + + /* sort externs by (alignment, size, name) and calculate their offsets + * within a map */ + qsort(obj->externs, obj->nr_extern, sizeof(*ext), cmp_externs); + off = 0; + for (i = 0; i < obj->nr_extern; i++) { + ext = &obj->externs[i]; + ext->data_off = roundup(off, ext->align); + off = ext->data_off + ext->sz; + pr_debug("extern #%d: symbol %d, off %u, name %s\n", + i, ext->sym_idx, ext->data_off, ext->name); + } + + btf_id = btf__find_by_name(obj->btf, EXTERN_SEC); + if (btf_id <= 0) { + pr_warn("no BTF info found for '%s' datasec\n", EXTERN_SEC); + return -ESRCH; + } + + sec = (struct btf_type *)btf__type_by_id(obj->btf, btf_id); + sec->size = off; + n = btf_vlen(sec); + for (i = 0; i < n; i++) { + struct btf_var_secinfo *vs = btf_var_secinfos(sec) + i; + + t = btf__type_by_id(obj->btf, vs->type); + ext_name = btf__name_by_offset(obj->btf, t->name_off); + ext = find_extern_by_name(obj, ext_name); + if (!ext) { + pr_warn("failed to find extern definition for BTF var '%s'\n", + ext_name); + return -ESRCH; + } + vs->offset = ext->data_off; + btf_var(t)->linkage = BTF_VAR_GLOBAL_ALLOCATED; + } + + return 0; +} + static struct bpf_program * bpf_object__find_prog_by_idx(struct bpf_object *obj, int idx) { @@ -1801,6 +2333,8 @@ bpf_object__section_to_libbpf_map_type(const struct bpf_object *obj, int shndx) return LIBBPF_MAP_BSS; else if (shndx == obj->efile.rodata_shndx) return 
LIBBPF_MAP_RODATA; + else if (shndx == obj->efile.symbols_shndx) + return LIBBPF_MAP_EXTERN; else return LIBBPF_MAP_UNSPEC; } @@ -1845,6 +2379,30 @@ static int bpf_program__record_reloc(struct bpf_program *prog, insn_idx, insn->code); return -LIBBPF_ERRNO__RELOC; } + + if (sym_is_extern(sym)) { + int sym_idx = GELF_R_SYM(rel->r_info); + int i, n = obj->nr_extern; + struct extern_desc *ext; + + for (i = 0; i < n; i++) { + ext = &obj->externs[i]; + if (ext->sym_idx == sym_idx) + break; + } + if (i >= n) { + pr_warn("extern relo failed to find extern for sym %d\n", + sym_idx); + return -LIBBPF_ERRNO__RELOC; + } + pr_debug("found extern #%d '%s' (sym %d, off %u) for insn %u\n", + i, ext->name, ext->sym_idx, ext->data_off, insn_idx); + reloc_desc->type = RELO_EXTERN; + reloc_desc->insn_idx = insn_idx; + reloc_desc->sym_off = ext->data_off; + return 0; + } + if (!shdr_idx || shdr_idx >= SHN_LORESERVE) { pr_warn("invalid relo for \'%s\' in special section 0x%x; forgot to initialize global var?..\n", name, shdr_idx); @@ -2306,11 +2864,12 @@ bpf_object__reuse_map(struct bpf_map *map) static int bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map) { + enum libbpf_map_type map_type = map->libbpf_type; char *cp, errmsg[STRERR_BUFSIZE]; int err, zero = 0; - /* Nothing to do here since kernel already zero-initializes .bss map. */ - if (map->libbpf_type == LIBBPF_MAP_BSS) + /* kernel already zero-initializes .bss map. */ + if (map_type == LIBBPF_MAP_BSS) return 0; err = bpf_map_update_elem(map->fd, &zero, map->mmaped, 0); @@ -2322,8 +2881,8 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map) return err; } - /* Freeze .rodata map as read-only from syscall side. */ - if (map->libbpf_type == LIBBPF_MAP_RODATA) { + /* Freeze .rodata and .extern map as read-only from syscall side. 
*/ + if (map_type == LIBBPF_MAP_RODATA || map_type == LIBBPF_MAP_EXTERN) { err = bpf_map_freeze(map->fd); if (err) { err = -errno; @@ -3572,9 +4131,6 @@ bpf_program__reloc_text(struct bpf_program *prog, struct bpf_object *obj, size_t new_cnt; int err; - if (relo->type != RELO_CALL) - return -LIBBPF_ERRNO__RELOC; - if (prog->idx == obj->efile.text_shndx) { pr_warn("relo in .text insn %d into off %d (insn #%d)\n", relo->insn_idx, relo->sym_off, relo->sym_off / 8); @@ -3636,27 +4192,37 @@ bpf_program__relocate(struct bpf_program *prog, struct bpf_object *obj) for (i = 0; i < prog->nr_reloc; i++) { struct reloc_desc *relo = &prog->reloc_desc[i]; + struct bpf_insn *insn = &prog->insns[relo->insn_idx]; - if (relo->type == RELO_LD64 || relo->type == RELO_DATA) { - struct bpf_insn *insn = &prog->insns[relo->insn_idx]; - - if (relo->insn_idx + 1 >= (int)prog->insns_cnt) { - pr_warn("relocation out of range: '%s'\n", - prog->section_name); - return -LIBBPF_ERRNO__RELOC; - } + if (relo->insn_idx + 1 >= (int)prog->insns_cnt) { + pr_warn("relocation out of range: '%s'\n", + prog->section_name); + return -LIBBPF_ERRNO__RELOC; + } - if (relo->type != RELO_DATA) { - insn[0].src_reg = BPF_PSEUDO_MAP_FD; - } else { - insn[0].src_reg = BPF_PSEUDO_MAP_VALUE; - insn[1].imm = insn[0].imm + relo->sym_off; - } + switch (relo->type) { + case RELO_LD64: + insn[0].src_reg = BPF_PSEUDO_MAP_FD; + insn[0].imm = obj->maps[relo->map_idx].fd; + break; + case RELO_DATA: + insn[0].src_reg = BPF_PSEUDO_MAP_VALUE; + insn[1].imm = insn[0].imm + relo->sym_off; insn[0].imm = obj->maps[relo->map_idx].fd; - } else if (relo->type == RELO_CALL) { + break; + case RELO_EXTERN: + insn[0].src_reg = BPF_PSEUDO_MAP_VALUE; + insn[0].imm = obj->maps[obj->extern_map_idx].fd; + insn[1].imm = relo->sym_off; + break; + case RELO_CALL: err = bpf_program__reloc_text(prog, obj, relo); if (err) return err; + break; + default: + pr_warn("relo #%d: bad relo type %d\n", i, relo->type); + return -EINVAL; } } @@ -3936,11 +4502,10 @@ static struct bpf_object * __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz, const struct bpf_object_open_opts *opts) { + const char *obj_name, *kconfig_path; struct bpf_program *prog; struct bpf_object *obj; - const char *obj_name; char tmp_name[64]; - __u32 attach_prog_fd; int err; if (elf_version(EV_CURRENT) == EV_NONE) { @@ -3969,11 +4534,18 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz, return obj; obj->relaxed_core_relocs = OPTS_GET(opts, relaxed_core_relocs, false); - attach_prog_fd = OPTS_GET(opts, attach_prog_fd, 0); + kconfig_path = OPTS_GET(opts, kconfig_path, NULL); + if (kconfig_path) { + obj->kconfig_path = strdup(kconfig_path); + if (!obj->kconfig_path) + return ERR_PTR(-ENOMEM); + } err = bpf_object__elf_init(obj); err = err ? : bpf_object__check_endianness(obj); err = err ? : bpf_object__elf_collect(obj); + err = err ? : bpf_object__collect_externs(obj); + err = err ? : bpf_object__finalize_btf(obj); err = err ? : bpf_object__init_maps(obj, opts); err = err ? : bpf_object__init_prog_names(obj); err = err ? 
: bpf_object__collect_reloc(obj); @@ -3996,7 +4568,7 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz, bpf_program__set_type(prog, prog_type); bpf_program__set_expected_attach_type(prog, attach_type); if (prog_type == BPF_PROG_TYPE_TRACING) - prog->attach_prog_fd = attach_prog_fd; + prog->attach_prog_fd = OPTS_GET(opts, attach_prog_fd, 0); } return obj; @@ -4107,6 +4679,61 @@ static int bpf_object__sanitize_maps(struct bpf_object *obj) return 0; } +static int bpf_object__resolve_externs(struct bpf_object *obj, + const char *config_path) +{ + bool need_config = false; + struct extern_desc *ext; + int err, i; + void *data; + + if (obj->nr_extern == 0) + return 0; + + data = obj->maps[obj->extern_map_idx].mmaped; + + for (i = 0; i < obj->nr_extern; i++) { + ext = &obj->externs[i]; + + if (strcmp(ext->name, "LINUX_KERNEL_VERSION") == 0) { + void *ext_val = data + ext->data_off; + __u32 kver = get_kernel_version(); + + if (!kver) { + pr_warn("failed to get kernel version\n"); + return -EINVAL; + } + err = set_ext_value_num(ext, ext_val, kver); + if (err) + return err; + pr_debug("extern %s=0x%x\n", ext->name, kver); + } else if (strncmp(ext->name, "CONFIG_", 7) == 0) { + need_config = true; + } else { + pr_warn("unrecognized extern '%s'\n", ext->name); + return -EINVAL; + } + } + if (need_config) { + err = bpf_object__read_kernel_config(obj, config_path, data); + if (err) + return -EINVAL; + } + for (i = 0; i < obj->nr_extern; i++) { + ext = &obj->externs[i]; + + if (!ext->is_set && !ext->is_weak) { + pr_warn("extern %s (strong) not resolved\n", ext->name); + return -ESRCH; + } else if (!ext->is_set) { + pr_debug("extern %s (weak) not resolved, defaulting to zero\n", + ext->name); + } + } + + return 0; +} + int bpf_object__load_xattr(struct bpf_object_load_attr *attr) { struct bpf_object *obj; @@ -4126,6 +4753,7 @@ int bpf_object__load_xattr(struct bpf_object_load_attr *attr) obj->loaded = true; err = bpf_object__probe_caps(obj); + err = err ? : bpf_object__resolve_externs(obj, obj->kconfig_path); err = err ? : bpf_object__sanitize_and_load_btf(obj); err = err ? : bpf_object__sanitize_maps(obj); err = err ? 
: bpf_object__create_maps(obj); @@ -4719,6 +5347,10 @@ void bpf_object__close(struct bpf_object *obj) zfree(&map->pin_path); } + zfree(&obj->kconfig_path); + zfree(&obj->externs); + obj->nr_extern = 0; + zfree(&obj->maps); obj->nr_maps = 0; diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h index 623191e71415..6340823871e2 100644 --- a/tools/lib/bpf/libbpf.h +++ b/tools/lib/bpf/libbpf.h @@ -85,8 +85,12 @@ struct bpf_object_open_opts { */ const char *pin_root_path; __u32 attach_prog_fd; + /* kernel config file path override (for CONFIG_ externs); can point + * to either uncompressed text file or .gz file + */ + const char *kconfig_path; }; -#define bpf_object_open_opts__last_field attach_prog_fd +#define bpf_object_open_opts__last_field kconfig_path LIBBPF_API struct bpf_object *bpf_object__open(const char *path); LIBBPF_API struct bpf_object * @@ -669,6 +673,12 @@ LIBBPF_API int bpf_object__attach_skeleton(struct bpf_object_skeleton *s); LIBBPF_API void bpf_object__detach_skeleton(struct bpf_object_skeleton *s); LIBBPF_API void bpf_object__destroy_skeleton(struct bpf_object_skeleton *s); +enum libbpf_tristate { + TRI_NO = 0, + TRI_YES = 1, + TRI_MODULE = 2, +}; + #ifdef __cplusplus } /* extern "C" */ #endif diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index f70c8e735120..ba90621617a8 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -24,7 +24,7 @@ CFLAGS += -g -Wall -O2 $(GENFLAGS) -I$(APIDIR) -I$(LIBDIR) -I$(BPFDIR) \ -I$(GENDIR) -I$(TOOLSINCDIR) -I$(CURDIR) \ -Dbpf_prog_load=bpf_prog_test_load \ -Dbpf_load_program=bpf_test_load_program -LDLIBS += -lcap -lelf -lrt -lpthread +LDLIBS += -lcap -lelf -lz -lrt -lpthread # Order correspond to 'make run_tests' order TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_lpm_map test_progs \ -- cgit v1.2.3-71-gd317 From 9429439f59cd3b82a3e2732ead5363578de97a84 Mon Sep 17 00:00:00 2001 From: Yangbo Lu Date: Thu, 12 Dec 2019 18:08:05 +0800 Subject: ptp_qoriq: export extts_clean_up() function Export extts_clean_up() function so that dpaa2-ptp driver is able to reuse it. Signed-off-by: Yangbo Lu Signed-off-by: David S. 
Miller --- drivers/ptp/ptp_qoriq.c | 4 ++-- include/linux/fsl/ptp_qoriq.h | 1 + 2 files changed, 3 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/drivers/ptp/ptp_qoriq.c b/drivers/ptp/ptp_qoriq.c index a3062cd4d174..b27c46ebfc8f 100644 --- a/drivers/ptp/ptp_qoriq.c +++ b/drivers/ptp/ptp_qoriq.c @@ -74,8 +74,7 @@ static void set_fipers(struct ptp_qoriq *ptp_qoriq) ptp_qoriq->write(&regs->fiper_regs->tmr_fiper2, ptp_qoriq->tmr_fiper2); } -static int extts_clean_up(struct ptp_qoriq *ptp_qoriq, int index, - bool update_event) +int extts_clean_up(struct ptp_qoriq *ptp_qoriq, int index, bool update_event) { struct ptp_qoriq_registers *regs = &ptp_qoriq->regs; struct ptp_clock_event event; @@ -121,6 +120,7 @@ static int extts_clean_up(struct ptp_qoriq *ptp_qoriq, int index, return 0; } +EXPORT_SYMBOL_GPL(extts_clean_up); /* * Interrupt service routine diff --git a/include/linux/fsl/ptp_qoriq.h b/include/linux/fsl/ptp_qoriq.h index 992bf9fa1729..b0b743563f43 100644 --- a/include/linux/fsl/ptp_qoriq.h +++ b/include/linux/fsl/ptp_qoriq.h @@ -192,6 +192,7 @@ int ptp_qoriq_settime(struct ptp_clock_info *ptp, const struct timespec64 *ts); int ptp_qoriq_enable(struct ptp_clock_info *ptp, struct ptp_clock_request *rq, int on); +int extts_clean_up(struct ptp_qoriq *ptp_qoriq, int index, bool update_event); #ifdef CONFIG_DEBUG_FS void ptp_qoriq_create_debugfs(struct ptp_qoriq *ptp_qoriq); void ptp_qoriq_remove_debugfs(struct ptp_qoriq *ptp_qoriq); -- cgit v1.2.3-71-gd317 From 1f1c1d7c89ee538f3e36b43098e95973f8fa37db Mon Sep 17 00:00:00 2001 From: Sven Eckelmann Date: Fri, 13 Dec 2019 21:24:27 +0100 Subject: ipv6: Annotate bitwise IPv6 dsfield pointer cast The sparse commit 6002ded74587 ("add a flag to warn on casts to/from bitwise pointers") introduced a check for non-direct casts from/to restricted datatypes (when -Wbitwise-pointer is enabled). This triggered a warning in ipv6_get_dsfield() because sparse doesn't know that the buffer already points to some data in the correct bitwise integer format. This was already fixed in ipv6_change_dsfield() by the __force attribute and can be fixed here the same way. Signed-off-by: Sven Eckelmann Signed-off-by: David S. Miller --- include/net/dsfield.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'include') diff --git a/include/net/dsfield.h b/include/net/dsfield.h index 1a245ee10c95..a59a57ffc546 100644 --- a/include/net/dsfield.h +++ b/include/net/dsfield.h @@ -21,7 +21,7 @@ static inline __u8 ipv4_get_dsfield(const struct iphdr *iph) static inline __u8 ipv6_get_dsfield(const struct ipv6hdr *ipv6h) { - return ntohs(*(const __be16 *)ipv6h) >> 4; + return ntohs(*(__force const __be16 *)ipv6h) >> 4; } -- cgit v1.2.3-71-gd317 From 54e1f08bddbe63a3c0ae44f65df2c8b895003ef4 Mon Sep 17 00:00:00 2001 From: Sven Eckelmann Date: Fri, 13 Dec 2019 21:24:28 +0100 Subject: ipv6: Annotate ipv6_addr_is_* bitwise pointer casts The sparse commit 6002ded74587 ("add a flag to warn on casts to/from bitwise pointers") introduced a check for non-direct casts from/to restricted datatypes (when -Wbitwise-pointer is enabled). This triggered a warning in the 64 bit optimized ipv6_addr_is_*() functions because sparse doesn't know that the buffer already points to some data in the correct bitwise integer format. But these casts are correct and can therefore be marked with __force to signal to sparse that the cast to a specific bitwise type is intended. Signed-off-by: Sven Eckelmann Signed-off-by: David S.
Miller --- include/net/addrconf.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) (limited to 'include') diff --git a/include/net/addrconf.h b/include/net/addrconf.h index 1bab88184d3c..a088349dd94f 100644 --- a/include/net/addrconf.h +++ b/include/net/addrconf.h @@ -437,7 +437,7 @@ static inline void addrconf_addr_solict_mult(const struct in6_addr *addr, static inline bool ipv6_addr_is_ll_all_nodes(const struct in6_addr *addr) { #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64 - __be64 *p = (__be64 *)addr; + __be64 *p = (__force __be64 *)addr; return ((p[0] ^ cpu_to_be64(0xff02000000000000UL)) | (p[1] ^ cpu_to_be64(1))) == 0UL; #else return ((addr->s6_addr32[0] ^ htonl(0xff020000)) | @@ -449,7 +449,7 @@ static inline bool ipv6_addr_is_ll_all_nodes(const struct in6_addr *addr) static inline bool ipv6_addr_is_ll_all_routers(const struct in6_addr *addr) { #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64 - __be64 *p = (__be64 *)addr; + __be64 *p = (__force __be64 *)addr; return ((p[0] ^ cpu_to_be64(0xff02000000000000UL)) | (p[1] ^ cpu_to_be64(2))) == 0UL; #else return ((addr->s6_addr32[0] ^ htonl(0xff020000)) | @@ -466,7 +466,7 @@ static inline bool ipv6_addr_is_isatap(const struct in6_addr *addr) static inline bool ipv6_addr_is_solict_mult(const struct in6_addr *addr) { #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64 - __be64 *p = (__be64 *)addr; + __be64 *p = (__force __be64 *)addr; return ((p[0] ^ cpu_to_be64(0xff02000000000000UL)) | ((p[1] ^ cpu_to_be64(0x00000001ff000000UL)) & cpu_to_be64(0xffffffffff000000UL))) == 0UL; @@ -481,7 +481,7 @@ static inline bool ipv6_addr_is_solict_mult(const struct in6_addr *addr) static inline bool ipv6_addr_is_all_snoopers(const struct in6_addr *addr) { #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64 - __be64 *p = (__be64 *)addr; + __be64 *p = (__force __be64 *)addr; return ((p[0] ^ cpu_to_be64(0xff02000000000000UL)) | (p[1] ^ cpu_to_be64(0x6a))) == 0UL; -- cgit v1.2.3-71-gd317 From a2ec8b5706944d228181c8b91d815f41d6dd8e7b Mon Sep 17 00:00:00 2001 From: Josh Soref Date: Sun, 15 Dec 2019 22:08:02 +0100 Subject: wireguard: global: fix spelling mistakes in comments This fixes a few spelling mistakes in source code comments. Signed-off-by: Josh Soref [Jason: rewrote commit message] Signed-off-by: Jason A. Donenfeld Signed-off-by: David S. Miller --- drivers/net/wireguard/receive.c | 2 +- include/uapi/linux/wireguard.h | 8 ++++---- 2 files changed, 5 insertions(+), 5 deletions(-) (limited to 'include') diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c index 7e675f541491..9c6bab9c981f 100644 --- a/drivers/net/wireguard/receive.c +++ b/drivers/net/wireguard/receive.c @@ -380,7 +380,7 @@ static void wg_packet_consume_data_done(struct wg_peer *peer, /* We've already verified the Poly1305 auth tag, which means this packet * was not modified in transit. We can therefore tell the networking * stack that all checksums of every layer of encapsulation have already - * been checked "by the hardware" and therefore is unneccessary to check + * been checked "by the hardware" and therefore is unnecessary to check * again in software.
*/ skb->ip_summed = CHECKSUM_UNNECESSARY; diff --git a/include/uapi/linux/wireguard.h b/include/uapi/linux/wireguard.h index dd8a47c4ad11..ae88be14c947 100644 --- a/include/uapi/linux/wireguard.h +++ b/include/uapi/linux/wireguard.h @@ -18,13 +18,13 @@ * one but not both of: * * WGDEVICE_A_IFINDEX: NLA_U32 - * WGDEVICE_A_IFNAME: NLA_NUL_STRING, maxlen IFNAMESIZ - 1 + * WGDEVICE_A_IFNAME: NLA_NUL_STRING, maxlen IFNAMSIZ - 1 * * The kernel will then return several messages (NLM_F_MULTI) containing the * following tree of nested items: * * WGDEVICE_A_IFINDEX: NLA_U32 - * WGDEVICE_A_IFNAME: NLA_NUL_STRING, maxlen IFNAMESIZ - 1 + * WGDEVICE_A_IFNAME: NLA_NUL_STRING, maxlen IFNAMSIZ - 1 * WGDEVICE_A_PRIVATE_KEY: NLA_EXACT_LEN, len WG_KEY_LEN * WGDEVICE_A_PUBLIC_KEY: NLA_EXACT_LEN, len WG_KEY_LEN * WGDEVICE_A_LISTEN_PORT: NLA_U16 @@ -77,7 +77,7 @@ * WGDEVICE_A_IFINDEX and WGDEVICE_A_IFNAME: * * WGDEVICE_A_IFINDEX: NLA_U32 - * WGDEVICE_A_IFNAME: NLA_NUL_STRING, maxlen IFNAMESIZ - 1 + * WGDEVICE_A_IFNAME: NLA_NUL_STRING, maxlen IFNAMSIZ - 1 * WGDEVICE_A_FLAGS: NLA_U32, 0 or WGDEVICE_F_REPLACE_PEERS if all current * peers should be removed prior to adding the list below. * WGDEVICE_A_PRIVATE_KEY: len WG_KEY_LEN, all zeros to remove @@ -121,7 +121,7 @@ * filling in information not contained in the prior. Note that if * WGDEVICE_F_REPLACE_PEERS is specified in the first message, it probably * should not be specified in fragments that come after, so that the list - * of peers is only cleared the first time but appened after. Likewise for + * of peers is only cleared the first time but appended after. Likewise for * peers, if WGPEER_F_REPLACE_ALLOWEDIPS is specified in the first message * of a peer, it likely should not be specified in subsequent fragments. * -- cgit v1.2.3-71-gd317 From 2f5e70c8ce47396bfa8f5c437574b569c02597bb Mon Sep 17 00:00:00 2001 From: Lukas Wunner Date: Wed, 20 Nov 2019 12:33:59 +0100 Subject: netfilter: Document ingress hook Amend kerneldoc of struct net_device to fix a "make htmldocs" warning: include/linux/netdevice.h:2045: warning: Function parameter or member 'nf_hooks_ingress' not described in 'net_device' Reported-by: kbuild test robot Signed-off-by: Lukas Wunner Cc: Daniel Borkmann Signed-off-by: Pablo Neira Ayuso --- include/linux/netdevice.h | 1 + 1 file changed, 1 insertion(+) (limited to 'include') diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 30745068fb39..0b097bbd3663 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -1708,6 +1708,7 @@ enum netdev_priv_flags { * @miniq_ingress: ingress/clsact qdisc specific data for * ingress processing * @ingress_queue: XXX: need comments on this one + * @nf_hooks_ingress: netfilter hooks executed for ingress packets * @broadcast: hw bcast address * * @rx_cpu_rmap: CPU reverse-mapping for RX completion interrupts, -- cgit v1.2.3-71-gd317 From d4aef159394d5940bd7158ab789969dab82f7c76 Mon Sep 17 00:00:00 2001 From: Soeren Moch Date: Thu, 12 Dec 2019 00:52:49 +0100 Subject: brcmfmac: add support for BCM4359 SDIO chipset BCM4359 is a 2x2 802.11 abgn+ac Dual-Band HT80 combo chip and it supports Real Simultaneous Dual Band feature. 
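For context, brcmfmac keys firmware selection on chip ID plus a chip-revision bitmask, so the BRCMF_FW_ENTRY(BRCM_CC_4359_CHIP_ID, 0xFFFFFFFF, 4359) entry added below maps every 4359 revision to the brcmfmac4359-sdio firmware image; a hypothetical mask of 0x0000000F, by contrast, would cover only revisions 0-3.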
Based on a similar patch by: Wright Feng Signed-off-by: Soeren Moch Acked-by: Chi-Hsien Lin Acked-by: Ulf Hansson Signed-off-by: Kalle Valo --- drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c | 2 ++ drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c | 1 + drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c | 2 ++ include/linux/mmc/sdio_ids.h | 2 ++ 4 files changed, 7 insertions(+) (limited to 'include') diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c index 68baf0189305..f4c53ab46058 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c @@ -973,8 +973,10 @@ static const struct sdio_device_id brcmf_sdmmc_ids[] = { BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_43455), BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_4354), BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_4356), + BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_4359), BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_CYPRESS_4373), BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_CYPRESS_43012), + BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_CYPRESS_89359), { /* end: all zeroes */ } }; MODULE_DEVICE_TABLE(sdio, brcmf_sdmmc_ids); diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c index baf72e3984fc..282d0bc14e8e 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c @@ -1408,6 +1408,7 @@ bool brcmf_chip_sr_capable(struct brcmf_chip *pub) addr = CORE_CC_REG(base, sr_control0); reg = chip->ops->read32(chip->ctx, addr); return (reg & CC_SR_CTL0_ENABLE_MASK) != 0; + case BRCM_CC_4359_CHIP_ID: case CY_CC_43012_CHIP_ID: addr = CORE_CC_REG(pmu->base, retention_ctl); reg = chip->ops->read32(chip->ctx, addr); diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c index 4d5b7770d4ea..f9df95bc7fa1 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c @@ -616,6 +616,7 @@ BRCMF_FW_DEF(43455, "brcmfmac43455-sdio"); BRCMF_FW_DEF(43456, "brcmfmac43456-sdio"); BRCMF_FW_DEF(4354, "brcmfmac4354-sdio"); BRCMF_FW_DEF(4356, "brcmfmac4356-sdio"); +BRCMF_FW_DEF(4359, "brcmfmac4359-sdio"); BRCMF_FW_DEF(4373, "brcmfmac4373-sdio"); BRCMF_FW_DEF(43012, "brcmfmac43012-sdio"); @@ -638,6 +639,7 @@ static const struct brcmf_firmware_mapping brcmf_sdio_fwnames[] = { BRCMF_FW_ENTRY(BRCM_CC_4345_CHIP_ID, 0xFFFFFDC0, 43455), BRCMF_FW_ENTRY(BRCM_CC_4354_CHIP_ID, 0xFFFFFFFF, 4354), BRCMF_FW_ENTRY(BRCM_CC_4356_CHIP_ID, 0xFFFFFFFF, 4356), + BRCMF_FW_ENTRY(BRCM_CC_4359_CHIP_ID, 0xFFFFFFFF, 4359), BRCMF_FW_ENTRY(CY_CC_4373_CHIP_ID, 0xFFFFFFFF, 4373), BRCMF_FW_ENTRY(CY_CC_43012_CHIP_ID, 0xFFFFFFFF, 43012) }; diff --git a/include/linux/mmc/sdio_ids.h b/include/linux/mmc/sdio_ids.h index 08b25c02b5a1..2e9a6e4634eb 100644 --- a/include/linux/mmc/sdio_ids.h +++ b/include/linux/mmc/sdio_ids.h @@ -41,8 +41,10 @@ #define SDIO_DEVICE_ID_BROADCOM_43455 0xa9bf #define SDIO_DEVICE_ID_BROADCOM_4354 0x4354 #define SDIO_DEVICE_ID_BROADCOM_4356 0x4356 +#define SDIO_DEVICE_ID_BROADCOM_4359 0x4359 #define SDIO_DEVICE_ID_CYPRESS_4373 0x4373 #define SDIO_DEVICE_ID_CYPRESS_43012 43012 +#define SDIO_DEVICE_ID_CYPRESS_89359 0x4355 #define SDIO_VENDOR_ID_INTEL 0x0089 #define SDIO_DEVICE_ID_INTEL_IWMC3200WIMAX 0x1402 -- cgit v1.2.3-71-gd317 From 504723af0d85434be5fb6f2dde0b62644a7f1ead 
Mon Sep 17 00:00:00 2001 From: Jose Abreu Date: Wed, 18 Dec 2019 11:33:05 +0100 Subject: net: stmmac: Add basic EST support for GMAC5+ Add basic support for EST in GMAC5+ cores. This feature allows offloading the scheduling of queue opening times to the IP. Signed-off-by: Jose Abreu Signed-off-by: David S. Miller --- drivers/net/ethernet/stmicro/stmmac/common.h | 4 + drivers/net/ethernet/stmicro/stmmac/dwmac4.h | 9 +++ drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c | 2 + drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c | 3 + drivers/net/ethernet/stmicro/stmmac/dwmac5.c | 95 +++++++++++++++++++++++ drivers/net/ethernet/stmicro/stmmac/dwmac5.h | 19 +++++ drivers/net/ethernet/stmicro/stmmac/hwif.h | 5 ++ include/linux/stmmac.h | 13 ++++ 8 files changed, 150 insertions(+) (limited to 'include') diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h index b210e987a1db..2f1fc93a9b1e 100644 --- a/drivers/net/ethernet/stmicro/stmmac/common.h +++ b/drivers/net/ethernet/stmicro/stmmac/common.h @@ -363,6 +363,10 @@ struct dma_features { unsigned int dvlan; unsigned int l3l4fnum; unsigned int arpoffsel; + /* TSN Features */ + unsigned int estwid; + unsigned int estdep; + unsigned int estsel; }; /* GMAC TX FIFO is 8K, Rx FIFO is 16K */ diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4.h b/drivers/net/ethernet/stmicro/stmmac/dwmac4.h index 2dc70d104161..f7f5a2751147 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac4.h +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4.h @@ -176,6 +176,8 @@ enum power_event { #define GMAC_CONFIG_SARC GENMASK(30, 28) #define GMAC_CONFIG_SARC_SHIFT 28 #define GMAC_CONFIG_IPC BIT(27) +#define GMAC_CONFIG_IPG GENMASK(26, 24) +#define GMAC_CONFIG_IPG_SHIFT 24 #define GMAC_CONFIG_2K BIT(22) #define GMAC_CONFIG_ACS BIT(20) #define GMAC_CONFIG_BE BIT(18) @@ -183,6 +185,7 @@ enum power_event { #define GMAC_CONFIG_JE BIT(16) #define GMAC_CONFIG_PS BIT(15) #define GMAC_CONFIG_FES BIT(14) +#define GMAC_CONFIG_FES_SHIFT 14 #define GMAC_CONFIG_DM BIT(13) #define GMAC_CONFIG_LM BIT(12) #define GMAC_CONFIG_DCRS BIT(9) @@ -190,6 +193,9 @@ enum power_event { #define GMAC_CONFIG_RE BIT(0) /* MAC extended config */ +#define GMAC_CONFIG_EIPG GENMASK(29, 25) +#define GMAC_CONFIG_EIPG_SHIFT 25 +#define GMAC_CONFIG_EIPG_EN BIT(24) #define GMAC_CONFIG_HDSMS GENMASK(22, 20) #define GMAC_CONFIG_HDSMS_SHIFT 20 #define GMAC_CONFIG_HDSMS_256 (0x2 << GMAC_CONFIG_HDSMS_SHIFT) @@ -231,6 +237,9 @@ enum power_event { /* MAC HW features3 bitmap */ #define GMAC_HW_FEAT_ASP GENMASK(29, 28) +#define GMAC_HW_FEAT_ESTWID GENMASK(21, 20) +#define GMAC_HW_FEAT_ESTDEP GENMASK(19, 17) +#define GMAC_HW_FEAT_ESTSEL BIT(16) #define GMAC_HW_FEAT_FRPES GENMASK(14, 13) #define GMAC_HW_FEAT_FRPBS GENMASK(12, 11) #define GMAC_HW_FEAT_FRPSEL BIT(10) diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c index 40ca00e596dd..8df7496411a0 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c @@ -984,6 +984,7 @@ const struct stmmac_ops dwmac410_ops = { .set_arp_offload = dwmac4_set_arp_offload, .config_l3_filter = dwmac4_config_l3_filter, .config_l4_filter = dwmac4_config_l4_filter, + .est_configure = dwmac5_est_configure, }; const struct stmmac_ops dwmac510_ops = { @@ -1027,6 +1028,7 @@ const struct stmmac_ops dwmac510_ops = { .set_arp_offload = dwmac4_set_arp_offload, .config_l3_filter = dwmac4_config_l3_filter, .config_l4_filter =
dwmac4_config_l4_filter, + .est_configure = dwmac5_est_configure, }; int dwmac4_setup(struct stmmac_priv *priv) diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c index c15409030710..3552ce1201ff 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c @@ -404,6 +404,9 @@ static void dwmac4_get_hw_feature(void __iomem *ioaddr, /* 5.10 Features */ dma_cap->asp = (hw_cap & GMAC_HW_FEAT_ASP) >> 28; + dma_cap->estwid = (hw_cap & GMAC_HW_FEAT_ESTWID) >> 20; + dma_cap->estdep = (hw_cap & GMAC_HW_FEAT_ESTDEP) >> 17; + dma_cap->estsel = (hw_cap & GMAC_HW_FEAT_ESTSEL) >> 16; dma_cap->frpes = (hw_cap & GMAC_HW_FEAT_FRPES) >> 13; dma_cap->frpbs = (hw_cap & GMAC_HW_FEAT_FRPBS) >> 11; dma_cap->frpsel = (hw_cap & GMAC_HW_FEAT_FRPSEL) >> 10; diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac5.c b/drivers/net/ethernet/stmicro/stmmac/dwmac5.c index e436fa160c7d..8047f402fb3f 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac5.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac5.c @@ -550,3 +550,98 @@ int dwmac5_flex_pps_config(void __iomem *ioaddr, int index, writel(val, ioaddr + MAC_PPS_CONTROL); return 0; } + +static int dwmac5_est_write(void __iomem *ioaddr, u32 reg, u32 val, bool gcl) +{ + u32 ctrl; + + writel(val, ioaddr + MTL_EST_GCL_DATA); + + ctrl = (reg << ADDR_SHIFT); + ctrl |= gcl ? 0 : GCRR; + + writel(ctrl, ioaddr + MTL_EST_GCL_CONTROL); + + ctrl |= SRWO; + writel(ctrl, ioaddr + MTL_EST_GCL_CONTROL); + + return readl_poll_timeout(ioaddr + MTL_EST_GCL_CONTROL, + ctrl, !(ctrl & SRWO), 100, 5000); +} + +int dwmac5_est_configure(void __iomem *ioaddr, struct stmmac_est *cfg, + unsigned int ptp_rate) +{ + u32 speed, total_offset, offset, ctrl, ctr_low; + u32 extcfg = readl(ioaddr + GMAC_EXT_CONFIG); + u32 mac_cfg = readl(ioaddr + GMAC_CONFIG); + int i, ret = 0x0; + u64 total_ctr; + + if (extcfg & GMAC_CONFIG_EIPG_EN) { + offset = (extcfg & GMAC_CONFIG_EIPG) >> GMAC_CONFIG_EIPG_SHIFT; + offset = 104 + (offset * 8); + } else { + offset = (mac_cfg & GMAC_CONFIG_IPG) >> GMAC_CONFIG_IPG_SHIFT; + offset = 96 - (offset * 8); + } + + speed = mac_cfg & (GMAC_CONFIG_PS | GMAC_CONFIG_FES); + speed = speed >> GMAC_CONFIG_FES_SHIFT; + + switch (speed) { + case 0x0: + offset = offset * 1000; /* 1G */ + break; + case 0x1: + offset = offset * 400; /* 2.5G */ + break; + case 0x2: + offset = offset * 100000; /* 10M */ + break; + case 0x3: + offset = offset * 10000; /* 100M */ + break; + default: + return -EINVAL; + } + + offset = offset / 1000; + + ret |= dwmac5_est_write(ioaddr, BTR_LOW, cfg->btr[0], false); + ret |= dwmac5_est_write(ioaddr, BTR_HIGH, cfg->btr[1], false); + ret |= dwmac5_est_write(ioaddr, TER, cfg->ter, false); + ret |= dwmac5_est_write(ioaddr, LLR, cfg->gcl_size, false); + if (ret) + return ret; + + total_offset = 0; + for (i = 0; i < cfg->gcl_size; i++) { + ret = dwmac5_est_write(ioaddr, i, cfg->gcl[i] + offset, true); + if (ret) + return ret; + + total_offset += offset; + } + + total_ctr = cfg->ctr[0] + cfg->ctr[1] * 1000000000; + total_ctr += total_offset; + + ctr_low = do_div(total_ctr, 1000000000); + + ret |= dwmac5_est_write(ioaddr, CTR_LOW, ctr_low, false); + ret |= dwmac5_est_write(ioaddr, CTR_HIGH, total_ctr, false); + if (ret) + return ret; + + ctrl = readl(ioaddr + MTL_EST_CONTROL); + ctrl &= ~PTOV; + ctrl |= ((1000000000 / ptp_rate) * 6) << PTOV_SHIFT; + if (cfg->enable) + ctrl |= EEST | SSWL; + else + ctrl &= ~EEST; + + writel(ctrl, ioaddr + MTL_EST_CONTROL); 
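+ /* EEST enables the EST gate scheduler; SSWL makes the hardware switch to the gate control list written above. */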
+ return 0; +} diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac5.h b/drivers/net/ethernet/stmicro/stmmac/dwmac5.h index 23fecf68f781..70e6d8837dd9 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac5.h +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac5.h @@ -30,6 +30,23 @@ #define MAC_PPSx_INTERVAL(x) (0x00000b88 + ((x) * 0x10)) #define MAC_PPSx_WIDTH(x) (0x00000b8c + ((x) * 0x10)) +#define MTL_EST_CONTROL 0x00000c50 +#define PTOV GENMASK(31, 24) +#define PTOV_SHIFT 24 +#define SSWL BIT(1) +#define EEST BIT(0) +#define MTL_EST_GCL_CONTROL 0x00000c80 +#define BTR_LOW 0x0 +#define BTR_HIGH 0x1 +#define CTR_LOW 0x2 +#define CTR_HIGH 0x3 +#define TER 0x4 +#define LLR 0x5 +#define ADDR_SHIFT 8 +#define GCRR BIT(2) +#define SRWO BIT(0) +#define MTL_EST_GCL_DATA 0x00000c84 + #define MTL_RXP_CONTROL_STATUS 0x00000ca0 #define RXPI BIT(31) #define NPE GENMASK(23, 16) @@ -83,5 +100,7 @@ int dwmac5_rxp_config(void __iomem *ioaddr, struct stmmac_tc_entry *entries, int dwmac5_flex_pps_config(void __iomem *ioaddr, int index, struct stmmac_pps_cfg *cfg, bool enable, u32 sub_second_inc, u32 systime_flags); +int dwmac5_est_configure(void __iomem *ioaddr, struct stmmac_est *cfg, + unsigned int ptp_rate); #endif /* __DWMAC5_H__ */ diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h index 098fbe7a5862..9fd34df6eb16 100644 --- a/drivers/net/ethernet/stmicro/stmmac/hwif.h +++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h @@ -276,6 +276,7 @@ struct stmmac_safety_stats; struct stmmac_tc_entry; struct stmmac_pps_cfg; struct stmmac_rss; +struct stmmac_est; /* Helpers to program the MAC core */ struct stmmac_ops { @@ -373,6 +374,8 @@ struct stmmac_ops { bool en, bool udp, bool sa, bool inv, u32 match); void (*set_arp_offload)(struct mac_device_info *hw, bool en, u32 addr); + int (*est_configure)(void __iomem *ioaddr, struct stmmac_est *cfg, + unsigned int ptp_rate); }; #define stmmac_core_init(__priv, __args...) \ @@ -459,6 +462,8 @@ struct stmmac_ops { stmmac_do_callback(__priv, mac, config_l4_filter, __args) #define stmmac_set_arp_offload(__priv, __args...) \ stmmac_do_void_callback(__priv, mac, set_arp_offload, __args) +#define stmmac_est_configure(__priv, __args...) \ + stmmac_do_callback(__priv, mac, est_configure, __args) /* PTP and HW Timer helpers */ struct stmmac_hwtimestamp { diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h index d4bcd9387136..0531afa9b21e 100644 --- a/include/linux/stmmac.h +++ b/include/linux/stmmac.h @@ -109,6 +109,18 @@ struct stmmac_axi { bool axi_rb; }; +#define EST_GCL 1024 +struct stmmac_est { + int enable; + u32 btr_offset[2]; + u32 btr[2]; + u32 ctr[2]; + u32 ter; + u32 gcl_unaligned[EST_GCL]; + u32 gcl[EST_GCL]; + u32 gcl_size; +}; + struct stmmac_rxq_cfg { u8 mode_to_use; u32 chan; @@ -139,6 +151,7 @@ struct plat_stmmacenet_data { struct device_node *phylink_node; struct device_node *mdio_node; struct stmmac_dma_cfg *dma_cfg; + struct stmmac_est *est; int clk_csr; int has_gmac; int enh_desc; -- cgit v1.2.3-71-gd317 From 9586a992fb752b14ac18fc86fe086b0e3372fff4 Mon Sep 17 00:00:00 2001 From: Petr Machata Date: Wed, 18 Dec 2019 14:55:08 +0000 Subject: net: pkt_cls: Clarify a comment The bit about negating HW backlog left me scratching my head. Clarify the comment. Signed-off-by: Petr Machata Reviewed-by: Ido Schimmel Acked-by: Jiri Pirko Signed-off-by: David S. 
Miller --- include/net/pkt_cls.h | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) (limited to 'include') diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h index e553fc80eb23..a7c5d492bc04 100644 --- a/include/net/pkt_cls.h +++ b/include/net/pkt_cls.h @@ -791,9 +791,8 @@ enum tc_prio_command { struct tc_prio_qopt_offload_params { int bands; u8 priomap[TC_PRIO_MAX + 1]; - /* In case that a prio qdisc is offloaded and now is changed to a - * non-offloadedable config, it needs to update the backlog & qlen - * values to negate the HW backlog & qlen values (and only them). + /* At the point of un-offloading the Qdisc, the reported backlog and + * qlen need to be reduced by the portion that is in HW. */ struct gnet_stats_queue *qstats; }; -- cgit v1.2.3-71-gd317 From dcc68b4d8084e1ac9af0d4022d6b1aff6a139a33 Mon Sep 17 00:00:00 2001 From: Petr Machata Date: Wed, 18 Dec 2019 14:55:13 +0000 Subject: net: sch_ets: Add a new Qdisc Introduces a new Qdisc, which is based on 802.1Q-2014 wording. It is PRIO-like in how it is configured, meaning one needs to specify how many bands there are, how many are strict and how many are dwrr, quanta for the latter, and priomap. The new Qdisc operates like the PRIO / DRR combo would when configured as per the standard. The strict classes, if any, are tried for traffic first. When there's no traffic in any of the strict queues, the ETS ones (if any) are treated in the same way as in DRR. Signed-off-by: Petr Machata Acked-by: Jiri Pirko Signed-off-by: David S. Miller --- include/uapi/linux/pkt_sched.h | 17 + net/sched/Kconfig | 17 + net/sched/Makefile | 1 + net/sched/sch_ets.c | 733 +++++++++++++++++++++++++++++++++++++++++ 4 files changed, 768 insertions(+) create mode 100644 net/sched/sch_ets.c (limited to 'include') diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h index 9f1a72876212..bf5a5b1dfb0b 100644 --- a/include/uapi/linux/pkt_sched.h +++ b/include/uapi/linux/pkt_sched.h @@ -1187,4 +1187,21 @@ enum { #define TCA_TAPRIO_ATTR_MAX (__TCA_TAPRIO_ATTR_MAX - 1) +/* ETS */ + +#define TCQ_ETS_MAX_BANDS 16 + +enum { + TCA_ETS_UNSPEC, + TCA_ETS_NBANDS, /* u8 */ + TCA_ETS_NSTRICT, /* u8 */ + TCA_ETS_QUANTA, /* nested TCA_ETS_QUANTA_BAND */ + TCA_ETS_QUANTA_BAND, /* u32 */ + TCA_ETS_PRIOMAP, /* nested TCA_ETS_PRIOMAP_BAND */ + TCA_ETS_PRIOMAP_BAND, /* u8 */ + __TCA_ETS_MAX, +}; + +#define TCA_ETS_MAX (__TCA_ETS_MAX - 1) + #endif diff --git a/net/sched/Kconfig b/net/sched/Kconfig index 2985509147a2..b1e7ec726958 100644 --- a/net/sched/Kconfig +++ b/net/sched/Kconfig @@ -409,6 +409,23 @@ config NET_SCH_PLUG To compile this code as a module, choose M here: the module will be called sch_plug. +config NET_SCH_ETS + tristate "Enhanced transmission selection scheduler (ETS)" + help + The Enhanced Transmission Selection scheduler is a classful + queuing discipline that merges functionality of PRIO and DRR + qdiscs in one scheduler. ETS makes it easy to configure a set of + strict and bandwidth-sharing bands to implement the transmission + selection described in 802.1Qaz. + + Say Y here if you want to use the ETS packet scheduling + algorithm. + + To compile this driver as a module, choose M here: the module + will be called sch_ets. + + If unsure, say N. 
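+
+	  A usage sketch (illustrative only; see tc-ets(8) for the exact
+	  syntax): two strict bands served ahead of three bandwidth-sharing
+	  bands with 4000/3000/2000 byte quanta:
+
+	    tc qdisc add dev eth0 root handle 1: ets strict 2 \
+	        quanta 4000 3000 2000 priomap 0 1 2 2 3 3 4 4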
+ menuconfig NET_SCH_DEFAULT bool "Allow override default queue discipline" ---help--- diff --git a/net/sched/Makefile b/net/sched/Makefile index 415d1e1f237e..bc8856b865ff 100644 --- a/net/sched/Makefile +++ b/net/sched/Makefile @@ -48,6 +48,7 @@ obj-$(CONFIG_NET_SCH_ATM) += sch_atm.o obj-$(CONFIG_NET_SCH_NETEM) += sch_netem.o obj-$(CONFIG_NET_SCH_DRR) += sch_drr.o obj-$(CONFIG_NET_SCH_PLUG) += sch_plug.o +obj-$(CONFIG_NET_SCH_ETS) += sch_ets.o obj-$(CONFIG_NET_SCH_MQPRIO) += sch_mqprio.o obj-$(CONFIG_NET_SCH_SKBPRIO) += sch_skbprio.o obj-$(CONFIG_NET_SCH_CHOKE) += sch_choke.o diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c new file mode 100644 index 000000000000..e6194b23e9b0 --- /dev/null +++ b/net/sched/sch_ets.c @@ -0,0 +1,733 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * net/sched/sch_ets.c Enhanced Transmission Selection scheduler + * + * Description + * ----------- + * + * The Enhanced Transmission Selection scheduler is a classful queuing + * discipline that merges functionality of PRIO and DRR qdiscs in one scheduler. + * ETS makes it easy to configure a set of strict and bandwidth-sharing bands to + * implement the transmission selection described in 802.1Qaz. + * + * Although ETS is technically classful, it's not possible to add and remove + * classes at will. Instead one specifies number of classes, how many are + * PRIO-like and how many DRR-like, and quanta for the latter. + * + * Algorithm + * --------- + * + * The strict classes, if any, are tried for traffic first: first band 0, if it + * has no traffic then band 1, etc. + * + * When there is no traffic in any of the strict queues, the bandwidth-sharing + * ones are tried next. Each band is assigned a deficit counter, initialized to + * "quantum" of that band. ETS maintains a list of active bandwidth-sharing + * bands whose qdiscs are non-empty. A packet is dequeued from the band at the + * head of the list if the packet size is smaller or equal to the deficit + * counter. If the counter is too small, it is increased by "quantum" and the + * scheduler moves on to the next band in the active list. + */ + +#include +#include +#include +#include +#include +#include + +struct ets_class { + struct list_head alist; /* In struct ets_sched.active. 
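Only bandwidth-sharing bands with packets queued are linked in here.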
*/ + struct Qdisc *qdisc; + u32 quantum; + u32 deficit; + struct gnet_stats_basic_packed bstats; + struct gnet_stats_queue qstats; +}; + +struct ets_sched { + struct list_head active; + struct tcf_proto __rcu *filter_list; + struct tcf_block *block; + unsigned int nbands; + unsigned int nstrict; + u8 prio2band[TC_PRIO_MAX + 1]; + struct ets_class classes[TCQ_ETS_MAX_BANDS]; +}; + +static const struct nla_policy ets_policy[TCA_ETS_MAX + 1] = { + [TCA_ETS_NBANDS] = { .type = NLA_U8 }, + [TCA_ETS_NSTRICT] = { .type = NLA_U8 }, + [TCA_ETS_QUANTA] = { .type = NLA_NESTED }, + [TCA_ETS_PRIOMAP] = { .type = NLA_NESTED }, +}; + +static const struct nla_policy ets_priomap_policy[TCA_ETS_MAX + 1] = { + [TCA_ETS_PRIOMAP_BAND] = { .type = NLA_U8 }, +}; + +static const struct nla_policy ets_quanta_policy[TCA_ETS_MAX + 1] = { + [TCA_ETS_QUANTA_BAND] = { .type = NLA_U32 }, +}; + +static const struct nla_policy ets_class_policy[TCA_ETS_MAX + 1] = { + [TCA_ETS_QUANTA_BAND] = { .type = NLA_U32 }, +}; + +static int ets_quantum_parse(struct Qdisc *sch, const struct nlattr *attr, + unsigned int *quantum, + struct netlink_ext_ack *extack) +{ + *quantum = nla_get_u32(attr); + if (!*quantum) { + NL_SET_ERR_MSG(extack, "ETS quantum cannot be zero"); + return -EINVAL; + } + return 0; +} + +static struct ets_class * +ets_class_from_arg(struct Qdisc *sch, unsigned long arg) +{ + struct ets_sched *q = qdisc_priv(sch); + + return &q->classes[arg - 1]; +} + +static u32 ets_class_id(struct Qdisc *sch, const struct ets_class *cl) +{ + struct ets_sched *q = qdisc_priv(sch); + int band = cl - q->classes; + + return TC_H_MAKE(sch->handle, band + 1); +} + +static bool ets_class_is_strict(struct ets_sched *q, const struct ets_class *cl) +{ + unsigned int band = cl - q->classes; + + return band < q->nstrict; +} + +static int ets_class_change(struct Qdisc *sch, u32 classid, u32 parentid, + struct nlattr **tca, unsigned long *arg, + struct netlink_ext_ack *extack) +{ + struct ets_class *cl = ets_class_from_arg(sch, *arg); + struct ets_sched *q = qdisc_priv(sch); + struct nlattr *opt = tca[TCA_OPTIONS]; + struct nlattr *tb[TCA_ETS_MAX + 1]; + unsigned int quantum; + int err; + + /* Classes can be added and removed only through Qdisc_ops.change + * interface. + */ + if (!cl) { + NL_SET_ERR_MSG(extack, "Fine-grained class addition and removal is not supported"); + return -EOPNOTSUPP; + } + + if (!opt) { + NL_SET_ERR_MSG(extack, "ETS options are required for this operation"); + return -EINVAL; + } + + err = nla_parse_nested(tb, TCA_ETS_MAX, opt, ets_class_policy, extack); + if (err < 0) + return err; + + if (!tb[TCA_ETS_QUANTA_BAND]) + /* Nothing to configure. 
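The quantum is the only per-class parameter that can change.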
*/ + return 0; + + if (ets_class_is_strict(q, cl)) { + NL_SET_ERR_MSG(extack, "Strict bands do not have a configurable quantum"); + return -EINVAL; + } + + err = ets_quantum_parse(sch, tb[TCA_ETS_QUANTA_BAND], &quantum, + extack); + if (err) + return err; + + sch_tree_lock(sch); + cl->quantum = quantum; + sch_tree_unlock(sch); + return 0; +} + +static int ets_class_graft(struct Qdisc *sch, unsigned long arg, + struct Qdisc *new, struct Qdisc **old, + struct netlink_ext_ack *extack) +{ + struct ets_class *cl = ets_class_from_arg(sch, arg); + + if (!new) { + new = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops, + ets_class_id(sch, cl), NULL); + if (!new) + new = &noop_qdisc; + else + qdisc_hash_add(new, true); + } + + *old = qdisc_replace(sch, new, &cl->qdisc); + return 0; +} + +static struct Qdisc *ets_class_leaf(struct Qdisc *sch, unsigned long arg) +{ + struct ets_class *cl = ets_class_from_arg(sch, arg); + + return cl->qdisc; +} + +static unsigned long ets_class_find(struct Qdisc *sch, u32 classid) +{ + unsigned long band = TC_H_MIN(classid); + struct ets_sched *q = qdisc_priv(sch); + + if (band - 1 >= q->nbands) + return 0; + return band; +} + +static void ets_class_qlen_notify(struct Qdisc *sch, unsigned long arg) +{ + struct ets_class *cl = ets_class_from_arg(sch, arg); + struct ets_sched *q = qdisc_priv(sch); + + /* We get notified about zero-length child Qdiscs as well if they are + * offloaded. Those aren't on the active list though, so don't attempt + * to remove them. + */ + if (!ets_class_is_strict(q, cl) && sch->q.qlen) + list_del(&cl->alist); +} + +static int ets_class_dump(struct Qdisc *sch, unsigned long arg, + struct sk_buff *skb, struct tcmsg *tcm) +{ + struct ets_class *cl = ets_class_from_arg(sch, arg); + struct ets_sched *q = qdisc_priv(sch); + struct nlattr *nest; + + tcm->tcm_parent = TC_H_ROOT; + tcm->tcm_handle = ets_class_id(sch, cl); + tcm->tcm_info = cl->qdisc->handle; + + nest = nla_nest_start_noflag(skb, TCA_OPTIONS); + if (!nest) + goto nla_put_failure; + if (!ets_class_is_strict(q, cl)) { + if (nla_put_u32(skb, TCA_ETS_QUANTA_BAND, cl->quantum)) + goto nla_put_failure; + } + return nla_nest_end(skb, nest); + +nla_put_failure: + nla_nest_cancel(skb, nest); + return -EMSGSIZE; +} + +static int ets_class_dump_stats(struct Qdisc *sch, unsigned long arg, + struct gnet_dump *d) +{ + struct ets_class *cl = ets_class_from_arg(sch, arg); + struct Qdisc *cl_q = cl->qdisc; + + if (gnet_stats_copy_basic(qdisc_root_sleeping_running(sch), + d, NULL, &cl_q->bstats) < 0 || + qdisc_qstats_copy(d, cl_q) < 0) + return -1; + + return 0; +} + +static void ets_qdisc_walk(struct Qdisc *sch, struct qdisc_walker *arg) +{ + struct ets_sched *q = qdisc_priv(sch); + int i; + + if (arg->stop) + return; + + for (i = 0; i < q->nbands; i++) { + if (arg->count < arg->skip) { + arg->count++; + continue; + } + if (arg->fn(sch, i + 1, arg) < 0) { + arg->stop = 1; + break; + } + arg->count++; + } +} + +static struct tcf_block * +ets_qdisc_tcf_block(struct Qdisc *sch, unsigned long cl, + struct netlink_ext_ack *extack) +{ + struct ets_sched *q = qdisc_priv(sch); + + if (cl) { + NL_SET_ERR_MSG(extack, "ETS classid must be zero"); + return NULL; + } + + return q->block; +} + +static unsigned long ets_qdisc_bind_tcf(struct Qdisc *sch, unsigned long parent, + u32 classid) +{ + return ets_class_find(sch, classid); +} + +static void ets_qdisc_unbind_tcf(struct Qdisc *sch, unsigned long arg) +{ +} + +static struct ets_class *ets_classify(struct sk_buff *skb, struct Qdisc *sch, + int *qerr) +{ + 
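/* Pick a band: skb->priority can select one directly; otherwise any attached tc filters decide, with the priomap as the fallback. */ +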
struct ets_sched *q = qdisc_priv(sch); + u32 band = skb->priority; + struct tcf_result res; + struct tcf_proto *fl; + int err; + + *qerr = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS; + if (TC_H_MAJ(skb->priority) != sch->handle) { + fl = rcu_dereference_bh(q->filter_list); + err = tcf_classify(skb, fl, &res, false); +#ifdef CONFIG_NET_CLS_ACT + switch (err) { + case TC_ACT_STOLEN: + case TC_ACT_QUEUED: + case TC_ACT_TRAP: + *qerr = NET_XMIT_SUCCESS | __NET_XMIT_STOLEN; + /* fall through */ + case TC_ACT_SHOT: + return NULL; + } +#endif + if (!fl || err < 0) { + if (TC_H_MAJ(band)) + band = 0; + return &q->classes[q->prio2band[band & TC_PRIO_MAX]]; + } + band = res.classid; + } + band = TC_H_MIN(band) - 1; + if (band >= q->nbands) + return &q->classes[q->prio2band[0]]; + return &q->classes[band]; +} + +static int ets_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch, + struct sk_buff **to_free) +{ + unsigned int len = qdisc_pkt_len(skb); + struct ets_sched *q = qdisc_priv(sch); + struct ets_class *cl; + int err = 0; + bool first; + + cl = ets_classify(skb, sch, &err); + if (!cl) { + if (err & __NET_XMIT_BYPASS) + qdisc_qstats_drop(sch); + __qdisc_drop(skb, to_free); + return err; + } + + first = !cl->qdisc->q.qlen; + err = qdisc_enqueue(skb, cl->qdisc, to_free); + if (unlikely(err != NET_XMIT_SUCCESS)) { + if (net_xmit_drop_count(err)) { + cl->qstats.drops++; + qdisc_qstats_drop(sch); + } + return err; + } + + if (first && !ets_class_is_strict(q, cl)) { + list_add_tail(&cl->alist, &q->active); + cl->deficit = cl->quantum; + } + + sch->qstats.backlog += len; + sch->q.qlen++; + return err; +} + +static struct sk_buff * +ets_qdisc_dequeue_skb(struct Qdisc *sch, struct sk_buff *skb) +{ + qdisc_bstats_update(sch, skb); + qdisc_qstats_backlog_dec(sch, skb); + sch->q.qlen--; + return skb; +} + +static struct sk_buff *ets_qdisc_dequeue(struct Qdisc *sch) +{ + struct ets_sched *q = qdisc_priv(sch); + struct ets_class *cl; + struct sk_buff *skb; + unsigned int band; + unsigned int len; + + while (1) { + for (band = 0; band < q->nstrict; band++) { + cl = &q->classes[band]; + skb = qdisc_dequeue_peeked(cl->qdisc); + if (skb) + return ets_qdisc_dequeue_skb(sch, skb); + } + + if (list_empty(&q->active)) + goto out; + + cl = list_first_entry(&q->active, struct ets_class, alist); + skb = cl->qdisc->ops->peek(cl->qdisc); + if (!skb) { + qdisc_warn_nonwc(__func__, cl->qdisc); + goto out; + } + + len = qdisc_pkt_len(skb); + if (len <= cl->deficit) { + cl->deficit -= len; + skb = qdisc_dequeue_peeked(cl->qdisc); + if (unlikely(!skb)) + goto out; + if (cl->qdisc->q.qlen == 0) + list_del(&cl->alist); + return ets_qdisc_dequeue_skb(sch, skb); + } + + cl->deficit += cl->quantum; + list_move_tail(&cl->alist, &q->active); + } +out: + return NULL; +} + +static int ets_qdisc_priomap_parse(struct nlattr *priomap_attr, + unsigned int nbands, u8 *priomap, + struct netlink_ext_ack *extack) +{ + const struct nlattr *attr; + int prio = 0; + u8 band; + int rem; + int err; + + err = __nla_validate_nested(priomap_attr, TCA_ETS_MAX, + ets_priomap_policy, NL_VALIDATE_STRICT, + extack); + if (err) + return err; + + nla_for_each_nested(attr, priomap_attr, rem) { + switch (nla_type(attr)) { + case TCA_ETS_PRIOMAP_BAND: + if (prio > TC_PRIO_MAX) { + NL_SET_ERR_MSG_MOD(extack, "Too many priorities in ETS priomap"); + return -EINVAL; + } + band = nla_get_u8(attr); + if (band >= nbands) { + NL_SET_ERR_MSG_MOD(extack, "Invalid band number in ETS priomap"); + return -EINVAL; + } + priomap[prio++] = band; + break; + default: + 
WARN_ON_ONCE(1); /* Validate should have caught this. */ + return -EINVAL; + } + } + + return 0; +} + +static int ets_qdisc_quanta_parse(struct Qdisc *sch, struct nlattr *quanta_attr, + unsigned int nbands, unsigned int nstrict, + unsigned int *quanta, + struct netlink_ext_ack *extack) +{ + const struct nlattr *attr; + int band = nstrict; + int rem; + int err; + + err = __nla_validate_nested(quanta_attr, TCA_ETS_MAX, + ets_quanta_policy, NL_VALIDATE_STRICT, + extack); + if (err < 0) + return err; + + nla_for_each_nested(attr, quanta_attr, rem) { + switch (nla_type(attr)) { + case TCA_ETS_QUANTA_BAND: + if (band >= nbands) { + NL_SET_ERR_MSG_MOD(extack, "ETS quanta has more values than bands"); + return -EINVAL; + } + err = ets_quantum_parse(sch, attr, &quanta[band++], + extack); + if (err) + return err; + break; + default: + WARN_ON_ONCE(1); /* Validate should have caught this. */ + return -EINVAL; + } + } + + return 0; +} + +static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt, + struct netlink_ext_ack *extack) +{ + unsigned int quanta[TCQ_ETS_MAX_BANDS] = {0}; + struct Qdisc *queues[TCQ_ETS_MAX_BANDS]; + struct ets_sched *q = qdisc_priv(sch); + struct nlattr *tb[TCA_ETS_MAX + 1]; + unsigned int oldbands = q->nbands; + u8 priomap[TC_PRIO_MAX + 1]; + unsigned int nstrict = 0; + unsigned int nbands; + unsigned int i; + int err; + + if (!opt) { + NL_SET_ERR_MSG(extack, "ETS options are required for this operation"); + return -EINVAL; + } + + err = nla_parse_nested(tb, TCA_ETS_MAX, opt, ets_policy, extack); + if (err < 0) + return err; + + if (!tb[TCA_ETS_NBANDS]) { + NL_SET_ERR_MSG_MOD(extack, "Number of bands is a required argument"); + return -EINVAL; + } + nbands = nla_get_u8(tb[TCA_ETS_NBANDS]); + if (nbands < 1 || nbands > TCQ_ETS_MAX_BANDS) { + NL_SET_ERR_MSG_MOD(extack, "Invalid number of bands"); + return -EINVAL; + } + /* Unless overridden, traffic goes to the last band. */ + memset(priomap, nbands - 1, sizeof(priomap)); + + if (tb[TCA_ETS_NSTRICT]) { + nstrict = nla_get_u8(tb[TCA_ETS_NSTRICT]); + if (nstrict > nbands) { + NL_SET_ERR_MSG_MOD(extack, "Invalid number of strict bands"); + return -EINVAL; + } + } + + if (tb[TCA_ETS_PRIOMAP]) { + err = ets_qdisc_priomap_parse(tb[TCA_ETS_PRIOMAP], + nbands, priomap, extack); + if (err) + return err; + } + + if (tb[TCA_ETS_QUANTA]) { + err = ets_qdisc_quanta_parse(sch, tb[TCA_ETS_QUANTA], + nbands, nstrict, quanta, extack); + if (err) + return err; + } + /* If there are more bands than strict + quanta provided, the remaining + * ones are ETS with quantum of MTU. Initialize the missing values here. 
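+ * One MTU per round is the smallest quantum that still guarantees forward progress for every bandwidth-sharing band.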
+ */ + for (i = nstrict; i < nbands; i++) { + if (!quanta[i]) + quanta[i] = psched_mtu(qdisc_dev(sch)); + } + + /* Before commit, make sure we can allocate all new qdiscs */ + for (i = oldbands; i < nbands; i++) { + queues[i] = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops, + ets_class_id(sch, &q->classes[i]), + extack); + if (!queues[i]) { + while (i > oldbands) + qdisc_put(queues[--i]); + return -ENOMEM; + } + } + + sch_tree_lock(sch); + + q->nbands = nbands; + q->nstrict = nstrict; + memcpy(q->prio2band, priomap, sizeof(priomap)); + + for (i = q->nbands; i < oldbands; i++) + qdisc_tree_flush_backlog(q->classes[i].qdisc); + + for (i = 0; i < q->nbands; i++) + q->classes[i].quantum = quanta[i]; + + for (i = oldbands; i < q->nbands; i++) { + q->classes[i].qdisc = queues[i]; + if (q->classes[i].qdisc != &noop_qdisc) + qdisc_hash_add(q->classes[i].qdisc, true); + } + + sch_tree_unlock(sch); + + for (i = q->nbands; i < oldbands; i++) { + qdisc_put(q->classes[i].qdisc); + memset(&q->classes[i], 0, sizeof(q->classes[i])); + } + return 0; +} + +static int ets_qdisc_init(struct Qdisc *sch, struct nlattr *opt, + struct netlink_ext_ack *extack) +{ + struct ets_sched *q = qdisc_priv(sch); + int err; + + if (!opt) + return -EINVAL; + + err = tcf_block_get(&q->block, &q->filter_list, sch, extack); + if (err) + return err; + + INIT_LIST_HEAD(&q->active); + return ets_qdisc_change(sch, opt, extack); +} + +static void ets_qdisc_reset(struct Qdisc *sch) +{ + struct ets_sched *q = qdisc_priv(sch); + int band; + + for (band = q->nstrict; band < q->nbands; band++) { + if (q->classes[band].qdisc->q.qlen) + list_del(&q->classes[band].alist); + } + for (band = 0; band < q->nbands; band++) + qdisc_reset(q->classes[band].qdisc); + sch->qstats.backlog = 0; + sch->q.qlen = 0; +} + +static void ets_qdisc_destroy(struct Qdisc *sch) +{ + struct ets_sched *q = qdisc_priv(sch); + int band; + + tcf_block_put(q->block); + for (band = 0; band < q->nbands; band++) + qdisc_put(q->classes[band].qdisc); +} + +static int ets_qdisc_dump(struct Qdisc *sch, struct sk_buff *skb) +{ + struct ets_sched *q = qdisc_priv(sch); + struct nlattr *opts; + struct nlattr *nest; + int band; + int prio; + + opts = nla_nest_start_noflag(skb, TCA_OPTIONS); + if (!opts) + goto nla_err; + + if (nla_put_u8(skb, TCA_ETS_NBANDS, q->nbands)) + goto nla_err; + + if (q->nstrict && + nla_put_u8(skb, TCA_ETS_NSTRICT, q->nstrict)) + goto nla_err; + + if (q->nbands > q->nstrict) { + nest = nla_nest_start(skb, TCA_ETS_QUANTA); + if (!nest) + goto nla_err; + + for (band = q->nstrict; band < q->nbands; band++) { + if (nla_put_u32(skb, TCA_ETS_QUANTA_BAND, + q->classes[band].quantum)) + goto nla_err; + } + + nla_nest_end(skb, nest); + } + + nest = nla_nest_start(skb, TCA_ETS_PRIOMAP); + if (!nest) + goto nla_err; + + for (prio = 0; prio <= TC_PRIO_MAX; prio++) { + if (nla_put_u8(skb, TCA_ETS_PRIOMAP_BAND, q->prio2band[prio])) + goto nla_err; + } + + nla_nest_end(skb, nest); + + return nla_nest_end(skb, opts); + +nla_err: + nla_nest_cancel(skb, opts); + return -EMSGSIZE; +} + +static const struct Qdisc_class_ops ets_class_ops = { + .change = ets_class_change, + .graft = ets_class_graft, + .leaf = ets_class_leaf, + .find = ets_class_find, + .qlen_notify = ets_class_qlen_notify, + .dump = ets_class_dump, + .dump_stats = ets_class_dump_stats, + .walk = ets_qdisc_walk, + .tcf_block = ets_qdisc_tcf_block, + .bind_tcf = ets_qdisc_bind_tcf, + .unbind_tcf = ets_qdisc_unbind_tcf, +}; + +static struct Qdisc_ops ets_qdisc_ops __read_mostly = { + .cl_ops = 
&ets_class_ops, + .id = "ets", + .priv_size = sizeof(struct ets_sched), + .enqueue = ets_qdisc_enqueue, + .dequeue = ets_qdisc_dequeue, + .peek = qdisc_peek_dequeued, + .change = ets_qdisc_change, + .init = ets_qdisc_init, + .reset = ets_qdisc_reset, + .destroy = ets_qdisc_destroy, + .dump = ets_qdisc_dump, + .owner = THIS_MODULE, +}; + +static int __init ets_init(void) +{ + return register_qdisc(&ets_qdisc_ops); +} + +static void __exit ets_exit(void) +{ + unregister_qdisc(&ets_qdisc_ops); +} + +module_init(ets_init); +module_exit(ets_exit); +MODULE_LICENSE("GPL"); -- cgit v1.2.3-71-gd317 From d35eb52bd2ac7557b62bda52668f2e64dde2cf90 Mon Sep 17 00:00:00 2001 From: Petr Machata Date: Wed, 18 Dec 2019 14:55:15 +0000 Subject: net: sch_ets: Make the ETS qdisc offloadable Add hooks at appropriate points to make it possible to offload the ETS Qdisc. Signed-off-by: Petr Machata Acked-by: Jiri Pirko Reviewed-by: Ido Schimmel Signed-off-by: David S. Miller --- include/linux/netdevice.h | 1 + include/net/pkt_cls.h | 31 ++++++++++++++++ net/sched/sch_ets.c | 95 +++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 127 insertions(+) (limited to 'include') diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 30745068fb39..7a8ed11f5d45 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -849,6 +849,7 @@ enum tc_setup_type { TC_SETUP_QDISC_GRED, TC_SETUP_QDISC_TAPRIO, TC_SETUP_FT, + TC_SETUP_QDISC_ETS, }; /* These structures hold the attributes of bpf state that are being passed diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h index a7c5d492bc04..47b115e2012a 100644 --- a/include/net/pkt_cls.h +++ b/include/net/pkt_cls.h @@ -823,4 +823,35 @@ struct tc_root_qopt_offload { bool ingress; }; +enum tc_ets_command { + TC_ETS_REPLACE, + TC_ETS_DESTROY, + TC_ETS_STATS, + TC_ETS_GRAFT, +}; + +struct tc_ets_qopt_offload_replace_params { + unsigned int bands; + u8 priomap[TC_PRIO_MAX + 1]; + unsigned int quanta[TCQ_ETS_MAX_BANDS]; /* 0 for strict bands. 
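A zero quantum is how a driver tells strict bands apart from the bandwidth-sharing ones.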
*/ + unsigned int weights[TCQ_ETS_MAX_BANDS]; + struct gnet_stats_queue *qstats; +}; + +struct tc_ets_qopt_offload_graft_params { + u8 band; + u32 child_handle; +}; + +struct tc_ets_qopt_offload { + enum tc_ets_command command; + u32 handle; + u32 parent; + union { + struct tc_ets_qopt_offload_replace_params replace_params; + struct tc_qopt_offload_stats stats; + struct tc_ets_qopt_offload_graft_params graft_params; + }; +}; + #endif diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c index e6194b23e9b0..a87e9159338c 100644 --- a/net/sched/sch_ets.c +++ b/net/sched/sch_ets.c @@ -102,6 +102,91 @@ static u32 ets_class_id(struct Qdisc *sch, const struct ets_class *cl) return TC_H_MAKE(sch->handle, band + 1); } +static void ets_offload_change(struct Qdisc *sch) +{ + struct net_device *dev = qdisc_dev(sch); + struct ets_sched *q = qdisc_priv(sch); + struct tc_ets_qopt_offload qopt; + unsigned int w_psum_prev = 0; + unsigned int q_psum = 0; + unsigned int q_sum = 0; + unsigned int quantum; + unsigned int w_psum; + unsigned int weight; + unsigned int i; + + if (!tc_can_offload(dev) || !dev->netdev_ops->ndo_setup_tc) + return; + + qopt.command = TC_ETS_REPLACE; + qopt.handle = sch->handle; + qopt.parent = sch->parent; + qopt.replace_params.bands = q->nbands; + qopt.replace_params.qstats = &sch->qstats; + memcpy(&qopt.replace_params.priomap, + q->prio2band, sizeof(q->prio2band)); + + for (i = 0; i < q->nbands; i++) + q_sum += q->classes[i].quantum; + + for (i = 0; i < q->nbands; i++) { + quantum = q->classes[i].quantum; + q_psum += quantum; + w_psum = quantum ? q_psum * 100 / q_sum : 0; + weight = w_psum - w_psum_prev; + w_psum_prev = w_psum; + + qopt.replace_params.quanta[i] = quantum; + qopt.replace_params.weights[i] = weight; + } + + dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_ETS, &qopt); +} + +static void ets_offload_destroy(struct Qdisc *sch) +{ + struct net_device *dev = qdisc_dev(sch); + struct tc_ets_qopt_offload qopt; + + if (!tc_can_offload(dev) || !dev->netdev_ops->ndo_setup_tc) + return; + + qopt.command = TC_ETS_DESTROY; + qopt.handle = sch->handle; + qopt.parent = sch->parent; + dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_ETS, &qopt); +} + +static void ets_offload_graft(struct Qdisc *sch, struct Qdisc *new, + struct Qdisc *old, unsigned long arg, + struct netlink_ext_ack *extack) +{ + struct net_device *dev = qdisc_dev(sch); + struct tc_ets_qopt_offload qopt; + + qopt.command = TC_ETS_GRAFT; + qopt.handle = sch->handle; + qopt.parent = sch->parent; + qopt.graft_params.band = arg - 1; + qopt.graft_params.child_handle = new->handle; + + qdisc_offload_graft_helper(dev, sch, new, old, TC_SETUP_QDISC_ETS, + &qopt, extack); +} + +static int ets_offload_dump(struct Qdisc *sch) +{ + struct tc_ets_qopt_offload qopt; + + qopt.command = TC_ETS_STATS; + qopt.handle = sch->handle; + qopt.parent = sch->parent; + qopt.stats.bstats = &sch->bstats; + qopt.stats.qstats = &sch->qstats; + + return qdisc_offload_dump_helper(sch, TC_SETUP_QDISC_ETS, &qopt); +} + static bool ets_class_is_strict(struct ets_sched *q, const struct ets_class *cl) { unsigned int band = cl - q->classes; @@ -154,6 +239,8 @@ static int ets_class_change(struct Qdisc *sch, u32 classid, u32 parentid, sch_tree_lock(sch); cl->quantum = quantum; sch_tree_unlock(sch); + + ets_offload_change(sch); return 0; } @@ -173,6 +260,7 @@ static int ets_class_graft(struct Qdisc *sch, unsigned long arg, } *old = qdisc_replace(sch, new, &cl->qdisc); + ets_offload_graft(sch, new, *old, arg, extack); return 0; } @@ -589,6 +677,7 @@ 
static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt, sch_tree_unlock(sch); + ets_offload_change(sch); for (i = q->nbands; i < oldbands; i++) { qdisc_put(q->classes[i].qdisc); memset(&q->classes[i], 0, sizeof(q->classes[i])); @@ -633,6 +722,7 @@ static void ets_qdisc_destroy(struct Qdisc *sch) struct ets_sched *q = qdisc_priv(sch); int band; + ets_offload_destroy(sch); tcf_block_put(q->block); for (band = 0; band < q->nbands; band++) qdisc_put(q->classes[band].qdisc); @@ -645,6 +735,11 @@ static int ets_qdisc_dump(struct Qdisc *sch, struct sk_buff *skb) struct nlattr *nest; int band; int prio; + int err; + + err = ets_offload_dump(sch); + if (err) + return err; opts = nla_nest_start_noflag(skb, TCA_OPTIONS); if (!opts) -- cgit v1.2.3-71-gd317 From 2a10ab043ac5a658225ee77852db7942de9ac4c5 Mon Sep 17 00:00:00 2001 From: Russell King Date: Tue, 17 Dec 2019 13:39:11 +0000 Subject: net: phy: add genphy_check_and_restart_aneg() Add a helper for restarting autonegotiation(), similar to the clause 45 variant. Use it in __genphy_config_aneg() Signed-off-by: Russell King Reviewed-by: Andrew Lunn Reviewed-by: Florian Fainelli Signed-off-by: David S. Miller --- drivers/net/phy/phy_device.c | 48 ++++++++++++++++++++++++++++---------------- include/linux/phy.h | 1 + 2 files changed, 32 insertions(+), 17 deletions(-) (limited to 'include') diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c index acbdced20c11..c61f2295e21e 100644 --- a/drivers/net/phy/phy_device.c +++ b/drivers/net/phy/phy_device.c @@ -1770,6 +1770,36 @@ int genphy_restart_aneg(struct phy_device *phydev) } EXPORT_SYMBOL(genphy_restart_aneg); +/** + * genphy_check_and_restart_aneg - Enable and restart auto-negotiation + * @phydev: target phy_device struct + * @restart: whether aneg restart is requested + * + * Check, and restart auto-negotiation if needed. + */ +int genphy_check_and_restart_aneg(struct phy_device *phydev, bool restart) +{ + int ret = 0; + + if (!restart) { + /* Advertisement hasn't changed, but maybe aneg was never on to + * begin with? Or maybe phy was isolated? + */ + ret = phy_read(phydev, MII_BMCR); + if (ret < 0) + return ret; + + if (!(ret & BMCR_ANENABLE) || (ret & BMCR_ISOLATE)) + restart = true; + } + + if (restart) + ret = genphy_restart_aneg(phydev); + + return ret; +} +EXPORT_SYMBOL(genphy_check_and_restart_aneg); + /** * __genphy_config_aneg - restart auto-negotiation or write BMCR * @phydev: target phy_device struct @@ -1795,23 +1825,7 @@ int __genphy_config_aneg(struct phy_device *phydev, bool changed) else if (err) changed = true; - if (!changed) { - /* Advertisement hasn't changed, but maybe aneg was never on to - * begin with? Or maybe phy was isolated? - */ - int ctl = phy_read(phydev, MII_BMCR); - - if (ctl < 0) - return ctl; - - if (!(ctl & BMCR_ANENABLE) || (ctl & BMCR_ISOLATE)) - changed = true; /* do restart aneg */ - } - - /* Only restart aneg if we are advertising something different - * than we were before. - */ - return changed ? 
genphy_restart_aneg(phydev) : 0; + return genphy_check_and_restart_aneg(phydev, changed); } EXPORT_SYMBOL(__genphy_config_aneg); diff --git a/include/linux/phy.h b/include/linux/phy.h index 5032d453ac66..1c4f97d2631d 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -1094,6 +1094,7 @@ void phy_attached_info(struct phy_device *phydev); int genphy_read_abilities(struct phy_device *phydev); int genphy_setup_forced(struct phy_device *phydev); int genphy_restart_aneg(struct phy_device *phydev); +int genphy_check_and_restart_aneg(struct phy_device *phydev, bool restart); int genphy_config_eee_advert(struct phy_device *phydev); int __genphy_config_aneg(struct phy_device *phydev, bool changed); int genphy_aneg_done(struct phy_device *phydev); -- cgit v1.2.3-71-gd317 From 0efc286a923874f0c243e5766cce54e9429ed949 Mon Sep 17 00:00:00 2001 From: Russell King Date: Tue, 17 Dec 2019 13:39:16 +0000 Subject: net: phy: provide and use genphy_read_status_fixed() There are two drivers and generic code which contain exactly the same code to read the status of a PHY operating without autonegotiation enabled. Rather than duplicate this code, provide a helper to read this information. Signed-off-by: Russell King Reviewed-by: Andrew Lunn Reviewed-by: Florian Fainelli Signed-off-by: David S. Miller --- drivers/net/phy/lxt.c | 19 +++-------------- drivers/net/phy/marvell.c | 19 ++++------------- drivers/net/phy/phy_device.c | 49 +++++++++++++++++++++++++++++--------------- include/linux/phy.h | 1 + 4 files changed, 41 insertions(+), 47 deletions(-) (limited to 'include') diff --git a/drivers/net/phy/lxt.c b/drivers/net/phy/lxt.c index 30172852e36e..fec58ad69e02 100644 --- a/drivers/net/phy/lxt.c +++ b/drivers/net/phy/lxt.c @@ -192,22 +192,9 @@ static int lxt973a2_read_status(struct phy_device *phydev) phy_resolve_aneg_pause(phydev); } else { - int bmcr = phy_read(phydev, MII_BMCR); - - if (bmcr < 0) - return bmcr; - - if (bmcr & BMCR_FULLDPLX) - phydev->duplex = DUPLEX_FULL; - else - phydev->duplex = DUPLEX_HALF; - - if (bmcr & BMCR_SPEED1000) - phydev->speed = SPEED_1000; - else if (bmcr & BMCR_SPEED100) - phydev->speed = SPEED_100; - else - phydev->speed = SPEED_10; + err = genphy_read_status_fixed(phydev); + if (err < 0) + return err; phydev->pause = phydev->asym_pause = 0; linkmode_zero(phydev->lp_advertising); diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c index d807c3e5cc56..5e49639dc8dd 100644 --- a/drivers/net/phy/marvell.c +++ b/drivers/net/phy/marvell.c @@ -1407,22 +1407,11 @@ static int marvell_read_status_page_an(struct phy_device *phydev, static int marvell_read_status_page_fixed(struct phy_device *phydev) { - int bmcr = phy_read(phydev, MII_BMCR); - - if (bmcr < 0) - return bmcr; - - if (bmcr & BMCR_FULLDPLX) - phydev->duplex = DUPLEX_FULL; - else - phydev->duplex = DUPLEX_HALF; + int err; - if (bmcr & BMCR_SPEED1000) - phydev->speed = SPEED_1000; - else if (bmcr & BMCR_SPEED100) - phydev->speed = SPEED_100; - else - phydev->speed = SPEED_10; + err = genphy_read_status_fixed(phydev); + if (err < 0) + return err; phydev->pause = 0; phydev->asym_pause = 0; diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c index c61f2295e21e..ec19efe7ea99 100644 --- a/drivers/net/phy/phy_device.c +++ b/drivers/net/phy/phy_device.c @@ -1992,6 +1992,36 @@ int genphy_read_lpa(struct phy_device *phydev) } EXPORT_SYMBOL(genphy_read_lpa); +/** + * genphy_read_status_fixed - read the link parameters for !aneg mode + * @phydev: target phy_device struct + * + * Read the 
current duplex and speed state for a PHY operating with + * autonegotiation disabled. + */ +int genphy_read_status_fixed(struct phy_device *phydev) +{ + int bmcr = phy_read(phydev, MII_BMCR); + + if (bmcr < 0) + return bmcr; + + if (bmcr & BMCR_FULLDPLX) + phydev->duplex = DUPLEX_FULL; + else + phydev->duplex = DUPLEX_HALF; + + if (bmcr & BMCR_SPEED1000) + phydev->speed = SPEED_1000; + else if (bmcr & BMCR_SPEED100) + phydev->speed = SPEED_100; + else + phydev->speed = SPEED_10; + + return 0; +} +EXPORT_SYMBOL(genphy_read_status_fixed); + /** * genphy_read_status - check the link status and update current link state * @phydev: target phy_device struct @@ -2026,22 +2056,9 @@ int genphy_read_status(struct phy_device *phydev) if (phydev->autoneg == AUTONEG_ENABLE && phydev->autoneg_complete) { phy_resolve_aneg_linkmode(phydev); } else if (phydev->autoneg == AUTONEG_DISABLE) { - int bmcr = phy_read(phydev, MII_BMCR); - - if (bmcr < 0) - return bmcr; - - if (bmcr & BMCR_FULLDPLX) - phydev->duplex = DUPLEX_FULL; - else - phydev->duplex = DUPLEX_HALF; - - if (bmcr & BMCR_SPEED1000) - phydev->speed = SPEED_1000; - else if (bmcr & BMCR_SPEED100) - phydev->speed = SPEED_100; - else - phydev->speed = SPEED_10; + err = genphy_read_status_fixed(phydev); + if (err < 0) + return err; } return 0; diff --git a/include/linux/phy.h b/include/linux/phy.h index 1c4f97d2631d..b2105e0d72d3 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -1100,6 +1100,7 @@ int __genphy_config_aneg(struct phy_device *phydev, bool changed); int genphy_aneg_done(struct phy_device *phydev); int genphy_update_link(struct phy_device *phydev); int genphy_read_lpa(struct phy_device *phydev); +int genphy_read_status_fixed(struct phy_device *phydev); int genphy_read_status(struct phy_device *phydev); int genphy_suspend(struct phy_device *phydev); int genphy_resume(struct phy_device *phydev); -- cgit v1.2.3-71-gd317 From 8d5a49e9e31ba1ddd34a54b2351d068a90c78707 Mon Sep 17 00:00:00 2001 From: Jakub Kicinski Date: Tue, 17 Dec 2019 14:12:01 -0800 Subject: net/tls: add helper for testing if socket is RX offloaded There is currently no way for driver to reliably check that the socket it has looked up is in fact RX offloaded. Add a helper. This allows drivers to catch misbehaving firmware. Signed-off-by: Jakub Kicinski Signed-off-by: David S. 
Miller --- include/net/tls.h | 9 +++++++++ net/tls/tls_device.c | 5 +++-- 2 files changed, 12 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/include/net/tls.h b/include/net/tls.h index df630f5fc723..bf9eb4823933 100644 --- a/include/net/tls.h +++ b/include/net/tls.h @@ -641,6 +641,7 @@ int tls_sw_fallback_init(struct sock *sk, #ifdef CONFIG_TLS_DEVICE void tls_device_init(void); void tls_device_cleanup(void); +void tls_device_sk_destruct(struct sock *sk); int tls_set_device_offload(struct sock *sk, struct tls_context *ctx); void tls_device_free_resources_tx(struct sock *sk); int tls_set_device_offload_rx(struct sock *sk, struct tls_context *ctx); @@ -649,6 +650,14 @@ void tls_device_rx_resync_new_rec(struct sock *sk, u32 rcd_len, u32 seq); void tls_offload_tx_resync_request(struct sock *sk, u32 got_seq, u32 exp_seq); int tls_device_decrypted(struct sock *sk, struct tls_context *tls_ctx, struct sk_buff *skb, struct strp_msg *rxm); + +static inline bool tls_is_sk_rx_device_offloaded(struct sock *sk) +{ + if (!sk_fullsock(sk) || + smp_load_acquire(&sk->sk_destruct) != tls_device_sk_destruct) + return false; + return tls_get_ctx(sk)->rx_conf == TLS_HW; +} #else static inline void tls_device_init(void) {} static inline void tls_device_cleanup(void) {} diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c index cd91ad812291..1ba5a92832bb 100644 --- a/net/tls/tls_device.c +++ b/net/tls/tls_device.c @@ -178,7 +178,7 @@ static void tls_icsk_clean_acked(struct sock *sk, u32 acked_seq) * socket and no in-flight SKBs associated with this * socket, so it is safe to free all the resources. */ -static void tls_device_sk_destruct(struct sock *sk) +void tls_device_sk_destruct(struct sock *sk) { struct tls_context *tls_ctx = tls_get_ctx(sk); struct tls_offload_context_tx *ctx = tls_offload_ctx_tx(tls_ctx); @@ -196,6 +196,7 @@ static void tls_device_sk_destruct(struct sock *sk) if (refcount_dec_and_test(&tls_ctx->refcount)) tls_device_queue_ctx_destruction(tls_ctx); } +EXPORT_SYMBOL_GPL(tls_device_sk_destruct); void tls_device_free_resources_tx(struct sock *sk) { @@ -903,7 +904,7 @@ static void tls_device_attach(struct tls_context *ctx, struct sock *sk, spin_unlock_irq(&tls_device_lock); ctx->sk_destruct = sk->sk_destruct; - sk->sk_destruct = tls_device_sk_destruct; + smp_store_release(&sk->sk_destruct, tls_device_sk_destruct); } } -- cgit v1.2.3-71-gd317 From e312b9e706ed6d94f6cc9088fcd9fbd81de4525c Mon Sep 17 00:00:00 2001 From: Björn Töpel Date: Thu, 19 Dec 2019 07:10:02 +0100 Subject: xsk: Make xskmap flush_list common for all map instances MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The xskmap flush list is used to track entries that need to be flushed via the xdp_do_flush_map() function. This list used to be per-map, but there is really no reason for that. Instead make the flush list global for all xskmaps, which simplifies __xsk_map_flush() and xsk_map_alloc().
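The change is easy to see in miniature: the per-map flush_list member and the container_of() lookup on the map pointer give way to a single static per-CPU list that every xskmap shares. A condensed sketch of the resulting redirect path (reconstructed around the hunks below; the flush_node bookkeeping is paraphrased from the surrounding xsk code, with error handling trimmed):

static DEFINE_PER_CPU(struct list_head, xskmap_flush_list);

int __xsk_map_redirect(struct xdp_sock *xs, struct xdp_buff *xdp)
{
        /* One flush list per CPU, shared by every xskmap instance,
         * so the map pointer is no longer needed here.
         */
        struct list_head *flush_list = this_cpu_ptr(&xskmap_flush_list);
        int err;

        err = xsk_rcv(xs, xdp);
        if (err)
                return err;

        /* The flush path clears .prev when unlinking an entry, so a
         * NULL .prev doubles as a "not yet queued for flush" test.
         */
        if (!xs->flush_node.prev)
                list_add(&xs->flush_node, flush_list);

        return 0;
}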
Signed-off-by: Björn Töpel Signed-off-by: Alexei Starovoitov Acked-by: Toke Høiland-Jørgensen Link: https://lore.kernel.org/bpf/20191219061006.21980-5-bjorn.topel@gmail.com --- include/net/xdp_sock.h | 11 ++++------- kernel/bpf/xskmap.c | 18 +++--------------- net/core/filter.c | 9 ++++----- net/xdp/xsk.c | 17 +++++++++-------- 4 files changed, 20 insertions(+), 35 deletions(-) (limited to 'include') diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h index e3780e4b74e1..48594740d67c 100644 --- a/include/net/xdp_sock.h +++ b/include/net/xdp_sock.h @@ -72,7 +72,6 @@ struct xdp_umem { struct xsk_map { struct bpf_map map; - struct list_head __percpu *flush_list; spinlock_t lock; /* Synchronize map updates */ struct xdp_sock *xsk_map[]; }; @@ -139,9 +138,8 @@ void xsk_map_try_sock_delete(struct xsk_map *map, struct xdp_sock *xs, struct xdp_sock **map_entry); int xsk_map_inc(struct xsk_map *map); void xsk_map_put(struct xsk_map *map); -int __xsk_map_redirect(struct bpf_map *map, struct xdp_buff *xdp, - struct xdp_sock *xs); -void __xsk_map_flush(struct bpf_map *map); +int __xsk_map_redirect(struct xdp_sock *xs, struct xdp_buff *xdp); +void __xsk_map_flush(void); static inline struct xdp_sock *__xsk_map_lookup_elem(struct bpf_map *map, u32 key) @@ -369,13 +367,12 @@ static inline u64 xsk_umem_adjust_offset(struct xdp_umem *umem, u64 handle, return 0; } -static inline int __xsk_map_redirect(struct bpf_map *map, struct xdp_buff *xdp, - struct xdp_sock *xs) +static inline int __xsk_map_redirect(struct xdp_sock *xs, struct xdp_buff *xdp) { return -EOPNOTSUPP; } -static inline void __xsk_map_flush(struct bpf_map *map) +static inline void __xsk_map_flush(void) { } diff --git a/kernel/bpf/xskmap.c b/kernel/bpf/xskmap.c index 90c4fce1c981..2cc5c8f4c800 100644 --- a/kernel/bpf/xskmap.c +++ b/kernel/bpf/xskmap.c @@ -72,9 +72,9 @@ static void xsk_map_sock_delete(struct xdp_sock *xs, static struct bpf_map *xsk_map_alloc(union bpf_attr *attr) { struct bpf_map_memory mem; - int cpu, err, numa_node; + int err, numa_node; struct xsk_map *m; - u64 cost, size; + u64 size; if (!capable(CAP_NET_ADMIN)) return ERR_PTR(-EPERM); @@ -86,9 +86,8 @@ static struct bpf_map *xsk_map_alloc(union bpf_attr *attr) numa_node = bpf_map_attr_numa_node(attr); size = struct_size(m, xsk_map, attr->max_entries); - cost = size + array_size(sizeof(*m->flush_list), num_possible_cpus()); - err = bpf_map_charge_init(&mem, cost); + err = bpf_map_charge_init(&mem, size); if (err < 0) return ERR_PTR(err); @@ -102,16 +101,6 @@ static struct bpf_map *xsk_map_alloc(union bpf_attr *attr) bpf_map_charge_move(&m->map.memory, &mem); spin_lock_init(&m->lock); - m->flush_list = alloc_percpu(struct list_head); - if (!m->flush_list) { - bpf_map_charge_finish(&m->map.memory); - bpf_map_area_free(m); - return ERR_PTR(-ENOMEM); - } - - for_each_possible_cpu(cpu) - INIT_LIST_HEAD(per_cpu_ptr(m->flush_list, cpu)); - return &m->map; } @@ -121,7 +110,6 @@ static void xsk_map_free(struct bpf_map *map) bpf_clear_redirect_map(map); synchronize_net(); - free_percpu(m->flush_list); bpf_map_area_free(m); } diff --git a/net/core/filter.c b/net/core/filter.c index a411f7835dee..c51678c473c5 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -3511,8 +3511,7 @@ err: static int __bpf_tx_xdp_map(struct net_device *dev_rx, void *fwd, struct bpf_map *map, - struct xdp_buff *xdp, - u32 index) + struct xdp_buff *xdp) { int err; @@ -3537,7 +3536,7 @@ static int __bpf_tx_xdp_map(struct net_device *dev_rx, void *fwd, case BPF_MAP_TYPE_XSKMAP: { struct xdp_sock 
*xs = fwd; - err = __xsk_map_redirect(map, xdp, xs); + err = __xsk_map_redirect(xs, xdp); return err; } default: @@ -3562,7 +3561,7 @@ void xdp_do_flush_map(void) __cpu_map_flush(map); break; case BPF_MAP_TYPE_XSKMAP: - __xsk_map_flush(map); + __xsk_map_flush(); break; default: break; @@ -3619,7 +3618,7 @@ static int xdp_do_redirect_map(struct net_device *dev, struct xdp_buff *xdp, if (ri->map_to_flush && unlikely(ri->map_to_flush != map)) xdp_do_flush_map(); - err = __bpf_tx_xdp_map(dev, fwd, map, xdp, index); + err = __bpf_tx_xdp_map(dev, fwd, map, xdp); if (unlikely(err)) goto err; diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c index 956793893c9d..e45c27f5cfca 100644 --- a/net/xdp/xsk.c +++ b/net/xdp/xsk.c @@ -31,6 +31,8 @@ #define TX_BATCH_SIZE 16 +static DEFINE_PER_CPU(struct list_head, xskmap_flush_list); + bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs) { return READ_ONCE(xs->rx) && READ_ONCE(xs->umem) && @@ -264,11 +266,9 @@ out_unlock: return err; } -int __xsk_map_redirect(struct bpf_map *map, struct xdp_buff *xdp, - struct xdp_sock *xs) +int __xsk_map_redirect(struct xdp_sock *xs, struct xdp_buff *xdp) { - struct xsk_map *m = container_of(map, struct xsk_map, map); - struct list_head *flush_list = this_cpu_ptr(m->flush_list); + struct list_head *flush_list = this_cpu_ptr(&xskmap_flush_list); int err; err = xsk_rcv(xs, xdp); @@ -281,10 +281,9 @@ int __xsk_map_redirect(struct bpf_map *map, struct xdp_buff *xdp, return 0; } -void __xsk_map_flush(struct bpf_map *map) +void __xsk_map_flush(void) { - struct xsk_map *m = container_of(map, struct xsk_map, map); - struct list_head *flush_list = this_cpu_ptr(m->flush_list); + struct list_head *flush_list = this_cpu_ptr(&xskmap_flush_list); struct xdp_sock *xs, *tmp; list_for_each_entry_safe(xs, tmp, flush_list, flush_node) { @@ -1177,7 +1176,7 @@ static struct pernet_operations xsk_net_ops = { static int __init xsk_init(void) { - int err; + int err, cpu; err = proto_register(&xsk_proto, 0 /* no slab */); if (err) @@ -1195,6 +1194,8 @@ static int __init xsk_init(void) if (err) goto out_pernet; + for_each_possible_cpu(cpu) + INIT_LIST_HEAD(&per_cpu(xskmap_flush_list, cpu)); return 0; out_pernet: -- cgit v1.2.3-71-gd317 From 96360004b8628541f5d05a845ea213267db0b1a2 Mon Sep 17 00:00:00 2001 From: Björn Töpel Date: Thu, 19 Dec 2019 07:10:03 +0100 Subject: xdp: Make devmap flush_list common for all map instances MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The devmap flush list is used to track entries that need to be flushed via the xdp_do_flush_map() function. This list used to be per-map, but there is really no reason for that. Instead make the flush list global for all devmaps, which simplifies __dev_map_flush() and dev_map_init_map().
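One consequence of moving the list out of the map is that nothing allocates or initializes it per map anymore, so every possible CPU's list head has to be set up once at boot instead. A minimal sketch of that pattern (mirroring the dev_map_init() hunk below):

static DEFINE_PER_CPU(struct list_head, dev_map_flush_list);

static int __init dev_map_init(void)
{
        int cpu;

        /* A static per-CPU list head starts out zeroed rather than
         * self-linked, so it must be initialized for each possible
         * CPU before any XDP redirect can enqueue to it.
         */
        for_each_possible_cpu(cpu)
                INIT_LIST_HEAD(&per_cpu(dev_map_flush_list, cpu));
        return 0;
}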
Signed-off-by: Björn Töpel Signed-off-by: Alexei Starovoitov Acked-by: Toke Høiland-Jørgensen Link: https://lore.kernel.org/bpf/20191219061006.21980-6-bjorn.topel@gmail.com --- include/linux/bpf.h | 4 ++-- kernel/bpf/devmap.c | 35 +++++++++++++---------------------- net/core/filter.c | 2 +- 3 files changed, 16 insertions(+), 25 deletions(-) (limited to 'include') diff --git a/include/linux/bpf.h b/include/linux/bpf.h index d467983e61bb..31191804ca09 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -959,7 +959,7 @@ struct sk_buff; struct bpf_dtab_netdev *__dev_map_lookup_elem(struct bpf_map *map, u32 key); struct bpf_dtab_netdev *__dev_map_hash_lookup_elem(struct bpf_map *map, u32 key); -void __dev_map_flush(struct bpf_map *map); +void __dev_map_flush(void); int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp, struct net_device *dev_rx); int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb, @@ -1068,7 +1068,7 @@ static inline struct net_device *__dev_map_hash_lookup_elem(struct bpf_map *map return NULL; } -static inline void __dev_map_flush(struct bpf_map *map) +static inline void __dev_map_flush(void) { } diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c index b7595de6a91a..da9c832fc5c8 100644 --- a/kernel/bpf/devmap.c +++ b/kernel/bpf/devmap.c @@ -75,7 +75,6 @@ struct bpf_dtab_netdev { struct bpf_dtab { struct bpf_map map; struct bpf_dtab_netdev **netdev_map; /* DEVMAP type only */ - struct list_head __percpu *flush_list; struct list_head list; /* these are only used for DEVMAP_HASH type maps */ @@ -85,6 +84,7 @@ struct bpf_dtab { u32 n_buckets; }; +static DEFINE_PER_CPU(struct list_head, dev_map_flush_list); static DEFINE_SPINLOCK(dev_map_lock); static LIST_HEAD(dev_map_list); @@ -109,8 +109,8 @@ static inline struct hlist_head *dev_map_index_hash(struct bpf_dtab *dtab, static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr) { - int err, cpu; - u64 cost; + u64 cost = 0; + int err; /* check sanity of attributes */ if (attr->max_entries == 0 || attr->key_size != 4 || @@ -125,9 +125,6 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr) bpf_map_init_from_attr(&dtab->map, attr); - /* make sure page count doesn't overflow */ - cost = (u64) sizeof(struct list_head) * num_possible_cpus(); - if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) { dtab->n_buckets = roundup_pow_of_two(dtab->map.max_entries); @@ -143,17 +140,10 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr) if (err) return -EINVAL; - dtab->flush_list = alloc_percpu(struct list_head); - if (!dtab->flush_list) - goto free_charge; - - for_each_possible_cpu(cpu) - INIT_LIST_HEAD(per_cpu_ptr(dtab->flush_list, cpu)); - if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) { dtab->dev_index_head = dev_map_create_hash(dtab->n_buckets); if (!dtab->dev_index_head) - goto free_percpu; + goto free_charge; spin_lock_init(&dtab->index_lock); } else { @@ -161,13 +151,11 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr) sizeof(struct bpf_dtab_netdev *), dtab->map.numa_node); if (!dtab->netdev_map) - goto free_percpu; + goto free_charge; } return 0; -free_percpu: - free_percpu(dtab->flush_list); free_charge: bpf_map_charge_finish(&dtab->map.memory); return -ENOMEM; @@ -254,7 +242,6 @@ static void dev_map_free(struct bpf_map *map) bpf_map_area_free(dtab->netdev_map); } - free_percpu(dtab->flush_list); kfree(dtab); } @@ -384,10 +371,9 @@ error: * net device can be torn down. 
On devmap tear down we ensure the flush list * is empty before completing to ensure all flush operations have completed. */ -void __dev_map_flush(struct bpf_map *map) +void __dev_map_flush(void) { - struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map); - struct list_head *flush_list = this_cpu_ptr(dtab->flush_list); + struct list_head *flush_list = this_cpu_ptr(&dev_map_flush_list); struct xdp_bulk_queue *bq, *tmp; rcu_read_lock(); @@ -419,7 +405,7 @@ static int bq_enqueue(struct bpf_dtab_netdev *obj, struct xdp_frame *xdpf, struct net_device *dev_rx) { - struct list_head *flush_list = this_cpu_ptr(obj->dtab->flush_list); + struct list_head *flush_list = this_cpu_ptr(&dev_map_flush_list); struct xdp_bulk_queue *bq = this_cpu_ptr(obj->bulkq); if (unlikely(bq->count == DEV_MAP_BULK_SIZE)) @@ -777,10 +763,15 @@ static struct notifier_block dev_map_notifier = { static int __init dev_map_init(void) { + int cpu; + /* Assure tracepoint shadow struct _bpf_dtab_netdev is in sync */ BUILD_BUG_ON(offsetof(struct bpf_dtab_netdev, dev) != offsetof(struct _bpf_dtab_netdev, dev)); register_netdevice_notifier(&dev_map_notifier); + + for_each_possible_cpu(cpu) + INIT_LIST_HEAD(&per_cpu(dev_map_flush_list, cpu)); return 0; } diff --git a/net/core/filter.c b/net/core/filter.c index c51678c473c5..b7570cb84902 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -3555,7 +3555,7 @@ void xdp_do_flush_map(void) switch (map->map_type) { case BPF_MAP_TYPE_DEVMAP: case BPF_MAP_TYPE_DEVMAP_HASH: - __dev_map_flush(map); + __dev_map_flush(); break; case BPF_MAP_TYPE_CPUMAP: __cpu_map_flush(map); -- cgit v1.2.3-71-gd317 From cdfafe98cabefeedbbc65af5c191c59745c03298 Mon Sep 17 00:00:00 2001 From: Björn Töpel Date: Thu, 19 Dec 2019 07:10:04 +0100 Subject: xdp: Make cpumap flush_list common for all map instances MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The cpumap flush list is used to track entries that need to be flushed via the xdp_do_flush_map() function. This list used to be per-map, but there is really no reason for that. Instead make the flush list global for all cpumaps, which simplifies __cpu_map_flush() and cpu_map_alloc().
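It is worth noting why a bare, unlocked list is safe here: entries are enqueued and flushed from the XDP fast path in NAPI (softirq) context on the local CPU, so producer and consumer never race across CPUs. A sketch of the flush side with that invariant spelled out (adapted from the hunk below; the comments on execution context are editorial):

void __cpu_map_flush(void)
{
        struct list_head *flush_list = this_cpu_ptr(&cpu_map_flush_list);
        struct xdp_bulk_queue *bq, *tmp;

        /* bq_enqueue() and this flush both run in softirq context on
         * the same CPU, so plain list operations need no spinlock.
         */
        list_for_each_entry_safe(bq, tmp, flush_list, flush_node) {
                bq_flush_to_queue(bq);

                /* Kick the kthread that drains this entry's ptr_ring. */
                wake_up_process(bq->obj->kthread);
        }
}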
Signed-off-by: Björn Töpel Signed-off-by: Alexei Starovoitov Acked-by: Toke Høiland-Jørgensen Link: https://lore.kernel.org/bpf/20191219061006.21980-7-bjorn.topel@gmail.com --- include/linux/bpf.h | 4 ++-- kernel/bpf/cpumap.c | 36 ++++++++++++++++++------------------ net/core/filter.c | 2 +- 3 files changed, 21 insertions(+), 21 deletions(-) (limited to 'include') diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 31191804ca09..8f3e00c84f39 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -966,7 +966,7 @@ int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb, struct bpf_prog *xdp_prog); struct bpf_cpu_map_entry *__cpu_map_lookup_elem(struct bpf_map *map, u32 key); -void __cpu_map_flush(struct bpf_map *map); +void __cpu_map_flush(void); int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_buff *xdp, struct net_device *dev_rx); @@ -1097,7 +1097,7 @@ struct bpf_cpu_map_entry *__cpu_map_lookup_elem(struct bpf_map *map, u32 key) return NULL; } -static inline void __cpu_map_flush(struct bpf_map *map) +static inline void __cpu_map_flush(void) { } diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index f9deed659798..70f71b154fa5 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -72,17 +72,18 @@ struct bpf_cpu_map { struct bpf_map map; /* Below members specific for map type */ struct bpf_cpu_map_entry **cpu_map; - struct list_head __percpu *flush_list; }; +static DEFINE_PER_CPU(struct list_head, cpu_map_flush_list); + static int bq_flush_to_queue(struct xdp_bulk_queue *bq); static struct bpf_map *cpu_map_alloc(union bpf_attr *attr) { struct bpf_cpu_map *cmap; int err = -ENOMEM; - int ret, cpu; u64 cost; + int ret; if (!capable(CAP_SYS_ADMIN)) return ERR_PTR(-EPERM); @@ -106,7 +107,6 @@ static struct bpf_map *cpu_map_alloc(union bpf_attr *attr) /* make sure page count doesn't overflow */ cost = (u64) cmap->map.max_entries * sizeof(struct bpf_cpu_map_entry *); - cost += sizeof(struct list_head) * num_possible_cpus(); /* Notice returns -EPERM on if map size is larger than memlock limit */ ret = bpf_map_charge_init(&cmap->map.memory, cost); @@ -115,23 +115,14 @@ static struct bpf_map *cpu_map_alloc(union bpf_attr *attr) goto free_cmap; } - cmap->flush_list = alloc_percpu(struct list_head); - if (!cmap->flush_list) - goto free_charge; - - for_each_possible_cpu(cpu) - INIT_LIST_HEAD(per_cpu_ptr(cmap->flush_list, cpu)); - /* Alloc array for possible remote "destination" CPUs */ cmap->cpu_map = bpf_map_area_alloc(cmap->map.max_entries * sizeof(struct bpf_cpu_map_entry *), cmap->map.numa_node); if (!cmap->cpu_map) - goto free_percpu; + goto free_charge; return &cmap->map; -free_percpu: - free_percpu(cmap->flush_list); free_charge: bpf_map_charge_finish(&cmap->map.memory); free_cmap: @@ -526,7 +517,6 @@ static void cpu_map_free(struct bpf_map *map) /* bq flush and cleanup happens after RCU grace-period */ __cpu_map_entry_replace(cmap, i, NULL); /* call_rcu */ } - free_percpu(cmap->flush_list); bpf_map_area_free(cmap->cpu_map); kfree(cmap); } @@ -618,7 +608,7 @@ static int bq_flush_to_queue(struct xdp_bulk_queue *bq) */ static int bq_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_frame *xdpf) { - struct list_head *flush_list = this_cpu_ptr(rcpu->cmap->flush_list); + struct list_head *flush_list = this_cpu_ptr(&cpu_map_flush_list); struct xdp_bulk_queue *bq = this_cpu_ptr(rcpu->bulkq); if (unlikely(bq->count == CPU_MAP_BULK_SIZE)) @@ -657,10 +647,9 @@ int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_buff *xdp, return 0; 
} -void __cpu_map_flush(struct bpf_map *map) +void __cpu_map_flush(void) { - struct bpf_cpu_map *cmap = container_of(map, struct bpf_cpu_map, map); - struct list_head *flush_list = this_cpu_ptr(cmap->flush_list); + struct list_head *flush_list = this_cpu_ptr(&cpu_map_flush_list); struct xdp_bulk_queue *bq, *tmp; list_for_each_entry_safe(bq, tmp, flush_list, flush_node) { @@ -670,3 +659,14 @@ void __cpu_map_flush(struct bpf_map *map) wake_up_process(bq->obj->kthread); } } + +static int __init cpu_map_init(void) +{ + int cpu; + + for_each_possible_cpu(cpu) + INIT_LIST_HEAD(&per_cpu(cpu_map_flush_list, cpu)); + return 0; +} + +subsys_initcall(cpu_map_init); diff --git a/net/core/filter.c b/net/core/filter.c index b7570cb84902..c706325b3e66 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -3558,7 +3558,7 @@ void xdp_do_flush_map(void) __dev_map_flush(); break; case BPF_MAP_TYPE_CPUMAP: - __cpu_map_flush(map); + __cpu_map_flush(); break; case BPF_MAP_TYPE_XSKMAP: __xsk_map_flush(); -- cgit v1.2.3-71-gd317 From 332f22a60e4c3492d4953cd6f7aaa4e8bd0bba97 Mon Sep 17 00:00:00 2001 From: Björn Töpel Date: Thu, 19 Dec 2019 07:10:05 +0100 Subject: xdp: Remove map_to_flush and map swap detection MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Now that all XDP maps that can be used with bpf_redirect_map() track entries to be flushed in a global fashion, there is no need to track that the map has changed and flush from xdp_do_generic_map() anymore. All entries will be flushed in xdp_do_flush_map(). This means that map_to_flush and the corresponding checks can be removed. Moving the flush logic to one place, xdp_do_flush_map(), gives a bulking behavior and performance boost. Signed-off-by: Björn Töpel Signed-off-by: Alexei Starovoitov Acked-by: Toke Høiland-Jørgensen Link: https://lore.kernel.org/bpf/20191219061006.21980-8-bjorn.topel@gmail.com --- include/linux/filter.h | 1 - net/core/filter.c | 27 +++------------------------ 2 files changed, 3 insertions(+), 25 deletions(-) (limited to 'include') diff --git a/include/linux/filter.h b/include/linux/filter.h index 37ac7025031d..69d6706fc889 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -592,7 +592,6 @@ struct bpf_redirect_info { u32 tgt_index; void *tgt_value; struct bpf_map *map; - struct bpf_map *map_to_flush; u32 kern_flags; }; diff --git a/net/core/filter.c b/net/core/filter.c index c706325b3e66..d9caa3e57ea1 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -3547,26 +3547,9 @@ static int __bpf_tx_xdp_map(struct net_device *dev_rx, void *fwd, void xdp_do_flush_map(void) { - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); - struct bpf_map *map = ri->map_to_flush; - - ri->map_to_flush = NULL; - if (map) { - switch (map->map_type) { - case BPF_MAP_TYPE_DEVMAP: - case BPF_MAP_TYPE_DEVMAP_HASH: - __dev_map_flush(); - break; - case BPF_MAP_TYPE_CPUMAP: - __cpu_map_flush(); - break; - case BPF_MAP_TYPE_XSKMAP: - __xsk_map_flush(); - break; - default: - break; - } - } + __dev_map_flush(); + __cpu_map_flush(); + __xsk_map_flush(); } EXPORT_SYMBOL_GPL(xdp_do_flush_map); @@ -3615,14 +3598,10 @@ static int xdp_do_redirect_map(struct net_device *dev, struct xdp_buff *xdp, ri->tgt_value = NULL; WRITE_ONCE(ri->map, NULL); - if (ri->map_to_flush && unlikely(ri->map_to_flush != map)) - xdp_do_flush_map(); - err = __bpf_tx_xdp_map(dev, fwd, map, xdp); if (unlikely(err)) goto err; - ri->map_to_flush = map; _trace_xdp_redirect_map(dev, xdp_prog, fwd, map, index); return
0; err: -- cgit v1.2.3-71-gd317 From 7dd68b3279f1792103d12e69933db3128c6d416e Mon Sep 17 00:00:00 2001 From: Andrey Ignatov Date: Wed, 18 Dec 2019 23:44:35 -0800 Subject: bpf: Support replacing cgroup-bpf program in MULTI mode The common use-case in production is to have multiple cgroup-bpf programs per attach type that cover multiple use-cases. Such programs are attached with BPF_F_ALLOW_MULTI and can be maintained by different people. Order of programs usually matters, for example imagine two egress programs: the first one drops packets and the second one counts packets. If they're swapped, the result of the counting program will be different. This brings operational challenges when updating cgroup-bpf program(s) attached with BPF_F_ALLOW_MULTI, since there is no way to replace a program: * One way to update is to detach all programs first and then attach the new version(s) again in the right order. This interrupts the work a program is doing and may not be acceptable (e.g. if it's an egress firewall); * Another way is to attach the new version of a program first and only then detach the old version. This introduces a time interval during which two versions of the same program are running, which may not be acceptable if a program is not idempotent. It also imposes an additional burden on program developers to make sure that two versions of their program can co-exist. Solve the problem by introducing a "replace" mode in the BPF_PROG_ATTACH command for cgroup-bpf programs attached with the BPF_F_ALLOW_MULTI flag. This mode is enabled by the newly introduced BPF_F_REPLACE attach flag and the bpf_attr.replace_bpf_fd attribute, which passes the fd of the old program to replace. That way a user can replace any program among those attached with the BPF_F_ALLOW_MULTI flag without the problems described above. Details of the new API: * If BPF_F_REPLACE is set but replace_bpf_fd doesn't hold a valid descriptor of a BPF program, BPF_PROG_ATTACH will return a corresponding error (EINVAL or EBADF). * If replace_bpf_fd holds a valid descriptor of a BPF program but such a program is not attached to the specified cgroup, BPF_PROG_ATTACH will return ENOENT. BPF_F_REPLACE is introduced to make the user intent clear, since replace_bpf_fd alone can't be used for this (its default value, 0, is a valid fd). BPF_F_REPLACE also makes it possible to extend the API in the future (e.g. add BPF_F_BEFORE and BPF_F_AFTER if needed).
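For illustration, a minimal userspace sketch of the new mode (a hypothetical helper: the fds and the attach type are placeholders, and the raw bpf(2) syscall is used to keep it self-contained):

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

/* Swap old_fd for new_fd in the cgroup's MULTI list in one syscall,
 * keeping the replaced program's position in the list.
 */
static int replace_cgroup_prog(int cgroup_fd, int old_fd, int new_fd)
{
        union bpf_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.target_fd = cgroup_fd;
        attr.attach_bpf_fd = new_fd;
        attr.attach_type = BPF_CGROUP_INET_EGRESS;
        attr.attach_flags = BPF_F_ALLOW_MULTI | BPF_F_REPLACE;
        attr.replace_bpf_fd = old_fd;

        return syscall(SYS_bpf, BPF_PROG_ATTACH, &attr, sizeof(attr));
}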
Signed-off-by: Andrey Ignatov Signed-off-by: Alexei Starovoitov Acked-by: Martin KaFai Lau Acked-by: Andrii Nakryiko Link: https://lore.kernel.org/bpf/30cd850044a0057bdfcaaf154b7d2f39850ba813.1576741281.git.rdna@fb.com --- include/linux/bpf-cgroup.h | 4 +++- include/uapi/linux/bpf.h | 10 ++++++++++ kernel/bpf/cgroup.c | 30 ++++++++++++++++++++++++++---- kernel/bpf/syscall.c | 4 ++-- kernel/cgroup/cgroup.c | 5 +++-- tools/include/uapi/linux/bpf.h | 10 ++++++++++ 6 files changed, 54 insertions(+), 9 deletions(-) (limited to 'include') diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h index 169fd25f6bc2..18f6a6da7c3c 100644 --- a/include/linux/bpf-cgroup.h +++ b/include/linux/bpf-cgroup.h @@ -85,6 +85,7 @@ int cgroup_bpf_inherit(struct cgroup *cgrp); void cgroup_bpf_offline(struct cgroup *cgrp); int __cgroup_bpf_attach(struct cgroup *cgrp, struct bpf_prog *prog, + struct bpf_prog *replace_prog, enum bpf_attach_type type, u32 flags); int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog, enum bpf_attach_type type); @@ -93,7 +94,8 @@ int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr, /* Wrapper for __cgroup_bpf_*() protected by cgroup_mutex */ int cgroup_bpf_attach(struct cgroup *cgrp, struct bpf_prog *prog, - enum bpf_attach_type type, u32 flags); + struct bpf_prog *replace_prog, enum bpf_attach_type type, + u32 flags); int cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog, enum bpf_attach_type type, u32 flags); int cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr, diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index dbbcf0b02970..7df436da542d 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -231,6 +231,11 @@ enum bpf_attach_type { * When children program makes decision (like picking TCP CA or sock bind) * parent program has a chance to override it. * + * With BPF_F_ALLOW_MULTI a new program is added to the end of the list of + * programs for a cgroup. Though it's possible to replace an old program at + * any position by also specifying BPF_F_REPLACE flag and position itself in + * replace_bpf_fd attribute. Old program at this position will be released. + * * A cgroup with MULTI or OVERRIDE flag allows any attach flags in sub-cgroups. * A cgroup with NONE doesn't allow any programs in sub-cgroups. * Ex1: @@ -249,6 +254,7 @@ enum bpf_attach_type { */ #define BPF_F_ALLOW_OVERRIDE (1U << 0) #define BPF_F_ALLOW_MULTI (1U << 1) +#define BPF_F_REPLACE (1U << 2) /* If BPF_F_STRICT_ALIGNMENT is used in BPF_PROG_LOAD command, the * verifier will perform strict alignment checking as if the kernel @@ -442,6 +448,10 @@ union bpf_attr { __u32 attach_bpf_fd; /* eBPF program to attach */ __u32 attach_type; __u32 attach_flags; + __u32 replace_bpf_fd; /* previously attached eBPF + * program to replace if + * BPF_F_REPLACE is used + */ }; struct { /* anonymous struct used by BPF_PROG_TEST_RUN command */ diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c index 283efe3ce052..45346c79613a 100644 --- a/kernel/bpf/cgroup.c +++ b/kernel/bpf/cgroup.c @@ -282,14 +282,17 @@ cleanup: * propagate the change to descendants * @cgrp: The cgroup which descendants to traverse * @prog: A program to attach + * @replace_prog: Previously attached program to replace if BPF_F_REPLACE is set * @type: Type of attach operation * @flags: Option flags * * Must be called with cgroup_mutex held.
*/ int __cgroup_bpf_attach(struct cgroup *cgrp, struct bpf_prog *prog, + struct bpf_prog *replace_prog, enum bpf_attach_type type, u32 flags) { + u32 saved_flags = (flags & (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI)); struct list_head *progs = &cgrp->bpf.progs[type]; struct bpf_prog *old_prog = NULL; struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE], @@ -298,14 +301,15 @@ int __cgroup_bpf_attach(struct cgroup *cgrp, struct bpf_prog *prog, enum bpf_cgroup_storage_type stype; int err; - if ((flags & BPF_F_ALLOW_OVERRIDE) && (flags & BPF_F_ALLOW_MULTI)) + if (((flags & BPF_F_ALLOW_OVERRIDE) && (flags & BPF_F_ALLOW_MULTI)) || + ((flags & BPF_F_REPLACE) && !(flags & BPF_F_ALLOW_MULTI))) /* invalid combination */ return -EINVAL; if (!hierarchy_allows_attach(cgrp, type)) return -EPERM; - if (!list_empty(progs) && cgrp->bpf.flags[type] != flags) + if (!list_empty(progs) && cgrp->bpf.flags[type] != saved_flags) /* Disallow attaching non-overridable on top * of existing overridable in this cgroup. * Disallow attaching multi-prog if overridable or none @@ -320,7 +324,12 @@ int __cgroup_bpf_attach(struct cgroup *cgrp, struct bpf_prog *prog, if (pl->prog == prog) /* disallow attaching the same prog twice */ return -EINVAL; + if (pl->prog == replace_prog) + replace_pl = pl; } + if ((flags & BPF_F_REPLACE) && !replace_pl) + /* prog to replace not found for cgroup */ + return -ENOENT; } else if (!list_empty(progs)) { replace_pl = list_first_entry(progs, typeof(*pl), node); } @@ -356,7 +365,7 @@ int __cgroup_bpf_attach(struct cgroup *cgrp, struct bpf_prog *prog, for_each_cgroup_storage_type(stype) pl->storage[stype] = storage[stype]; - cgrp->bpf.flags[type] = flags; + cgrp->bpf.flags[type] = saved_flags; err = update_effective_progs(cgrp, type); if (err) @@ -522,6 +531,7 @@ int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr, int cgroup_bpf_prog_attach(const union bpf_attr *attr, enum bpf_prog_type ptype, struct bpf_prog *prog) { + struct bpf_prog *replace_prog = NULL; struct cgroup *cgrp; int ret; @@ -529,8 +539,20 @@ int cgroup_bpf_prog_attach(const union bpf_attr *attr, if (IS_ERR(cgrp)) return PTR_ERR(cgrp); - ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type, + if ((attr->attach_flags & BPF_F_ALLOW_MULTI) && + (attr->attach_flags & BPF_F_REPLACE)) { + replace_prog = bpf_prog_get_type(attr->replace_bpf_fd, ptype); + if (IS_ERR(replace_prog)) { + cgroup_put(cgrp); + return PTR_ERR(replace_prog); + } + } + + ret = cgroup_bpf_attach(cgrp, prog, replace_prog, attr->attach_type, attr->attach_flags); + + if (replace_prog) + bpf_prog_put(replace_prog); cgroup_put(cgrp); return ret; } diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index b08c362f4e02..81ee8595dfee 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -2073,10 +2073,10 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog, } } -#define BPF_PROG_ATTACH_LAST_FIELD attach_flags +#define BPF_PROG_ATTACH_LAST_FIELD replace_bpf_fd #define BPF_F_ATTACH_MASK \ - (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI) + (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI | BPF_F_REPLACE) static int bpf_prog_attach(const union bpf_attr *attr) { diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c index 735af8f15f95..725365df066d 100644 --- a/kernel/cgroup/cgroup.c +++ b/kernel/cgroup/cgroup.c @@ -6288,12 +6288,13 @@ void cgroup_sk_free(struct sock_cgroup_data *skcd) #ifdef CONFIG_CGROUP_BPF int cgroup_bpf_attach(struct cgroup *cgrp, struct bpf_prog *prog, - enum bpf_attach_type type, u32 
flags) + struct bpf_prog *replace_prog, enum bpf_attach_type type, + u32 flags) { int ret; mutex_lock(&cgroup_mutex); - ret = __cgroup_bpf_attach(cgrp, prog, type, flags); + ret = __cgroup_bpf_attach(cgrp, prog, replace_prog, type, flags); mutex_unlock(&cgroup_mutex); return ret; } diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index dbbcf0b02970..7df436da542d 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -231,6 +231,11 @@ enum bpf_attach_type { * When children program makes decision (like picking TCP CA or sock bind) * parent program has a chance to override it. * + * With BPF_F_ALLOW_MULTI a new program is added to the end of the list of + * programs for a cgroup. Though it's possible to replace an old program at + * any position by also specifying BPF_F_REPLACE flag and position itself in + * replace_bpf_fd attribute. Old program at this position will be released. + * * A cgroup with MULTI or OVERRIDE flag allows any attach flags in sub-cgroups. * A cgroup with NONE doesn't allow any programs in sub-cgroups. * Ex1: @@ -249,6 +254,7 @@ enum bpf_attach_type { */ #define BPF_F_ALLOW_OVERRIDE (1U << 0) #define BPF_F_ALLOW_MULTI (1U << 1) +#define BPF_F_REPLACE (1U << 2) /* If BPF_F_STRICT_ALIGNMENT is used in BPF_PROG_LOAD command, the * verifier will perform strict alignment checking as if the kernel @@ -442,6 +448,10 @@ union bpf_attr { __u32 attach_bpf_fd; /* eBPF program to attach */ __u32 attach_type; __u32 attach_flags; + __u32 replace_bpf_fd; /* previously attached eBPF + * program to replace if + * BPF_F_REPLACE is used + */ }; struct { /* anonymous struct used by BPF_PROG_TEST_RUN command */ -- cgit v1.2.3-71-gd317 From 03896ef1f0cb23d2742ddf486c531c700a2da7d6 Mon Sep 17 00:00:00 2001 From: Magnus Karlsson Date: Thu, 19 Dec 2019 13:39:27 +0100 Subject: xsk: Change names of validation functions Change the names of the validation functions to better reflect what they are doing. The uppermost ones are reading entries from the rings and only the bottom ones validate entries. So xskq_cons_read_ is a better prefix name. Also change the xskq_cons_read_ functions to return a bool as the descriptor or address is already returned by reference in the parameters. Everyone is using the return value as a bool anyway.
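The bool convention also makes call sites read naturally as loops. A small illustrative consumer (the drain function is hypothetical; xskq_cons_peek_desc() and xskq_cons_release() are the helpers renamed by this patch):

static void tx_drain(struct xsk_queue *q, struct xdp_umem *umem)
{
        struct xdp_desc desc;

        /* The descriptor comes back through the out-parameter; the
         * return value only signals whether one was available.
         */
        while (xskq_cons_peek_desc(q, &desc, umem)) {
                /* ... hand desc.addr / desc.len to the driver ... */
                xskq_cons_release(q);
        }
}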
Signed-off-by: Magnus Karlsson Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/1576759171-28550-9-git-send-email-magnus.karlsson@intel.com --- include/net/xdp_sock.h | 4 ++-- net/xdp/xsk.c | 4 ++-- net/xdp/xsk_queue.h | 59 ++++++++++++++++++++++++++------------------------ 3 files changed, 35 insertions(+), 32 deletions(-) (limited to 'include') diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h index 48594740d67c..63f005830866 100644 --- a/include/net/xdp_sock.h +++ b/include/net/xdp_sock.h @@ -118,7 +118,7 @@ int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp); bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs); /* Used from netdev driver */ bool xsk_umem_has_addrs(struct xdp_umem *umem, u32 cnt); -u64 *xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr); +bool xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr); void xsk_umem_discard_addr(struct xdp_umem *umem); void xsk_umem_complete_tx(struct xdp_umem *umem, u32 nb_entries); bool xsk_umem_consume_tx(struct xdp_umem *umem, struct xdp_desc *desc); @@ -197,7 +197,7 @@ static inline bool xsk_umem_has_addrs_rq(struct xdp_umem *umem, u32 cnt) return xsk_umem_has_addrs(umem, cnt - rq->length); } -static inline u64 *xsk_umem_peek_addr_rq(struct xdp_umem *umem, u64 *addr) +static inline bool xsk_umem_peek_addr_rq(struct xdp_umem *umem, u64 *addr) { struct xdp_umem_fq_reuse *rq = umem->fq_reuse; diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c index cd84b9d0f1e1..4e932a930b3a 100644 --- a/net/xdp/xsk.c +++ b/net/xdp/xsk.c @@ -45,7 +45,7 @@ bool xsk_umem_has_addrs(struct xdp_umem *umem, u32 cnt) } EXPORT_SYMBOL(xsk_umem_has_addrs); -u64 *xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr) +bool xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr) { return xskq_cons_peek_addr(umem->fq, addr, umem); } @@ -126,7 +126,7 @@ static void __xsk_rcv_memcpy(struct xdp_umem *umem, u64 addr, void *from_buf, void *to_buf = xdp_umem_get_data(umem, addr); addr = xsk_umem_add_offset_to_addr(addr); - if (xskq_crosses_non_contig_pg(umem, addr, len + metalen)) { + if (xskq_cons_crosses_non_contig_pg(umem, addr, len + metalen)) { void *next_pg_addr = umem->pages[(addr >> PAGE_SHIFT) + 1].addr; u64 page_start = addr & ~(PAGE_SIZE - 1); u64 first_len = PAGE_SIZE - (addr - page_start); diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h index 1436116767ea..6d04a96dd092 100644 --- a/net/xdp/xsk_queue.h +++ b/net/xdp/xsk_queue.h @@ -136,8 +136,9 @@ static inline bool xskq_cons_has_entries(struct xsk_queue *q, u32 cnt) /* UMEM queue */ -static inline bool xskq_crosses_non_contig_pg(struct xdp_umem *umem, u64 addr, - u64 length) +static inline bool xskq_cons_crosses_non_contig_pg(struct xdp_umem *umem, + u64 addr, + u64 length) { bool cross_pg = (addr & (PAGE_SIZE - 1)) + length > PAGE_SIZE; bool next_pg_contig = @@ -147,7 +148,7 @@ static inline bool xskq_crosses_non_contig_pg(struct xdp_umem *umem, u64 addr, return cross_pg && !next_pg_contig; } -static inline bool xskq_is_valid_addr(struct xsk_queue *q, u64 addr) +static inline bool xskq_cons_is_valid_addr(struct xsk_queue *q, u64 addr) { if (addr >= q->size) { q->invalid_descs++; @@ -157,7 +158,8 @@ static inline bool xskq_is_valid_addr(struct xsk_queue *q, u64 addr) return true; } -static inline bool xskq_is_valid_addr_unaligned(struct xsk_queue *q, u64 addr, +static inline bool xskq_cons_is_valid_unaligned(struct xsk_queue *q, + u64 addr, u64 length, struct xdp_umem *umem) { @@ -165,7 +167,7 @@ static inline bool xskq_is_valid_addr_unaligned(struct xsk_queue *q, u64 addr, 
addr = xsk_umem_add_offset_to_addr(addr); if (base_addr >= q->size || addr >= q->size || - xskq_crosses_non_contig_pg(umem, addr, length)) { + xskq_cons_crosses_non_contig_pg(umem, addr, length)) { q->invalid_descs++; return false; } @@ -173,8 +175,8 @@ static inline bool xskq_is_valid_addr_unaligned(struct xsk_queue *q, u64 addr, return true; } -static inline u64 *xskq_validate_addr(struct xsk_queue *q, u64 *addr, - struct xdp_umem *umem) +static inline bool xskq_cons_read_addr(struct xsk_queue *q, u64 *addr, + struct xdp_umem *umem) { struct xdp_umem_ring *ring = (struct xdp_umem_ring *)q->ring; @@ -184,29 +186,29 @@ static inline u64 *xskq_validate_addr(struct xsk_queue *q, u64 *addr, *addr = READ_ONCE(ring->desc[idx]) & q->chunk_mask; if (umem->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG) { - if (xskq_is_valid_addr_unaligned(q, *addr, + if (xskq_cons_is_valid_unaligned(q, *addr, umem->chunk_size_nohr, umem)) - return addr; + return true; goto out; } - if (xskq_is_valid_addr(q, *addr)) - return addr; + if (xskq_cons_is_valid_addr(q, *addr)) + return true; out: q->cached_cons++; } - return NULL; + return false; } -static inline u64 *xskq_cons_peek_addr(struct xsk_queue *q, u64 *addr, +static inline bool xskq_cons_peek_addr(struct xsk_queue *q, u64 *addr, struct xdp_umem *umem) { if (q->cached_prod == q->cached_cons) xskq_cons_get_entries(q); - return xskq_validate_addr(q, addr, umem); + return xskq_cons_read_addr(q, addr, umem); } static inline void xskq_cons_release(struct xsk_queue *q) @@ -270,11 +272,12 @@ static inline void xskq_prod_submit_n(struct xsk_queue *q, u32 nb_entries) /* Rx/Tx queue */ -static inline bool xskq_is_valid_desc(struct xsk_queue *q, struct xdp_desc *d, - struct xdp_umem *umem) +static inline bool xskq_cons_is_valid_desc(struct xsk_queue *q, + struct xdp_desc *d, + struct xdp_umem *umem) { if (umem->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG) { - if (!xskq_is_valid_addr_unaligned(q, d->addr, d->len, umem)) + if (!xskq_cons_is_valid_unaligned(q, d->addr, d->len, umem)) return false; if (d->len > umem->chunk_size_nohr || d->options) { @@ -285,7 +288,7 @@ static inline bool xskq_is_valid_desc(struct xsk_queue *q, struct xdp_desc *d, return true; } - if (!xskq_is_valid_addr(q, d->addr)) + if (!xskq_cons_is_valid_addr(q, d->addr)) return false; if (((d->addr + d->len) & q->chunk_mask) != (d->addr & q->chunk_mask) || @@ -297,31 +300,31 @@ static inline bool xskq_is_valid_desc(struct xsk_queue *q, struct xdp_desc *d, return true; } -static inline struct xdp_desc *xskq_validate_desc(struct xsk_queue *q, - struct xdp_desc *desc, - struct xdp_umem *umem) +static inline bool xskq_cons_read_desc(struct xsk_queue *q, + struct xdp_desc *desc, + struct xdp_umem *umem) { while (q->cached_cons != q->cached_prod) { struct xdp_rxtx_ring *ring = (struct xdp_rxtx_ring *)q->ring; u32 idx = q->cached_cons & q->ring_mask; *desc = READ_ONCE(ring->desc[idx]); - if (xskq_is_valid_desc(q, desc, umem)) - return desc; + if (xskq_cons_is_valid_desc(q, desc, umem)) + return true; q->cached_cons++; } - return NULL; + return false; } -static inline struct xdp_desc *xskq_cons_peek_desc(struct xsk_queue *q, - struct xdp_desc *desc, - struct xdp_umem *umem) +static inline bool xskq_cons_peek_desc(struct xsk_queue *q, + struct xdp_desc *desc, + struct xdp_umem *umem) { if (q->cached_prod == q->cached_cons) xskq_cons_get_entries(q); - return xskq_validate_desc(q, desc, umem); + return xskq_cons_read_desc(q, desc, umem); } static inline int xskq_prod_reserve_desc(struct xsk_queue *q, -- cgit v1.2.3-71-gd317 From 
f8509aa078de0842ec1817e8026e58620cd05d3b Mon Sep 17 00:00:00 2001 From: Magnus Karlsson Date: Thu, 19 Dec 2019 13:39:28 +0100 Subject: xsk: ixgbe: i40e: ice: mlx5: Xsk_umem_discard_addr to xsk_umem_release_addr Change the name of xsk_umem_discard_addr to xsk_umem_release_addr to better reflect the new naming of the AF_XDP queue manipulation functions. As this function is used by drivers implementing support for AF_XDP zero-copy, it requires a name change in these drivers. The function xsk_umem_release_addr_rq has also changed name in the same fashion. Signed-off-by: Magnus Karlsson Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/1576759171-28550-10-git-send-email-magnus.karlsson@intel.com --- drivers/net/ethernet/intel/i40e/i40e_xsk.c | 4 ++-- drivers/net/ethernet/intel/ice/ice_xsk.c | 4 ++-- drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 4 ++-- drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c | 2 +- include/net/xdp_sock.h | 10 +++++----- net/xdp/xsk.c | 4 ++-- 6 files changed, 14 insertions(+), 14 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c index d07e1a890428..20d6f11a3468 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c +++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c @@ -269,7 +269,7 @@ static bool i40e_alloc_buffer_zc(struct i40e_ring *rx_ring, bi->handle = xsk_umem_adjust_offset(umem, handle, umem->headroom); - xsk_umem_discard_addr(umem); + xsk_umem_release_addr(umem); return true; } @@ -306,7 +306,7 @@ static bool i40e_alloc_buffer_slow_zc(struct i40e_ring *rx_ring, bi->handle = xsk_umem_adjust_offset(umem, handle, umem->headroom); - xsk_umem_discard_addr_rq(umem); + xsk_umem_release_addr_rq(umem); return true; } diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c index cf9b8b22d24f..0af1f0b951c0 100644 --- a/drivers/net/ethernet/intel/ice/ice_xsk.c +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c @@ -555,7 +555,7 @@ ice_alloc_buf_fast_zc(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf) rx_buf->handle = handle + umem->headroom; - xsk_umem_discard_addr(umem); + xsk_umem_release_addr(umem); return true; } @@ -591,7 +591,7 @@ ice_alloc_buf_slow_zc(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf) rx_buf->handle = handle + umem->headroom; - xsk_umem_discard_addr_rq(umem); + xsk_umem_release_addr_rq(umem); return true; } diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c index d6feaacfbf89..1501d0a22078 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c @@ -277,7 +277,7 @@ static bool ixgbe_alloc_buffer_zc(struct ixgbe_ring *rx_ring, bi->handle = xsk_umem_adjust_offset(umem, handle, umem->headroom); - xsk_umem_discard_addr(umem); + xsk_umem_release_addr(umem); return true; } @@ -304,7 +304,7 @@ static bool ixgbe_alloc_buffer_slow_zc(struct ixgbe_ring *rx_ring, bi->handle = xsk_umem_adjust_offset(umem, handle, umem->headroom); - xsk_umem_discard_addr_rq(umem); + xsk_umem_release_addr_rq(umem); return true; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c index 475b6bd5d29b..62fc8a128a8d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c @@ -35,7 +35,7 @@ int mlx5e_xsk_page_alloc_umem(struct mlx5e_rq *rq, */ dma_info->addr = xdp_umem_get_dma(umem, handle); -
xsk_umem_discard_addr_rq(umem); + xsk_umem_release_addr_rq(umem); dma_sync_single_for_device(rq->pdev, dma_info->addr, PAGE_SIZE, DMA_BIDIRECTIONAL); diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h index 63f005830866..e86ec48ef627 100644 --- a/include/net/xdp_sock.h +++ b/include/net/xdp_sock.h @@ -119,7 +119,7 @@ bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs); /* Used from netdev driver */ bool xsk_umem_has_addrs(struct xdp_umem *umem, u32 cnt); bool xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr); -void xsk_umem_discard_addr(struct xdp_umem *umem); +void xsk_umem_release_addr(struct xdp_umem *umem); void xsk_umem_complete_tx(struct xdp_umem *umem, u32 nb_entries); bool xsk_umem_consume_tx(struct xdp_umem *umem, struct xdp_desc *desc); void xsk_umem_consume_tx_done(struct xdp_umem *umem); @@ -208,12 +208,12 @@ static inline bool xsk_umem_peek_addr_rq(struct xdp_umem *umem, u64 *addr) return addr; } -static inline void xsk_umem_discard_addr_rq(struct xdp_umem *umem) +static inline void xsk_umem_release_addr_rq(struct xdp_umem *umem) { struct xdp_umem_fq_reuse *rq = umem->fq_reuse; if (!rq->length) - xsk_umem_discard_addr(umem); + xsk_umem_release_addr(umem); else rq->length--; } @@ -258,7 +258,7 @@ static inline u64 *xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr) return NULL; } -static inline void xsk_umem_discard_addr(struct xdp_umem *umem) +static inline void xsk_umem_release_addr(struct xdp_umem *umem) { } @@ -332,7 +332,7 @@ static inline u64 *xsk_umem_peek_addr_rq(struct xdp_umem *umem, u64 *addr) return NULL; } -static inline void xsk_umem_discard_addr_rq(struct xdp_umem *umem) +static inline void xsk_umem_release_addr_rq(struct xdp_umem *umem) { } diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c index 4e932a930b3a..55092feeaf36 100644 --- a/net/xdp/xsk.c +++ b/net/xdp/xsk.c @@ -51,11 +51,11 @@ bool xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr) } EXPORT_SYMBOL(xsk_umem_peek_addr); -void xsk_umem_discard_addr(struct xdp_umem *umem) +void xsk_umem_release_addr(struct xdp_umem *umem) { xskq_cons_release(umem->fq); } -EXPORT_SYMBOL(xsk_umem_discard_addr); +EXPORT_SYMBOL(xsk_umem_release_addr); void xsk_set_rx_need_wakeup(struct xdp_umem *umem) { -- cgit v1.2.3-71-gd317 From 48fda74f0a9377ce2145ac5f06b6db0274880826 Mon Sep 17 00:00:00 2001 From: Oleksij Rempel Date: Wed, 18 Dec 2019 09:02:14 +0100 Subject: net: dsa: add support for Atheros AR9331 TAG format Add support for tag format used in Atheros AR9331 built-in switch. Reviewed-by: Vivien Didelot Reviewed-by: Andrew Lunn Signed-off-by: Oleksij Rempel Signed-off-by: David S. 
Miller --- include/net/dsa.h | 2 ++ net/dsa/Kconfig | 6 ++++ net/dsa/Makefile | 1 + net/dsa/tag_ar9331.c | 96 ++++++++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 105 insertions(+) create mode 100644 net/dsa/tag_ar9331.c (limited to 'include') diff --git a/include/net/dsa.h b/include/net/dsa.h index 6767dc3f66c0..da5578db228e 100644 --- a/include/net/dsa.h +++ b/include/net/dsa.h @@ -43,6 +43,7 @@ struct phylink_link_state; #define DSA_TAG_PROTO_SJA1105_VALUE 13 #define DSA_TAG_PROTO_KSZ8795_VALUE 14 #define DSA_TAG_PROTO_OCELOT_VALUE 15 +#define DSA_TAG_PROTO_AR9331_VALUE 16 enum dsa_tag_protocol { DSA_TAG_PROTO_NONE = DSA_TAG_PROTO_NONE_VALUE, @@ -61,6 +62,7 @@ enum dsa_tag_protocol { DSA_TAG_PROTO_SJA1105 = DSA_TAG_PROTO_SJA1105_VALUE, DSA_TAG_PROTO_KSZ8795 = DSA_TAG_PROTO_KSZ8795_VALUE, DSA_TAG_PROTO_OCELOT = DSA_TAG_PROTO_OCELOT_VALUE, + DSA_TAG_PROTO_AR9331 = DSA_TAG_PROTO_AR9331_VALUE, }; struct packet_type; diff --git a/net/dsa/Kconfig b/net/dsa/Kconfig index 1e6c3cac11e6..92663dcb3aa2 100644 --- a/net/dsa/Kconfig +++ b/net/dsa/Kconfig @@ -29,6 +29,12 @@ config NET_DSA_TAG_8021Q Drivers which use these helpers should select this as dependency. +config NET_DSA_TAG_AR9331 + tristate "Tag driver for Atheros AR9331 SoC with built-in switch" + help + Say Y or M if you want to enable support for tagging frames for + the Atheros AR9331 SoC with built-in switch. + config NET_DSA_TAG_BRCM_COMMON tristate default n diff --git a/net/dsa/Makefile b/net/dsa/Makefile index 9a482c38bdb1..108486cfdeef 100644 --- a/net/dsa/Makefile +++ b/net/dsa/Makefile @@ -5,6 +5,7 @@ dsa_core-y += dsa.o dsa2.o master.o port.o slave.o switch.o # tagging formats obj-$(CONFIG_NET_DSA_TAG_8021Q) += tag_8021q.o +obj-$(CONFIG_NET_DSA_TAG_AR9331) += tag_ar9331.o obj-$(CONFIG_NET_DSA_TAG_BRCM_COMMON) += tag_brcm.o obj-$(CONFIG_NET_DSA_TAG_DSA) += tag_dsa.o obj-$(CONFIG_NET_DSA_TAG_EDSA) += tag_edsa.o diff --git a/net/dsa/tag_ar9331.c b/net/dsa/tag_ar9331.c new file mode 100644 index 000000000000..466ffa92a474 --- /dev/null +++ b/net/dsa/tag_ar9331.c @@ -0,0 +1,96 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2019 Pengutronix, Oleksij Rempel + */ + + +#include <linux/bitfield.h> +#include <net/dsa.h> + +#include "dsa_priv.h" + +#define AR9331_HDR_LEN 2 +#define AR9331_HDR_VERSION 1 + +#define AR9331_HDR_VERSION_MASK GENMASK(15, 14) +#define AR9331_HDR_PRIORITY_MASK GENMASK(13, 12) +#define AR9331_HDR_TYPE_MASK GENMASK(10, 8) +#define AR9331_HDR_BROADCAST BIT(7) +#define AR9331_HDR_FROM_CPU BIT(6) +/* AR9331_HDR_RESERVED - not used or may be a version field. * According to the AR8216 doc it should be 0b10. On AR9331 it is 0b11 on RX path * and should be set to 0b11 to make it work.
+ */ +#define AR9331_HDR_RESERVED_MASK GENMASK(5, 4) +#define AR9331_HDR_PORT_NUM_MASK GENMASK(3, 0) + +static struct sk_buff *ar9331_tag_xmit(struct sk_buff *skb, + struct net_device *dev) +{ + struct dsa_port *dp = dsa_slave_to_port(dev); + __le16 *phdr; + u16 hdr; + + if (skb_cow_head(skb, 0) < 0) + return NULL; + + phdr = skb_push(skb, AR9331_HDR_LEN); + + hdr = FIELD_PREP(AR9331_HDR_VERSION_MASK, AR9331_HDR_VERSION); + hdr |= AR9331_HDR_FROM_CPU | dp->index; + /* 0b10 for AR8216 and 0b11 for AR9331 */ + hdr |= AR9331_HDR_RESERVED_MASK; + + phdr[0] = cpu_to_le16(hdr); + + return skb; +} + +static struct sk_buff *ar9331_tag_rcv(struct sk_buff *skb, + struct net_device *ndev, + struct packet_type *pt) +{ + u8 ver, port; + u16 hdr; + + if (unlikely(!pskb_may_pull(skb, AR9331_HDR_LEN))) + return NULL; + + hdr = le16_to_cpu(*(__le16 *)skb_mac_header(skb)); + + ver = FIELD_GET(AR9331_HDR_VERSION_MASK, hdr); + if (unlikely(ver != AR9331_HDR_VERSION)) { + netdev_warn_once(ndev, "%s:%i wrong header version 0x%2x\n", + __func__, __LINE__, hdr); + return NULL; + } + + if (unlikely(hdr & AR9331_HDR_FROM_CPU)) { + netdev_warn_once(ndev, "%s:%i packet should not be from cpu 0x%2x\n", + __func__, __LINE__, hdr); + return NULL; + } + + skb_pull_rcsum(skb, AR9331_HDR_LEN); + + /* Get source port information */ + port = FIELD_GET(AR9331_HDR_PORT_NUM_MASK, hdr); + + skb->dev = dsa_master_find_slave(ndev, 0, port); + if (!skb->dev) + return NULL; + + return skb; +} + +static const struct dsa_device_ops ar9331_netdev_ops = { + .name = "ar9331", + .proto = DSA_TAG_PROTO_AR9331, + .xmit = ar9331_tag_xmit, + .rcv = ar9331_tag_rcv, + .overhead = AR9331_HDR_LEN, +}; + +MODULE_LICENSE("GPL v2"); +MODULE_ALIAS_DSA_TAG_DRIVER(DSA_TAG_PROTO_AR9331); +module_dsa_tag_driver(ar9331_netdev_ops); -- cgit v1.2.3-71-gd317 From e1b5e598e5a51b453328879682b178b4acc15105 Mon Sep 17 00:00:00 2001 From: John Rutherford Date: Thu, 19 Dec 2019 16:03:57 +1100 Subject: tipc: make legacy address flag readable over netlink To enable iproute2/tipc to generate backwards compatible printouts and validate command parameters for nodes using a node address, it needs to be able to read the legacy address flag from the kernel. The legacy address flag records the way in which the node identity was originally specified. The legacy address flag is requested by the netlink message TIPC_NL_ADDR_LEGACY_GET. If the flag is set the attribute TIPC_NLA_NET_ADDR_LEGACY is set in the return message. Signed-off-by: John Rutherford Acked-by: Jon Maloy Signed-off-by: David S. 
Miller --- include/uapi/linux/tipc_netlink.h | 2 ++ net/tipc/net.c | 56 +++++++++++++++++++++++++++++++++++++++ net/tipc/net.h | 1 + net/tipc/netlink.c | 6 +++++ 4 files changed, 65 insertions(+) (limited to 'include') diff --git a/include/uapi/linux/tipc_netlink.h b/include/uapi/linux/tipc_netlink.h index 6c2194ab745b..dc0d23a50e69 100644 --- a/include/uapi/linux/tipc_netlink.h +++ b/include/uapi/linux/tipc_netlink.h @@ -65,6 +65,7 @@ enum { TIPC_NL_UDP_GET_REMOTEIP, TIPC_NL_KEY_SET, TIPC_NL_KEY_FLUSH, + TIPC_NL_ADDR_LEGACY_GET, __TIPC_NL_CMD_MAX, TIPC_NL_CMD_MAX = __TIPC_NL_CMD_MAX - 1 @@ -176,6 +177,7 @@ enum { TIPC_NLA_NET_ADDR, /* u32 */ TIPC_NLA_NET_NODEID, /* u64 */ TIPC_NLA_NET_NODEID_W1, /* u64 */ + TIPC_NLA_NET_ADDR_LEGACY, /* flag */ __TIPC_NLA_NET_MAX, TIPC_NLA_NET_MAX = __TIPC_NLA_NET_MAX - 1 diff --git a/net/tipc/net.c b/net/tipc/net.c index 2de3cec9929d..85400e4242de 100644 --- a/net/tipc/net.c +++ b/net/tipc/net.c @@ -302,3 +302,59 @@ int tipc_nl_net_set(struct sk_buff *skb, struct genl_info *info) return err; } + +static int __tipc_nl_addr_legacy_get(struct net *net, struct tipc_nl_msg *msg) +{ + struct tipc_net *tn = tipc_net(net); + struct nlattr *attrs; + void *hdr; + + hdr = genlmsg_put(msg->skb, msg->portid, msg->seq, &tipc_genl_family, + 0, TIPC_NL_ADDR_LEGACY_GET); + if (!hdr) + return -EMSGSIZE; + + attrs = nla_nest_start(msg->skb, TIPC_NLA_NET); + if (!attrs) + goto msg_full; + + if (tn->legacy_addr_format) + if (nla_put_flag(msg->skb, TIPC_NLA_NET_ADDR_LEGACY)) + goto attr_msg_full; + + nla_nest_end(msg->skb, attrs); + genlmsg_end(msg->skb, hdr); + + return 0; + +attr_msg_full: + nla_nest_cancel(msg->skb, attrs); +msg_full: + genlmsg_cancel(msg->skb, hdr); + + return -EMSGSIZE; +} + +int tipc_nl_net_addr_legacy_get(struct sk_buff *skb, struct genl_info *info) +{ + struct net *net = sock_net(skb->sk); + struct tipc_nl_msg msg; + struct sk_buff *rep; + int err; + + rep = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); + if (!rep) + return -ENOMEM; + + msg.skb = rep; + msg.portid = info->snd_portid; + msg.seq = info->snd_seq; + + err = __tipc_nl_addr_legacy_get(net, &msg); + if (err) { + nlmsg_free(msg.skb); + return err; + } + + return genlmsg_reply(msg.skb, info); +} diff --git a/net/tipc/net.h b/net/tipc/net.h index b7f2e364eb99..6740d97c706e 100644 --- a/net/tipc/net.h +++ b/net/tipc/net.h @@ -47,5 +47,6 @@ void tipc_net_stop(struct net *net); int tipc_nl_net_dump(struct sk_buff *skb, struct netlink_callback *cb); int tipc_nl_net_set(struct sk_buff *skb, struct genl_info *info); int __tipc_nl_net_set(struct sk_buff *skb, struct genl_info *info); +int tipc_nl_net_addr_legacy_get(struct sk_buff *skb, struct genl_info *info); #endif diff --git a/net/tipc/netlink.c b/net/tipc/netlink.c index e53231bd23b4..7c35094c20b8 100644 --- a/net/tipc/netlink.c +++ b/net/tipc/netlink.c @@ -83,6 +83,7 @@ const struct nla_policy tipc_nl_net_policy[TIPC_NLA_NET_MAX + 1] = { [TIPC_NLA_NET_ADDR] = { .type = NLA_U32 }, [TIPC_NLA_NET_NODEID] = { .type = NLA_U64 }, [TIPC_NLA_NET_NODEID_W1] = { .type = NLA_U64 }, + [TIPC_NLA_NET_ADDR_LEGACY] = { .type = NLA_FLAG } }; const struct nla_policy tipc_nl_link_policy[TIPC_NLA_LINK_MAX + 1] = { @@ -273,6 +274,11 @@ static const struct genl_ops tipc_genl_v2_ops[] = { .doit = tipc_nl_node_flush_key, }, #endif + { + .cmd = TIPC_NL_ADDR_LEGACY_GET, + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP, + .doit = tipc_nl_net_addr_legacy_get, + }, }; struct genl_family tipc_genl_family __ro_after_init = { -- cgit v1.2.3-71-gd317 From 
f66b53fdbb22ced1a323b22b9de84a61aacd8d18 Mon Sep 17 00:00:00 2001 From: Martin Varghese Date: Sat, 21 Dec 2019 08:50:46 +0530 Subject: openvswitch: New MPLS actions for layer 2 tunnelling The existing PUSH MPLS action inserts an MPLS header between the ethernet header and the IP header. While this behaviour is fine for L3 VPN, where an IP packet is encapsulated inside an MPLS tunnel, it does not satisfy the L2 VPN (layer 2 tunnelling) requirements. In L2 VPN the MPLS header should encapsulate the ethernet packet. The new MPLS action ADD_MPLS inserts the MPLS header at the start of the packet or at the start of the L3 header, depending on the value of the L3 tunnel flag in the ADD_MPLS arguments. The POP_MPLS action is extended to support ethertype 0x6558. Signed-off-by: Martin Varghese Acked-by: Pravin B Shelar Signed-off-by: David S. Miller --- include/uapi/linux/openvswitch.h | 31 +++++++++++++++++++++++++++++++ net/openvswitch/actions.c | 30 ++++++++++++++++++++++++------ net/openvswitch/flow_netlink.c | 34 ++++++++++++++++++++++++++++++++++ 3 files changed, 89 insertions(+), 6 deletions(-) (limited to 'include') diff --git a/include/uapi/linux/openvswitch.h b/include/uapi/linux/openvswitch.h index a87b44cd5590..ae2bff14e7e1 100644 --- a/include/uapi/linux/openvswitch.h +++ b/include/uapi/linux/openvswitch.h @@ -672,6 +672,32 @@ struct ovs_action_push_mpls { __be16 mpls_ethertype; /* Either %ETH_P_MPLS_UC or %ETH_P_MPLS_MC */ }; +/** + * struct ovs_action_add_mpls - %OVS_ACTION_ATTR_ADD_MPLS action + * argument. + * @mpls_lse: MPLS label stack entry to push. + * @mpls_ethertype: Ethertype to set in the encapsulating ethernet frame. + * @tun_flags: MPLS tunnel attributes. + * + * The only values @mpls_ethertype should ever be given are %ETH_P_MPLS_UC and + * %ETH_P_MPLS_MC, indicating MPLS unicast or multicast. Other are rejected. + */ +struct ovs_action_add_mpls { + __be32 mpls_lse; + __be16 mpls_ethertype; /* Either %ETH_P_MPLS_UC or %ETH_P_MPLS_MC */ + __u16 tun_flags; +}; + +#define OVS_MPLS_L3_TUNNEL_FLAG_MASK (1 << 0) /* Flag to specify the place of * insertion of MPLS header. * When false, the MPLS header * will be inserted at the start * of the packet. * When true, the MPLS header * will be inserted at the start * of the l3 header. */ + /** * struct ovs_action_push_vlan - %OVS_ACTION_ATTR_PUSH_VLAN action argument. * @vlan_tpid: Tag protocol identifier (TPID) to push. @@ -892,6 +918,10 @@ struct check_pkt_len_arg { * @OVS_ACTION_ATTR_CHECK_PKT_LEN: Check the packet length and execute a set * of actions if greater than the specified packet length, else execute * another set of actions. + * @OVS_ACTION_ATTR_ADD_MPLS: Push a new MPLS label stack entry at the + * start of the packet or at the start of the l3 header depending on the value + * of l3 tunnel flag in the tun_flags field of OVS_ACTION_ATTR_ADD_MPLS + * argument. * * Only a single header can be set with a single %OVS_ACTION_ATTR_SET. Not all * fields within a header are modifiable, e.g. the IPv4 protocol and fragment @@ -927,6 +957,7 @@ enum ovs_action_attr { OVS_ACTION_ATTR_METER, /* u32 meter ID. */ OVS_ACTION_ATTR_CLONE, /* Nested OVS_CLONE_ATTR_*. */ OVS_ACTION_ATTR_CHECK_PKT_LEN, /* Nested OVS_CHECK_PKT_LEN_ATTR_*. */ + OVS_ACTION_ATTR_ADD_MPLS, /* struct ovs_action_add_mpls. */ __OVS_ACTION_ATTR_MAX, /* Nothing past this will be accepted * from userspace.
*/ diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c index 4c8395462303..7fbfe2adfffa 100644 --- a/net/openvswitch/actions.c +++ b/net/openvswitch/actions.c @@ -161,16 +161,17 @@ static int do_execute_actions(struct datapath *dp, struct sk_buff *skb, const struct nlattr *attr, int len); static int push_mpls(struct sk_buff *skb, struct sw_flow_key *key, - const struct ovs_action_push_mpls *mpls) + __be32 mpls_lse, __be16 mpls_ethertype, __u16 mac_len) { int err; - err = skb_mpls_push(skb, mpls->mpls_lse, mpls->mpls_ethertype, - skb->mac_len, - ovs_key_mac_proto(key) == MAC_PROTO_ETHERNET); + err = skb_mpls_push(skb, mpls_lse, mpls_ethertype, mac_len, !!mac_len); if (err) return err; + if (!mac_len) + key->mac_proto = MAC_PROTO_NONE; + invalidate_flow_key(key); return 0; } @@ -185,6 +186,9 @@ static int pop_mpls(struct sk_buff *skb, struct sw_flow_key *key, if (err) return err; + if (ethertype == htons(ETH_P_TEB)) + key->mac_proto = MAC_PROTO_ETHERNET; + invalidate_flow_key(key); return 0; } @@ -1229,10 +1233,24 @@ static int do_execute_actions(struct datapath *dp, struct sk_buff *skb, execute_hash(skb, key, a); break; - case OVS_ACTION_ATTR_PUSH_MPLS: - err = push_mpls(skb, key, nla_data(a)); + case OVS_ACTION_ATTR_PUSH_MPLS: { + struct ovs_action_push_mpls *mpls = nla_data(a); + + err = push_mpls(skb, key, mpls->mpls_lse, + mpls->mpls_ethertype, skb->mac_len); break; + } + case OVS_ACTION_ATTR_ADD_MPLS: { + struct ovs_action_add_mpls *mpls = nla_data(a); + __u16 mac_len = 0; + + if (mpls->tun_flags & OVS_MPLS_L3_TUNNEL_FLAG_MASK) + mac_len = skb->mac_len; + err = push_mpls(skb, key, mpls->mpls_lse, + mpls->mpls_ethertype, mac_len); + break; + } case OVS_ACTION_ATTR_POP_MPLS: err = pop_mpls(skb, key, nla_get_be16(a)); break; diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c index 65c2e3458ff5..7da4230627f5 100644 --- a/net/openvswitch/flow_netlink.c +++ b/net/openvswitch/flow_netlink.c @@ -79,6 +79,7 @@ static bool actions_may_change_flow(const struct nlattr *actions) case OVS_ACTION_ATTR_SET_MASKED: case OVS_ACTION_ATTR_METER: case OVS_ACTION_ATTR_CHECK_PKT_LEN: + case OVS_ACTION_ATTR_ADD_MPLS: default: return true; } @@ -3005,6 +3006,7 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr, [OVS_ACTION_ATTR_METER] = sizeof(u32), [OVS_ACTION_ATTR_CLONE] = (u32)-1, [OVS_ACTION_ATTR_CHECK_PKT_LEN] = (u32)-1, + [OVS_ACTION_ATTR_ADD_MPLS] = sizeof(struct ovs_action_add_mpls), }; const struct ovs_action_push_vlan *vlan; int type = nla_type(a); @@ -3072,6 +3074,33 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr, case OVS_ACTION_ATTR_RECIRC: break; + case OVS_ACTION_ATTR_ADD_MPLS: { + const struct ovs_action_add_mpls *mpls = nla_data(a); + + if (!eth_p_mpls(mpls->mpls_ethertype)) + return -EINVAL; + + if (mpls->tun_flags & OVS_MPLS_L3_TUNNEL_FLAG_MASK) { + if (vlan_tci & htons(VLAN_CFI_MASK) || + (eth_type != htons(ETH_P_IP) && + eth_type != htons(ETH_P_IPV6) && + eth_type != htons(ETH_P_ARP) && + eth_type != htons(ETH_P_RARP) && + !eth_p_mpls(eth_type))) + return -EINVAL; + mpls_label_count++; + } else { + if (mac_proto == MAC_PROTO_ETHERNET) { + mpls_label_count = 1; + mac_proto = MAC_PROTO_NONE; + } else { + mpls_label_count++; + } + } + eth_type = mpls->mpls_ethertype; + break; + } + case OVS_ACTION_ATTR_PUSH_MPLS: { const struct ovs_action_push_mpls *mpls = nla_data(a); @@ -3109,6 +3138,11 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr, * recirculation. 
*/ proto = nla_get_be16(a); + + if (proto == htons(ETH_P_TEB) && + mac_proto != MAC_PROTO_NONE) + return -EINVAL; + mpls_label_count--; if (!eth_p_mpls(proto) || !mpls_label_count) -- cgit v1.2.3-71-gd317 From 6b722237b656d045c0b9bab9966a5e46604258ba Mon Sep 17 00:00:00 2001 From: Ido Schimmel Date: Mon, 23 Dec 2019 15:28:12 +0200 Subject: net: fib_notifier: Add temporary events to the FIB notification chain Subsequent patches are going to simplify the IPv6 route offload API, which will only use three events - replace, delete and append. Introduce a temporary version of replace and delete in order to make the conversion easier to review. Note that append does not need a temporary version, as it is currently not used. Signed-off-by: Ido Schimmel Reviewed-by: Jiri Pirko Reviewed-by: David Ahern Signed-off-by: David S. Miller --- include/net/fib_notifier.h | 2 ++ 1 file changed, 2 insertions(+) (limited to 'include') diff --git a/include/net/fib_notifier.h b/include/net/fib_notifier.h index 6d59221ff05a..b3c54325caec 100644 --- a/include/net/fib_notifier.h +++ b/include/net/fib_notifier.h @@ -23,6 +23,8 @@ enum fib_event_type { FIB_EVENT_NH_DEL, FIB_EVENT_VIF_ADD, FIB_EVENT_VIF_DEL, + FIB_EVENT_ENTRY_REPLACE_TMP, + FIB_EVENT_ENTRY_DEL_TMP, }; struct fib_notifier_ops { -- cgit v1.2.3-71-gd317 From d2f0c9b11410f9c6a07c126f8a215b0b81cdcf6c Mon Sep 17 00:00:00 2001 From: Ido Schimmel Date: Mon, 23 Dec 2019 15:28:17 +0200 Subject: ipv6: Handle route deletion notification For the purpose of route offload, when a single route is deleted, it is only of interest if it is the first route in the node or if it is sibling to such a route. In the first case, distinguish between several possibilities: 1. Route is the last route in the node. Emit a delete notification 2. Route is followed by a non-multipath route. Emit a replace notification for the non-multipath route. 3. Route is followed by a multipath route. Emit a replace notification for the multipath route. In the second case, only emit a delete notification to ensure the route is no longer used as a valid nexthop. Signed-off-by: Ido Schimmel Reviewed-by: Jiri Pirko Reviewed-by: David Ahern Signed-off-by: David S. 
Miller --- include/net/ip6_fib.h | 1 + net/ipv6/ip6_fib.c | 44 +++++++++++++++++++++++++++++++++++++++++++- 2 files changed, 44 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h index f1535f172935..b579faea41e9 100644 --- a/include/net/ip6_fib.h +++ b/include/net/ip6_fib.h @@ -487,6 +487,7 @@ int call_fib6_multipath_entry_notifiers(struct net *net, struct fib6_info *rt, unsigned int nsiblings, struct netlink_ext_ack *extack); +int call_fib6_entry_notifiers_replace(struct net *net, struct fib6_info *rt); void fib6_rt_update(struct net *net, struct fib6_info *rt, struct nl_info *info); void inet6_rt_notify(int event, struct fib6_info *rt, struct nl_info *info, diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c index 51cf848e38f0..67ddee539f77 100644 --- a/net/ipv6/ip6_fib.c +++ b/net/ipv6/ip6_fib.c @@ -415,6 +415,18 @@ int call_fib6_multipath_entry_notifiers(struct net *net, return call_fib6_notifiers(net, event_type, &info.info); } +int call_fib6_entry_notifiers_replace(struct net *net, struct fib6_info *rt) +{ + struct fib6_entry_notifier_info info = { + .rt = rt, + .nsiblings = rt->fib6_nsiblings, + }; + + rt->fib6_table->fib_seq++; + return call_fib6_notifiers(net, FIB_EVENT_ENTRY_REPLACE_TMP, + &info.info); +} + struct fib6_dump_arg { struct net *net; struct notifier_block *nb; @@ -1910,13 +1922,29 @@ static struct fib6_node *fib6_repair_tree(struct net *net, static void fib6_del_route(struct fib6_table *table, struct fib6_node *fn, struct fib6_info __rcu **rtp, struct nl_info *info) { + struct fib6_info *leaf, *replace_rt = NULL; struct fib6_walker *w; struct fib6_info *rt = rcu_dereference_protected(*rtp, lockdep_is_held(&table->tb6_lock)); struct net *net = info->nl_net; + bool notify_del = false; RT6_TRACE("fib6_del_route\n"); + /* If the deleted route is the first in the node and it is not part of + * a multipath route, then we need to replace it with the next route + * in the node, if exists. + */ + leaf = rcu_dereference_protected(fn->leaf, + lockdep_is_held(&table->tb6_lock)); + if (leaf == rt && !rt->fib6_nsiblings) { + if (rcu_access_pointer(rt->fib6_next)) + replace_rt = rcu_dereference_protected(rt->fib6_next, + lockdep_is_held(&table->tb6_lock)); + else + notify_del = true; + } + /* Unlink it */ *rtp = rt->fib6_next; rt->fib6_node = NULL; @@ -1934,6 +1962,14 @@ static void fib6_del_route(struct fib6_table *table, struct fib6_node *fn, if (rt->fib6_nsiblings) { struct fib6_info *sibling, *next_sibling; + /* The route is deleted from a multipath route. If this + * multipath route is the first route in the node, then we need + * to emit a delete notification. Otherwise, we need to skip + * the notification. 
+ */ + if (rt->fib6_metric == leaf->fib6_metric && + rt6_qualify_for_ecmp(leaf)) + notify_del = true; list_for_each_entry_safe(sibling, next_sibling, &rt->fib6_siblings, fib6_siblings) sibling->fib6_nsiblings--; @@ -1969,8 +2005,14 @@ static void fib6_del_route(struct fib6_table *table, struct fib6_node *fn, fib6_purge_rt(rt, fn, net); - if (!info->skip_notify_kernel) + if (!info->skip_notify_kernel) { + if (notify_del) + call_fib6_entry_notifiers(net, FIB_EVENT_ENTRY_DEL_TMP, + rt, NULL); + else if (replace_rt) + call_fib6_entry_notifiers_replace(net, replace_rt); call_fib6_entry_notifiers(net, FIB_EVENT_ENTRY_DEL, rt, NULL); + } if (!info->skip_notify) inet6_rt_notify(RTM_DELROUTE, rt, info, 0); -- cgit v1.2.3-71-gd317 From caafb2509fac1432849650826953dd88b7cbe374 Mon Sep 17 00:00:00 2001 From: Ido Schimmel Date: Mon, 23 Dec 2019 15:28:20 +0200 Subject: ipv6: Remove old route notifications and convert listeners Now that mlxsw is converted to use the new FIB notifications it is possible to delete the old ones and use the new replace / append / delete notifications. Signed-off-by: Ido Schimmel Reviewed-by: Jiri Pirko Reviewed-by: David Ahern Signed-off-by: David S. Miller --- .../net/ethernet/mellanox/mlxsw/spectrum_router.c | 9 ++-- drivers/net/netdevsim/fib.c | 1 - include/net/fib_notifier.h | 2 - net/ipv6/ip6_fib.c | 61 ++++++---------------- net/ipv6/route.c | 18 +------ 5 files changed, 21 insertions(+), 70 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index 295cdcb1c4c0..f62e8d67348c 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -5966,7 +5966,7 @@ static void mlxsw_sp_router_fib6_event_work(struct work_struct *work) mlxsw_sp_span_respin(mlxsw_sp); switch (fib_work->event) { - case FIB_EVENT_ENTRY_REPLACE_TMP: + case FIB_EVENT_ENTRY_REPLACE: err = mlxsw_sp_router_fib6_replace(mlxsw_sp, fib_work->fib6_work.rt_arr, fib_work->fib6_work.nrt6); @@ -5982,7 +5982,7 @@ static void mlxsw_sp_router_fib6_event_work(struct work_struct *work) mlxsw_sp_router_fib_abort(mlxsw_sp); mlxsw_sp_router_fib6_work_fini(&fib_work->fib6_work); break; - case FIB_EVENT_ENTRY_DEL_TMP: + case FIB_EVENT_ENTRY_DEL: mlxsw_sp_router_fib6_del(mlxsw_sp, fib_work->fib6_work.rt_arr, fib_work->fib6_work.nrt6); @@ -6068,9 +6068,9 @@ static int mlxsw_sp_router_fib6_event(struct mlxsw_sp_fib_event_work *fib_work, int err; switch (fib_work->event) { - case FIB_EVENT_ENTRY_REPLACE_TMP: /* fall through */ + case FIB_EVENT_ENTRY_REPLACE: /* fall through */ case FIB_EVENT_ENTRY_APPEND: /* fall through */ - case FIB_EVENT_ENTRY_DEL_TMP: + case FIB_EVENT_ENTRY_DEL: fen6_info = container_of(info, struct fib6_entry_notifier_info, info); err = mlxsw_sp_router_fib6_work_init(&fib_work->fib6_work, @@ -6174,7 +6174,6 @@ static int mlxsw_sp_router_fib_event(struct notifier_block *nb, return notifier_from_errno(err); case FIB_EVENT_ENTRY_ADD: /* fall through */ case FIB_EVENT_ENTRY_REPLACE: /* fall through */ - case FIB_EVENT_ENTRY_REPLACE_TMP: /* fall through */ case FIB_EVENT_ENTRY_APPEND: if (router->aborted) { NL_SET_ERR_MSG_MOD(info->extack, "FIB offload was aborted. 
Not configuring route"); diff --git a/drivers/net/netdevsim/fib.c b/drivers/net/netdevsim/fib.c index 4e02a4231fcb..b5df308b4e33 100644 --- a/drivers/net/netdevsim/fib.c +++ b/drivers/net/netdevsim/fib.c @@ -178,7 +178,6 @@ static int nsim_fib_event_nb(struct notifier_block *nb, unsigned long event, break; case FIB_EVENT_ENTRY_REPLACE: /* fall through */ - case FIB_EVENT_ENTRY_ADD: /* fall through */ case FIB_EVENT_ENTRY_DEL: err = nsim_fib_event(data, info, event != FIB_EVENT_ENTRY_DEL); break; diff --git a/include/net/fib_notifier.h b/include/net/fib_notifier.h index b3c54325caec..6d59221ff05a 100644 --- a/include/net/fib_notifier.h +++ b/include/net/fib_notifier.h @@ -23,8 +23,6 @@ enum fib_event_type { FIB_EVENT_NH_DEL, FIB_EVENT_VIF_ADD, FIB_EVENT_VIF_DEL, - FIB_EVENT_ENTRY_REPLACE_TMP, - FIB_EVENT_ENTRY_DEL_TMP, }; struct fib_notifier_ops { diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c index 67ddee539f77..b1e9a10e1133 100644 --- a/net/ipv6/ip6_fib.c +++ b/net/ipv6/ip6_fib.c @@ -423,8 +423,7 @@ int call_fib6_entry_notifiers_replace(struct net *net, struct fib6_info *rt) }; rt->fib6_table->fib_seq++; - return call_fib6_notifiers(net, FIB_EVENT_ENTRY_REPLACE_TMP, - &info.info); + return call_fib6_notifiers(net, FIB_EVENT_ENTRY_REPLACE, &info.info); } struct fib6_dump_arg { @@ -435,15 +434,7 @@ struct fib6_dump_arg { static int fib6_rt_dump(struct fib6_info *rt, struct fib6_dump_arg *arg) { - if (rt == arg->net->ipv6.fib6_null_entry) - return 0; - return call_fib6_entry_notifier(arg->nb, FIB_EVENT_ENTRY_ADD, - rt, arg->extack); -} - -static int fib6_rt_dump_tmp(struct fib6_info *rt, struct fib6_dump_arg *arg) -{ - enum fib_event_type fib_event = FIB_EVENT_ENTRY_REPLACE_TMP; + enum fib_event_type fib_event = FIB_EVENT_ENTRY_REPLACE; int err; if (!rt || rt == arg->net->ipv6.fib6_null_entry) @@ -463,19 +454,9 @@ static int fib6_rt_dump_tmp(struct fib6_info *rt, struct fib6_dump_arg *arg) static int fib6_node_dump(struct fib6_walker *w) { - struct fib6_info *rt; - int err = 0; - - err = fib6_rt_dump_tmp(w->leaf, w->args); - if (err) - goto out; + int err; - for_each_fib6_walker_rt(w) { - err = fib6_rt_dump(rt, w->args); - if (err) - break; - } -out: + err = fib6_rt_dump(w->leaf, w->args); w->leaf = NULL; return err; } @@ -1220,25 +1201,21 @@ next_iter: add: nlflags |= NLM_F_CREATE; - if (!info->skip_notify_kernel) { + /* The route should only be notified if it is the first + * route in the node or if it is added as a sibling + * route to the first route in the node. + */ + if (!info->skip_notify_kernel && + (notify_sibling_rt || ins == &fn->leaf)) { enum fib_event_type fib_event; if (notify_sibling_rt) fib_event = FIB_EVENT_ENTRY_APPEND; else - fib_event = FIB_EVENT_ENTRY_REPLACE_TMP; - /* The route should only be notified if it is the first - * route in the node or if it is added as a sibling - * route to the first route in the node. 
- */ - if (notify_sibling_rt || ins == &fn->leaf) - err = call_fib6_entry_notifiers(info->nl_net, - fib_event, rt, - extack); - + fib_event = FIB_EVENT_ENTRY_REPLACE; err = call_fib6_entry_notifiers(info->nl_net, - FIB_EVENT_ENTRY_ADD, - rt, extack); + fib_event, rt, + extack); if (err) { struct fib6_info *sibling, *next_sibling; @@ -1282,14 +1259,7 @@ add: return -ENOENT; } - if (!info->skip_notify_kernel) { - enum fib_event_type fib_event; - - fib_event = FIB_EVENT_ENTRY_REPLACE_TMP; - if (ins == &fn->leaf) - err = call_fib6_entry_notifiers(info->nl_net, - fib_event, rt, - extack); + if (!info->skip_notify_kernel && ins == &fn->leaf) { err = call_fib6_entry_notifiers(info->nl_net, FIB_EVENT_ENTRY_REPLACE, rt, extack); @@ -2007,11 +1977,10 @@ static void fib6_del_route(struct fib6_table *table, struct fib6_node *fn, if (!info->skip_notify_kernel) { if (notify_del) - call_fib6_entry_notifiers(net, FIB_EVENT_ENTRY_DEL_TMP, + call_fib6_entry_notifiers(net, FIB_EVENT_ENTRY_DEL, rt, NULL); else if (replace_rt) call_fib6_entry_notifiers_replace(net, replace_rt); - call_fib6_entry_notifiers(net, FIB_EVENT_ENTRY_DEL, rt, NULL); } if (!info->skip_notify) inet6_rt_notify(RTM_DELROUTE, rt, info, 0); diff --git a/net/ipv6/route.c b/net/ipv6/route.c index 646716a47cc9..4b8659e077d3 100644 --- a/net/ipv6/route.c +++ b/net/ipv6/route.c @@ -3787,15 +3787,10 @@ static int __ip6_del_rt_siblings(struct fib6_info *rt, struct fib6_config *cfg) replace_rt); else call_fib6_multipath_entry_notifiers(net, - FIB_EVENT_ENTRY_DEL_TMP, + FIB_EVENT_ENTRY_DEL, rt, rt->fib6_nsiblings, NULL); } - call_fib6_multipath_entry_notifiers(net, - FIB_EVENT_ENTRY_DEL, - rt, - rt->fib6_nsiblings, - NULL); list_for_each_entry_safe(sibling, next_sibling, &rt->fib6_siblings, fib6_siblings) { @@ -5074,7 +5069,6 @@ static int ip6_route_multipath_add(struct fib6_config *cfg, { struct fib6_info *rt_notif = NULL, *rt_last = NULL; struct nl_info *info = &cfg->fc_nlinfo; - enum fib_event_type event_type; struct fib6_config r_cfg; struct rtnexthop *rtnh; struct fib6_info *rt; @@ -5210,7 +5204,7 @@ static int ip6_route_multipath_add(struct fib6_config *cfg, if (rt_notif->fib6_nsiblings != nhn - 1) fib_event = FIB_EVENT_ENTRY_APPEND; else - fib_event = FIB_EVENT_ENTRY_REPLACE_TMP; + fib_event = FIB_EVENT_ENTRY_REPLACE; err = call_fib6_multipath_entry_notifiers(info->nl_net, fib_event, rt_notif, @@ -5221,14 +5215,6 @@ static int ip6_route_multipath_add(struct fib6_config *cfg, goto add_errout; } } - event_type = replace ? FIB_EVENT_ENTRY_REPLACE : FIB_EVENT_ENTRY_ADD; - err = call_fib6_multipath_entry_notifiers(info->nl_net, event_type, - rt_notif, nhn - 1, extack); - if (err) { - /* Delete all the siblings that were just added */ - err_nh = NULL; - goto add_errout; - } /* success ... tell user about new route */ ip6_route_mpath_notify(rt_notif, rt_last, info, nlflags); -- cgit v1.2.3-71-gd317 From 0e5dafc8a6e540c0145b61545c557c43be70af10 Mon Sep 17 00:00:00 2001 From: Richard Cochran Date: Wed, 25 Dec 2019 18:16:09 -0800 Subject: net: phy: Introduce helper functions for time stamping support. Some parts of the networking stack and at least one driver test fields within the 'struct phy_device' in order to query time stamping capabilities and to invoke time stamping methods. This patch adds a functional interface around the time stamping fields. This will allow insulating the callers from future changes to the details of the time stamping implementation.
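As a rough illustration of how callers are meant to use these helpers, a MAC driver's ethtool get_ts_info path might look like the sketch below; foo_get_ts_info() and its software-only fallback are hypothetical and not part of this patch:

/* Hypothetical sketch: prefer the PHY's time stamping capabilities via
 * the new helpers instead of dereferencing phydev->drv by hand. */
static int foo_get_ts_info(struct net_device *ndev,
			   struct ethtool_ts_info *info)
{
	struct phy_device *phydev = ndev->phydev;

	/* phy_has_tsinfo() also copes with a missing PHY or driver. */
	if (phy_has_tsinfo(phydev))
		return phy_ts_info(phydev, info);

	/* Otherwise report software time stamping only. */
	info->so_timestamping = SOF_TIMESTAMPING_TX_SOFTWARE |
				SOF_TIMESTAMPING_RX_SOFTWARE |
				SOF_TIMESTAMPING_SOFTWARE;
	info->phc_index = -1;
	return 0;
}
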
Signed-off-by: Richard Cochran Reviewed-by: Andrew Lunn Reviewed-by: Florian Fainelli Signed-off-by: David S. Miller --- include/linux/phy.h | 60 +++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 60 insertions(+) (limited to 'include') diff --git a/include/linux/phy.h b/include/linux/phy.h index 7d530b3f8855..0248f5e9939d 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -936,6 +936,66 @@ static inline bool phy_polling_mode(struct phy_device *phydev) return phydev->irq == PHY_POLL; } +/** + * phy_has_hwtstamp - Tests whether a PHY time stamp configuration. + * @phydev: the phy_device struct + */ +static inline bool phy_has_hwtstamp(struct phy_device *phydev) +{ + return phydev && phydev->drv && phydev->drv->hwtstamp; +} + +/** + * phy_has_rxtstamp - Tests whether a PHY supports receive time stamping. + * @phydev: the phy_device struct + */ +static inline bool phy_has_rxtstamp(struct phy_device *phydev) +{ + return phydev && phydev->drv && phydev->drv->rxtstamp; +} + +/** + * phy_has_tsinfo - Tests whether a PHY reports time stamping and/or + * PTP hardware clock capabilities. + * @phydev: the phy_device struct + */ +static inline bool phy_has_tsinfo(struct phy_device *phydev) +{ + return phydev && phydev->drv && phydev->drv->ts_info; +} + +/** + * phy_has_txtstamp - Tests whether a PHY supports transmit time stamping. + * @phydev: the phy_device struct + */ +static inline bool phy_has_txtstamp(struct phy_device *phydev) +{ + return phydev && phydev->drv && phydev->drv->txtstamp; +} + +static inline int phy_hwtstamp(struct phy_device *phydev, struct ifreq *ifr) +{ + return phydev->drv->hwtstamp(phydev, ifr); +} + +static inline bool phy_rxtstamp(struct phy_device *phydev, struct sk_buff *skb, + int type) +{ + return phydev->drv->rxtstamp(phydev, skb, type); +} + +static inline int phy_ts_info(struct phy_device *phydev, + struct ethtool_ts_info *tsinfo) +{ + return phydev->drv->ts_info(phydev, tsinfo); +} + +static inline void phy_txtstamp(struct phy_device *phydev, struct sk_buff *skb, + int type) +{ + phydev->drv->txtstamp(phydev, skb, type); +} + /** * phy_is_internal - Convenience function for testing if a PHY is internal * @phydev: the phy_device struct -- cgit v1.2.3-71-gd317 From 4715f65ffa0520af0680dbfbedbe349f175adaf4 Mon Sep 17 00:00:00 2001 From: Richard Cochran Date: Wed, 25 Dec 2019 18:16:15 -0800 Subject: net: Introduce a new MII time stamping interface. Currently the stack supports time stamping in PHY devices. However, there are newer, non-PHY devices that can snoop an MII bus and provide time stamps. In order to support such devices, this patch introduces a new interface to be used by both PHY and non-PHY devices. In addition, the one and only user of the old PHY time stamping API is converted to the new interface. Signed-off-by: Richard Cochran Reviewed-by: Andrew Lunn Signed-off-by: David S. 
Miller --- drivers/net/phy/dp83640.c | 39 ++++++++++++++++----------- drivers/net/phy/phy.c | 4 +-- drivers/net/phy/phy_device.c | 2 ++ include/linux/mii_timestamper.h | 58 +++++++++++++++++++++++++++++++++++++++++ include/linux/phy.h | 41 +++++++---------------------- net/core/timestamping.c | 20 +++++++------- 6 files changed, 106 insertions(+), 58 deletions(-) create mode 100644 include/linux/mii_timestamper.h (limited to 'include') diff --git a/drivers/net/phy/dp83640.c b/drivers/net/phy/dp83640.c index b58abdb5491e..ac72a324fcd1 100644 --- a/drivers/net/phy/dp83640.c +++ b/drivers/net/phy/dp83640.c @@ -98,6 +98,7 @@ struct dp83640_private { struct list_head list; struct dp83640_clock *clock; struct phy_device *phydev; + struct mii_timestamper mii_ts; struct delayed_work ts_work; int hwts_tx_en; int hwts_rx_en; @@ -1229,9 +1230,10 @@ static int dp83640_config_intr(struct phy_device *phydev) } } -static int dp83640_hwtstamp(struct phy_device *phydev, struct ifreq *ifr) +static int dp83640_hwtstamp(struct mii_timestamper *mii_ts, struct ifreq *ifr) { - struct dp83640_private *dp83640 = phydev->priv; + struct dp83640_private *dp83640 = + container_of(mii_ts, struct dp83640_private, mii_ts); struct hwtstamp_config cfg; u16 txcfg0, rxcfg0; @@ -1307,8 +1309,8 @@ static int dp83640_hwtstamp(struct phy_device *phydev, struct ifreq *ifr) mutex_lock(&dp83640->clock->extreg_lock); - ext_write(0, phydev, PAGE5, PTP_TXCFG0, txcfg0); - ext_write(0, phydev, PAGE5, PTP_RXCFG0, rxcfg0); + ext_write(0, dp83640->phydev, PAGE5, PTP_TXCFG0, txcfg0); + ext_write(0, dp83640->phydev, PAGE5, PTP_RXCFG0, rxcfg0); mutex_unlock(&dp83640->clock->extreg_lock); @@ -1338,10 +1340,11 @@ static void rx_timestamp_work(struct work_struct *work) schedule_delayed_work(&dp83640->ts_work, SKB_TIMESTAMP_TIMEOUT); } -static bool dp83640_rxtstamp(struct phy_device *phydev, +static bool dp83640_rxtstamp(struct mii_timestamper *mii_ts, struct sk_buff *skb, int type) { - struct dp83640_private *dp83640 = phydev->priv; + struct dp83640_private *dp83640 = + container_of(mii_ts, struct dp83640_private, mii_ts); struct dp83640_skb_info *skb_info = (struct dp83640_skb_info *)skb->cb; struct list_head *this, *next; struct rxts *rxts; @@ -1387,11 +1390,12 @@ static bool dp83640_rxtstamp(struct phy_device *phydev, return true; } -static void dp83640_txtstamp(struct phy_device *phydev, +static void dp83640_txtstamp(struct mii_timestamper *mii_ts, struct sk_buff *skb, int type) { struct dp83640_skb_info *skb_info = (struct dp83640_skb_info *)skb->cb; - struct dp83640_private *dp83640 = phydev->priv; + struct dp83640_private *dp83640 = + container_of(mii_ts, struct dp83640_private, mii_ts); switch (dp83640->hwts_tx_en) { @@ -1414,9 +1418,11 @@ static void dp83640_txtstamp(struct phy_device *phydev, } } -static int dp83640_ts_info(struct phy_device *dev, struct ethtool_ts_info *info) +static int dp83640_ts_info(struct mii_timestamper *mii_ts, + struct ethtool_ts_info *info) { - struct dp83640_private *dp83640 = dev->priv; + struct dp83640_private *dp83640 = + container_of(mii_ts, struct dp83640_private, mii_ts); info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE | @@ -1454,13 +1460,18 @@ static int dp83640_probe(struct phy_device *phydev) goto no_memory; dp83640->phydev = phydev; - INIT_DELAYED_WORK(&dp83640->ts_work, rx_timestamp_work); + dp83640->mii_ts.rxtstamp = dp83640_rxtstamp; + dp83640->mii_ts.txtstamp = dp83640_txtstamp; + dp83640->mii_ts.hwtstamp = dp83640_hwtstamp; + dp83640->mii_ts.ts_info = dp83640_ts_info; + 
INIT_DELAYED_WORK(&dp83640->ts_work, rx_timestamp_work); INIT_LIST_HEAD(&dp83640->rxts); INIT_LIST_HEAD(&dp83640->rxpool); for (i = 0; i < MAX_RXTS; i++) list_add(&dp83640->rx_pool_data[i].list, &dp83640->rxpool); + phydev->mii_ts = &dp83640->mii_ts; phydev->priv = dp83640; spin_lock_init(&dp83640->rx_lock); @@ -1501,6 +1512,8 @@ static void dp83640_remove(struct phy_device *phydev) if (phydev->mdio.addr == BROADCAST_ADDR) return; + phydev->mii_ts = NULL; + enable_status_frames(phydev, false); cancel_delayed_work_sync(&dp83640->ts_work); @@ -1537,10 +1550,6 @@ static struct phy_driver dp83640_driver = { .config_init = dp83640_config_init, .ack_interrupt = dp83640_ack_interrupt, .config_intr = dp83640_config_intr, - .ts_info = dp83640_ts_info, - .hwtstamp = dp83640_hwtstamp, - .rxtstamp = dp83640_rxtstamp, - .txtstamp = dp83640_txtstamp, }; static int __init dp83640_init(void) diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c index 80be4d691e5b..541ed01496bf 100644 --- a/drivers/net/phy/phy.c +++ b/drivers/net/phy/phy.c @@ -422,8 +422,8 @@ int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd) return 0; case SIOCSHWTSTAMP: - if (phydev->drv && phydev->drv->hwtstamp) - return phydev->drv->hwtstamp(phydev, ifr); + if (phydev->mii_ts && phydev->mii_ts->hwtstamp) + return phydev->mii_ts->hwtstamp(phydev->mii_ts, ifr); /* fall through */ default: diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c index 94961e7592e4..e55119cd6c86 100644 --- a/drivers/net/phy/phy_device.c +++ b/drivers/net/phy/phy_device.c @@ -919,6 +919,8 @@ static void phy_link_change(struct phy_device *phydev, bool up, bool do_carrier) netif_carrier_off(netdev); } phydev->adjust_link(netdev); + if (phydev->mii_ts && phydev->mii_ts->link_state) + phydev->mii_ts->link_state(phydev->mii_ts, phydev); } /** diff --git a/include/linux/mii_timestamper.h b/include/linux/mii_timestamper.h new file mode 100644 index 000000000000..36002386029c --- /dev/null +++ b/include/linux/mii_timestamper.h @@ -0,0 +1,58 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Support for generic time stamping devices on MII buses. + * Copyright (C) 2018 Richard Cochran + */ +#ifndef _LINUX_MII_TIMESTAMPER_H +#define _LINUX_MII_TIMESTAMPER_H + +#include +#include +#include + +struct phy_device; + +/** + * struct mii_timestamper - Callback interface to MII time stamping devices. + * + * @rxtstamp: Requests a Rx timestamp for 'skb'. If the skb is accepted, + * the MII time stamping device promises to deliver it using + * netif_rx() as soon as a timestamp becomes available. One of + * the PTP_CLASS_ values is passed in 'type'. The function + * must return true if the skb is accepted for delivery. + * + * @txtstamp: Requests a Tx timestamp for 'skb'. The MII time stamping + * device promises to deliver it using skb_complete_tx_timestamp() + * as soon as a timestamp becomes available. One of the PTP_CLASS_ + * values is passed in 'type'. + * + * @hwtstamp: Handles SIOCSHWTSTAMP ioctl for hardware time stamping. + * + * @link_state: Allows the device to respond to changes in the link + * state. The caller invokes this function while holding + * the phy_device mutex. + * + * @ts_info: Handles ethtool queries for hardware time stamping. + * + * Drivers for PHY time stamping devices should embed their + * mii_timestamper within a private structure, obtaining a reference + * to it using container_of(). 
+ */ +struct mii_timestamper { + bool (*rxtstamp)(struct mii_timestamper *mii_ts, + struct sk_buff *skb, int type); + + void (*txtstamp)(struct mii_timestamper *mii_ts, + struct sk_buff *skb, int type); + + int (*hwtstamp)(struct mii_timestamper *mii_ts, + struct ifreq *ifreq); + + void (*link_state)(struct mii_timestamper *mii_ts, + struct phy_device *phydev); + + int (*ts_info)(struct mii_timestamper *mii_ts, + struct ethtool_ts_info *ts_info); +}; + +#endif diff --git a/include/linux/phy.h b/include/linux/phy.h index 0248f5e9939d..30e599c454db 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -17,6 +17,7 @@ #include #include #include +#include #include #include #include @@ -441,6 +442,7 @@ struct phy_device { struct sfp_bus *sfp_bus; struct phylink *phylink; struct net_device *attached_dev; + struct mii_timestamper *mii_ts; u8 mdix; u8 mdix_ctrl; @@ -546,29 +548,6 @@ struct phy_driver { */ int (*match_phy_device)(struct phy_device *phydev); - /* Handles ethtool queries for hardware time stamping. */ - int (*ts_info)(struct phy_device *phydev, struct ethtool_ts_info *ti); - - /* Handles SIOCSHWTSTAMP ioctl for hardware time stamping. */ - int (*hwtstamp)(struct phy_device *phydev, struct ifreq *ifr); - - /* - * Requests a Rx timestamp for 'skb'. If the skb is accepted, - * the phy driver promises to deliver it using netif_rx() as - * soon as a timestamp becomes available. One of the - * PTP_CLASS_ values is passed in 'type'. The function must - * return true if the skb is accepted for delivery. - */ - bool (*rxtstamp)(struct phy_device *dev, struct sk_buff *skb, int type); - - /* - * Requests a Tx timestamp for 'skb'. The phy driver promises - * to deliver it using skb_complete_tx_timestamp() as soon as a - * timestamp becomes available. One of the PTP_CLASS_ values - * is passed in 'type'. - */ - void (*txtstamp)(struct phy_device *dev, struct sk_buff *skb, int type); - /* Some devices (e.g. qnap TS-119P II) require PHY register changes to * enable Wake on LAN, so set_wol is provided to be called in the * ethernet driver's set_wol function. 
*/ @@ -942,7 +921,7 @@ static inline bool phy_polling_mode(struct phy_device *phydev) */ static inline bool phy_has_hwtstamp(struct phy_device *phydev) { - return phydev && phydev->drv && phydev->drv->hwtstamp; + return phydev && phydev->mii_ts && phydev->mii_ts->hwtstamp; } /** @@ -951,7 +930,7 @@ static inline bool phy_has_hwtstamp(struct phy_device *phydev) */ static inline bool phy_has_rxtstamp(struct phy_device *phydev) { - return phydev && phydev->drv && phydev->drv->rxtstamp; + return phydev && phydev->mii_ts && phydev->mii_ts->rxtstamp; } /** @@ -961,7 +940,7 @@ static inline bool phy_has_rxtstamp(struct phy_device *phydev) */ static inline bool phy_has_tsinfo(struct phy_device *phydev) { - return phydev && phydev->drv && phydev->drv->ts_info; + return phydev && phydev->mii_ts && phydev->mii_ts->ts_info; } /** @@ -970,30 +949,30 @@ static inline bool phy_has_tsinfo(struct phy_device *phydev) */ static inline bool phy_has_txtstamp(struct phy_device *phydev) { - return phydev && phydev->drv && phydev->drv->txtstamp; + return phydev && phydev->mii_ts && phydev->mii_ts->txtstamp; } static inline int phy_hwtstamp(struct phy_device *phydev, struct ifreq *ifr) { - return phydev->drv->hwtstamp(phydev, ifr); + return phydev->mii_ts->hwtstamp(phydev->mii_ts, ifr); } static inline bool phy_rxtstamp(struct phy_device *phydev, struct sk_buff *skb, int type) { - return phydev->drv->rxtstamp(phydev, skb, type); + return phydev->mii_ts->rxtstamp(phydev->mii_ts, skb, type); } static inline int phy_ts_info(struct phy_device *phydev, struct ethtool_ts_info *tsinfo) { - return phydev->drv->ts_info(phydev, tsinfo); + return phydev->mii_ts->ts_info(phydev->mii_ts, tsinfo); } static inline void phy_txtstamp(struct phy_device *phydev, struct sk_buff *skb, int type) { - phydev->drv->txtstamp(phydev, skb, type); + phydev->mii_ts->txtstamp(phydev->mii_ts, skb, type); } /** diff --git a/net/core/timestamping.c b/net/core/timestamping.c index 7911235706a9..04840697fe79 100644 --- a/net/core/timestamping.c +++ b/net/core/timestamping.c @@ -13,7 +13,7 @@ static unsigned int classify(const struct sk_buff *skb) { if (likely(skb->dev && skb->dev->phydev && - skb->dev->phydev->drv)) + skb->dev->phydev->mii_ts)) return ptp_classify_raw(skb); else return PTP_CLASS_NONE; @@ -21,7 +21,7 @@ static unsigned int classify(const struct sk_buff *skb) void skb_clone_tx_timestamp(struct sk_buff *skb) { - struct phy_device *phydev; + struct mii_timestamper *mii_ts; struct sk_buff *clone; unsigned int type; @@ -32,22 +32,22 @@ void skb_clone_tx_timestamp(struct sk_buff *skb) if (type == PTP_CLASS_NONE) return; - phydev = skb->dev->phydev; - if (likely(phydev->drv->txtstamp)) { + mii_ts = skb->dev->phydev->mii_ts; + if (likely(mii_ts->txtstamp)) { clone = skb_clone_sk(skb); if (!clone) return; - phydev->drv->txtstamp(phydev, clone, type); + mii_ts->txtstamp(mii_ts, clone, type); } } EXPORT_SYMBOL_GPL(skb_clone_tx_timestamp); bool skb_defer_rx_timestamp(struct sk_buff *skb) { - struct phy_device *phydev; + struct mii_timestamper *mii_ts; unsigned int type; - if (!skb->dev || !skb->dev->phydev || !skb->dev->phydev->drv) + if (!skb->dev || !skb->dev->phydev || !skb->dev->phydev->mii_ts) return false; if (skb_headroom(skb) < ETH_HLEN) @@ -62,9 +62,9 @@ bool skb_defer_rx_timestamp(struct sk_buff *skb) if (type == PTP_CLASS_NONE) return false; - phydev = skb->dev->phydev; - if (likely(phydev->drv->rxtstamp)) - return phydev->drv->rxtstamp(phydev, skb, type); + mii_ts = skb->dev->phydev->mii_ts; + if (likely(mii_ts->rxtstamp)) + return 
mii_ts->rxtstamp(mii_ts, skb, type); return false; } -- cgit v1.2.3-71-gd317 From 767ff483731502a0fc34f34a3a0851aca175eb71 Mon Sep 17 00:00:00 2001 From: Richard Cochran Date: Wed, 25 Dec 2019 18:16:16 -0800 Subject: net: Add a layer for non-PHY MII time stamping drivers. While PHY time stamping drivers can simply attach their interface directly to the PHY instance, stand-alone drivers require support in order to manage their services. Non-PHY MII time stamping drivers have a control interface over another bus like I2C, SPI, UART, or via a memory-mapped peripheral. The controller device will be associated with one or more time stamping channels, each of which snoops on an MII bus. This patch provides a glue layer that will enable time stamping channels to find their controlling device. Signed-off-by: Richard Cochran Reviewed-by: Andrew Lunn Signed-off-by: David S. Miller --- drivers/net/phy/Makefile | 2 + drivers/net/phy/mii_timestamper.c | 125 ++++++++++++++++++++++++++++++++++++++ include/linux/mii_timestamper.h | 63 +++++++++++++++++++ net/Kconfig | 7 ++- 4 files changed, 194 insertions(+), 3 deletions(-) create mode 100644 drivers/net/phy/mii_timestamper.c (limited to 'include') diff --git a/drivers/net/phy/Makefile b/drivers/net/phy/Makefile index d846b4dc1c68..fe5badf13b65 100644 --- a/drivers/net/phy/Makefile +++ b/drivers/net/phy/Makefile @@ -43,6 +43,8 @@ obj-$(CONFIG_MDIO_SUN4I) += mdio-sun4i.o obj-$(CONFIG_MDIO_THUNDER) += mdio-thunder.o obj-$(CONFIG_MDIO_XGENE) += mdio-xgene.o +obj-$(CONFIG_NETWORK_PHY_TIMESTAMPING) += mii_timestamper.o + obj-$(CONFIG_SFP) += sfp.o sfp-obj-$(CONFIG_SFP) += sfp-bus.o obj-y += $(sfp-obj-y) $(sfp-obj-m) diff --git a/drivers/net/phy/mii_timestamper.c b/drivers/net/phy/mii_timestamper.c new file mode 100644 index 000000000000..2f12c5d901df --- /dev/null +++ b/drivers/net/phy/mii_timestamper.c @@ -0,0 +1,125 @@ +// SPDX-License-Identifier: GPL-2.0 +// +// Support for generic time stamping devices on MII buses. +// Copyright (C) 2018 Richard Cochran +// + +#include + +static LIST_HEAD(mii_timestamping_devices); +static DEFINE_MUTEX(tstamping_devices_lock); + +struct mii_timestamping_desc { + struct list_head list; + struct mii_timestamping_ctrl *ctrl; + struct device *device; +}; + +/** + * register_mii_tstamp_controller() - registers an MII time stamping device. + * + * @device: The device to be registered. + * @ctrl: Pointer to device's control interface. + * + * Returns zero on success or non-zero on failure. + */ +int register_mii_tstamp_controller(struct device *device, + struct mii_timestamping_ctrl *ctrl) +{ + struct mii_timestamping_desc *desc; + + desc = kzalloc(sizeof(*desc), GFP_KERNEL); + if (!desc) + return -ENOMEM; + + INIT_LIST_HEAD(&desc->list); + desc->ctrl = ctrl; + desc->device = device; + + mutex_lock(&tstamping_devices_lock); + list_add_tail(&mii_timestamping_devices, &desc->list); + mutex_unlock(&tstamping_devices_lock); + + return 0; +} +EXPORT_SYMBOL(register_mii_tstamp_controller); + +/** + * unregister_mii_tstamp_controller() - unregisters an MII time stamping device. + * + * @device: A device previously passed to register_mii_tstamp_controller().
+ */ +void unregister_mii_tstamp_controller(struct device *device) +{ + struct mii_timestamping_desc *desc; + struct list_head *this, *next; + + mutex_lock(&tstamping_devices_lock); + list_for_each_safe(this, next, &mii_timestamping_devices) { + desc = list_entry(this, struct mii_timestamping_desc, list); + if (desc->device == device) { + list_del_init(&desc->list); + kfree(desc); + break; + } + } + mutex_unlock(&tstamping_devices_lock); +} +EXPORT_SYMBOL(unregister_mii_tstamp_controller); + +/** + * register_mii_timestamper - Enables a given port of an MII time stamper. + * + * @node: The device tree node of the MII time stamp controller. + * @port: The index of the port to be enabled. + * + * Returns a valid interface on success or ERR_PTR otherwise. + */ +struct mii_timestamper *register_mii_timestamper(struct device_node *node, + unsigned int port) +{ + struct mii_timestamper *mii_ts = NULL; + struct mii_timestamping_desc *desc; + struct list_head *this; + + mutex_lock(&tstamping_devices_lock); + list_for_each(this, &mii_timestamping_devices) { + desc = list_entry(this, struct mii_timestamping_desc, list); + if (desc->device->of_node == node) { + mii_ts = desc->ctrl->probe_channel(desc->device, port); + if (!IS_ERR(mii_ts)) { + mii_ts->device = desc->device; + get_device(desc->device); + } + break; + } + } + mutex_unlock(&tstamping_devices_lock); + + return mii_ts ? mii_ts : ERR_PTR(-EPROBE_DEFER); +} +EXPORT_SYMBOL(register_mii_timestamper); + +/** + * unregister_mii_timestamper - Disables a given MII time stamper. + * + * @mii_ts: An interface obtained via register_mii_timestamper(). + * + */ +void unregister_mii_timestamper(struct mii_timestamper *mii_ts) +{ + struct mii_timestamping_desc *desc; + struct list_head *this; + + mutex_lock(&tstamping_devices_lock); + list_for_each(this, &mii_timestamping_devices) { + desc = list_entry(this, struct mii_timestamping_desc, list); + if (desc->device == mii_ts->device) { + desc->ctrl->release_channel(desc->device, mii_ts); + put_device(desc->device); + break; + } + } + mutex_unlock(&tstamping_devices_lock); +} +EXPORT_SYMBOL(unregister_mii_timestamper); diff --git a/include/linux/mii_timestamper.h b/include/linux/mii_timestamper.h index 36002386029c..fa940bbaf8ae 100644 --- a/include/linux/mii_timestamper.h +++ b/include/linux/mii_timestamper.h @@ -33,10 +33,15 @@ struct phy_device; * the phy_device mutex. * * @ts_info: Handles ethtool queries for hardware time stamping. + * @device: Remembers the device to which the instance belongs. * * Drivers for PHY time stamping devices should embed their * mii_timestamper within a private structure, obtaining a reference * to it using container_of(). + * + * Drivers for non-PHY time stamping devices should return a pointer + * to a mii_timestamper from the probe_channel() callback of their + * mii_timestamping_ctrl interface. */ struct mii_timestamper { bool (*rxtstamp)(struct mii_timestamper *mii_ts, @@ -53,6 +58,64 @@ struct mii_timestamper { int (*ts_info)(struct mii_timestamper *mii_ts, struct ethtool_ts_info *ts_info); + + struct device *device; +}; + +/** + * struct mii_timestamping_ctrl - MII time stamping controller interface. + * + * @probe_channel: Callback into the controller driver announcing the + * presence of the 'port' channel. The 'device' field + * had been passed to register_mii_tstamp_controller(). + * The driver must return either a pointer to a valid + * MII timestamper instance or PTR_ERR. + * + * @release_channel: Releases an instance obtained via .probe_channel. 
+ */ +struct mii_timestamping_ctrl { + struct mii_timestamper *(*probe_channel)(struct device *device, + unsigned int port); + void (*release_channel)(struct device *device, + struct mii_timestamper *mii_ts); }; +#ifdef CONFIG_NETWORK_PHY_TIMESTAMPING + +int register_mii_tstamp_controller(struct device *device, + struct mii_timestamping_ctrl *ctrl); + +void unregister_mii_tstamp_controller(struct device *device); + +struct mii_timestamper *register_mii_timestamper(struct device_node *node, + unsigned int port); + +void unregister_mii_timestamper(struct mii_timestamper *mii_ts); + +#else + +static inline +int register_mii_tstamp_controller(struct device *device, + struct mii_timestamping_ctrl *ctrl) +{ + return -EOPNOTSUPP; +} + +static inline void unregister_mii_tstamp_controller(struct device *device) +{ +} + +static inline +struct mii_timestamper *register_mii_timestamper(struct device_node *node, + unsigned int port) +{ + return NULL; +} + +static inline void unregister_mii_timestamper(struct mii_timestamper *mii_ts) +{ +} + +#endif + #endif diff --git a/net/Kconfig b/net/Kconfig index bd191f978a23..52af65e5d28c 100644 --- a/net/Kconfig +++ b/net/Kconfig @@ -108,9 +108,10 @@ config NETWORK_PHY_TIMESTAMPING bool "Timestamping in PHY devices" select NET_PTP_CLASSIFY help - This allows timestamping of network packets by PHYs with - hardware timestamping capabilities. This option adds some - overhead in the transmit and receive paths. + This allows timestamping of network packets by PHYs (or + other MII bus snooping devices) with hardware timestamping + capabilities. This option adds some overhead in the transmit + and receive paths. If you are unsure how to answer this question, answer N. -- cgit v1.2.3-71-gd317 From b6fd7b96366769651ab23988607ce9c5c9042cdb Mon Sep 17 00:00:00 2001 From: Richard Cochran Date: Wed, 25 Dec 2019 18:16:19 -0800 Subject: net: Introduce peer to peer one step PTP time stamping. The 1588 standard defines one step operation for both Sync and PDelay_Resp messages. Up until now, hardware with P2P one step has been rare, and kernel support was lacking. This patch adds support of the mode in anticipation of new hardware developments. Signed-off-by: Richard Cochran Signed-off-by: David S. 
Miller --- drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c | 1 + drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c | 1 + drivers/net/ethernet/microchip/lan743x_ptp.c | 3 +++ drivers/net/ethernet/qlogic/qede/qede_ptp.c | 1 + include/uapi/linux/net_tstamp.h | 8 ++++++++ net/core/dev_ioctl.c | 1 + 6 files changed, 15 insertions(+) (limited to 'include') diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c index cff64e43bdd8..741d865e4afc 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c @@ -15410,6 +15410,7 @@ int bnx2x_configure_ptp_filters(struct bnx2x *bp) REG_WR(bp, rule, BNX2X_PTP_TX_ON_RULE_MASK); break; case HWTSTAMP_TX_ONESTEP_SYNC: + case HWTSTAMP_TX_ONESTEP_P2P: BNX2X_ERR("One-step timestamping is not supported\n"); return -ERANGE; } diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c index ec2ff3d7f41c..4aaaa4937b1a 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c @@ -920,6 +920,7 @@ static int mlxsw_sp_ptp_get_message_types(const struct hwtstamp_config *config, egr_types = 0xff; break; case HWTSTAMP_TX_ONESTEP_SYNC: + case HWTSTAMP_TX_ONESTEP_P2P: return -ERANGE; } diff --git a/drivers/net/ethernet/microchip/lan743x_ptp.c b/drivers/net/ethernet/microchip/lan743x_ptp.c index afe52463dc57..9399f6a98748 100644 --- a/drivers/net/ethernet/microchip/lan743x_ptp.c +++ b/drivers/net/ethernet/microchip/lan743x_ptp.c @@ -1265,6 +1265,9 @@ int lan743x_ptp_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) lan743x_ptp_set_sync_ts_insert(adapter, true); break; + case HWTSTAMP_TX_ONESTEP_P2P: + ret = -ERANGE; + break; default: netif_warn(adapter, drv, adapter->netdev, " tx_type = %d, UNKNOWN\n", config.tx_type); diff --git a/drivers/net/ethernet/qlogic/qede/qede_ptp.c b/drivers/net/ethernet/qlogic/qede/qede_ptp.c index f815435cf106..4c7f7a7fc151 100644 --- a/drivers/net/ethernet/qlogic/qede/qede_ptp.c +++ b/drivers/net/ethernet/qlogic/qede/qede_ptp.c @@ -247,6 +247,7 @@ static int qede_ptp_cfg_filters(struct qede_dev *edev) break; case HWTSTAMP_TX_ONESTEP_SYNC: + case HWTSTAMP_TX_ONESTEP_P2P: DP_ERR(edev, "One-step timestamping is not supported\n"); return -ERANGE; } diff --git a/include/uapi/linux/net_tstamp.h b/include/uapi/linux/net_tstamp.h index e5b39721c6e4..f96e650d0af9 100644 --- a/include/uapi/linux/net_tstamp.h +++ b/include/uapi/linux/net_tstamp.h @@ -90,6 +90,14 @@ enum hwtstamp_tx_types { * queue. */ HWTSTAMP_TX_ONESTEP_SYNC, + + /* + * Same as HWTSTAMP_TX_ONESTEP_SYNC, but also enables time + * stamp insertion directly into PDelay_Resp packets. In this + * case, neither transmitted Sync nor PDelay_Resp packets will + * receive a time stamp via the socket error queue. 
+ */ + HWTSTAMP_TX_ONESTEP_P2P, }; /* possible values for hwtstamp_config->rx_filter */ diff --git a/net/core/dev_ioctl.c b/net/core/dev_ioctl.c index 5163d900bb4f..dbaebbe573f0 100644 --- a/net/core/dev_ioctl.c +++ b/net/core/dev_ioctl.c @@ -187,6 +187,7 @@ static int net_hwtstamp_validate(struct ifreq *ifr) case HWTSTAMP_TX_OFF: case HWTSTAMP_TX_ON: case HWTSTAMP_TX_ONESTEP_SYNC: + case HWTSTAMP_TX_ONESTEP_P2P: tx_type_valid = 1; break; } -- cgit v1.2.3-71-gd317 From c14ceb0ec727187f71a487a592ffa91767fed66e Mon Sep 17 00:00:00 2001 From: Florian Westphal Date: Wed, 18 Dec 2019 12:05:21 +0100 Subject: netfilter: nft_meta: add support for slave device ifindex matching Allow to match on vrf slave ifindex or name. In case there was no slave interface involved, store 0 in the destination register just like existing iif/oif matching. sdif(name) is restricted to the ipv4/ipv6 input and forward hooks, as it depends on ip(6) stack parsing/storing info in skb->cb[]. Cc: Martin Willi Cc: David Ahern Cc: Shrijeet Mukherjee Signed-off-by: Florian Westphal Signed-off-by: Pablo Neira Ayuso --- include/uapi/linux/netfilter/nf_tables.h | 4 ++ net/netfilter/nft_meta.c | 76 +++++++++++++++++++++++++++++--- 2 files changed, 73 insertions(+), 7 deletions(-) (limited to 'include') diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h index bb9b049310df..e237ecbdcd8a 100644 --- a/include/uapi/linux/netfilter/nf_tables.h +++ b/include/uapi/linux/netfilter/nf_tables.h @@ -805,6 +805,8 @@ enum nft_exthdr_attributes { * @NFT_META_TIME_NS: time since epoch (in nanoseconds) * @NFT_META_TIME_DAY: day of week (from 0 = Sunday to 6 = Saturday) * @NFT_META_TIME_HOUR: hour of day (in seconds) + * @NFT_META_SDIF: slave device interface index + * @NFT_META_SDIFNAME: slave device interface name */ enum nft_meta_keys { NFT_META_LEN, @@ -840,6 +842,8 @@ enum nft_meta_keys { NFT_META_TIME_NS, NFT_META_TIME_DAY, NFT_META_TIME_HOUR, + NFT_META_SDIF, + NFT_META_SDIFNAME, }; /** diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c index fb1a571db924..951b6e87ed5d 100644 --- a/net/netfilter/nft_meta.c +++ b/net/netfilter/nft_meta.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include /* for TCP_TIME_WAIT */ #include @@ -287,6 +288,28 @@ nft_meta_get_eval_rtclassid(const struct sk_buff *skb, u32 *dest) } #endif +static noinline u32 nft_meta_get_eval_sdif(const struct nft_pktinfo *pkt) +{ + switch (nft_pf(pkt)) { + case NFPROTO_IPV4: + return inet_sdif(pkt->skb); + case NFPROTO_IPV6: + return inet6_sdif(pkt->skb); + } + + return 0; +} + +static noinline void +nft_meta_get_eval_sdifname(u32 *dest, const struct nft_pktinfo *pkt) +{ + u32 sdif = nft_meta_get_eval_sdif(pkt); + const struct net_device *dev; + + dev = sdif ? 
From c14ceb0ec727187f71a487a592ffa91767fed66e Mon Sep 17 00:00:00 2001 From: Florian Westphal Date: Wed, 18 Dec 2019 12:05:21 +0100 Subject: netfilter: nft_meta: add support for slave device ifindex matching Allow matching on the VRF slave interface index or name. If no slave interface was involved, store 0 in the destination register, just like the existing iif/oif matching. sdif(name) is restricted to the ipv4/ipv6 input and forward hooks, as it depends on the ip(6) stack parsing/storing info in skb->cb[]. Cc: Martin Willi Cc: David Ahern Cc: Shrijeet Mukherjee Signed-off-by: Florian Westphal Signed-off-by: Pablo Neira Ayuso --- include/uapi/linux/netfilter/nf_tables.h | 4 ++ net/netfilter/nft_meta.c | 76 +++++++++++++++++++++++++++--- 2 files changed, 73 insertions(+), 7 deletions(-) (limited to 'include') diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h index bb9b049310df..e237ecbdcd8a 100644 --- a/include/uapi/linux/netfilter/nf_tables.h +++ b/include/uapi/linux/netfilter/nf_tables.h @@ -805,6 +805,8 @@ enum nft_exthdr_attributes { * @NFT_META_TIME_NS: time since epoch (in nanoseconds) * @NFT_META_TIME_DAY: day of week (from 0 = Sunday to 6 = Saturday) * @NFT_META_TIME_HOUR: hour of day (in seconds) + * @NFT_META_SDIF: slave device interface index + * @NFT_META_SDIFNAME: slave device interface name */ enum nft_meta_keys { NFT_META_LEN, @@ -840,6 +842,8 @@ enum nft_meta_keys { NFT_META_TIME_NS, NFT_META_TIME_DAY, NFT_META_TIME_HOUR, + NFT_META_SDIF, + NFT_META_SDIFNAME, }; /** diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c index fb1a571db924..951b6e87ed5d 100644 --- a/net/netfilter/nft_meta.c +++ b/net/netfilter/nft_meta.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include /* for TCP_TIME_WAIT */ #include @@ -287,6 +288,28 @@ nft_meta_get_eval_rtclassid(const struct sk_buff *skb, u32 *dest) } #endif +static noinline u32 nft_meta_get_eval_sdif(const struct nft_pktinfo *pkt) +{ + switch (nft_pf(pkt)) { + case NFPROTO_IPV4: + return inet_sdif(pkt->skb); + case NFPROTO_IPV6: + return inet6_sdif(pkt->skb); + } + + return 0; +} + +static noinline void +nft_meta_get_eval_sdifname(u32 *dest, const struct nft_pktinfo *pkt) +{ + u32 sdif = nft_meta_get_eval_sdif(pkt); + const struct net_device *dev; + + dev = sdif ? dev_get_by_index_rcu(nft_net(pkt), sdif) : NULL; + nft_meta_store_ifname(dest, dev); +} + void nft_meta_get_eval(const struct nft_expr *expr, struct nft_regs *regs, const struct nft_pktinfo *pkt) @@ -379,6 +402,12 @@ void nft_meta_get_eval(const struct nft_expr *expr, case NFT_META_TIME_HOUR: nft_meta_get_eval_time(priv->key, dest); break; + case NFT_META_SDIF: + *dest = nft_meta_get_eval_sdif(pkt); + break; + case NFT_META_SDIFNAME: + nft_meta_get_eval_sdifname(dest, pkt); + break; default: WARN_ON(1); goto err; @@ -459,6 +488,7 @@ int nft_meta_get_init(const struct nft_ctx *ctx, case NFT_META_MARK: case NFT_META_IIF: case NFT_META_OIF: + case NFT_META_SDIF: case NFT_META_SKUID: case NFT_META_SKGID: #ifdef CONFIG_IP_ROUTE_CLASSID @@ -480,6 +510,7 @@ int nft_meta_get_init(const struct nft_ctx *ctx, case NFT_META_OIFNAME: case NFT_META_IIFKIND: case NFT_META_OIFKIND: + case NFT_META_SDIFNAME: len = IFNAMSIZ; break; case NFT_META_PRANDOM: @@ -510,16 +541,28 @@ int nft_meta_get_init(const struct nft_ctx *ctx, } EXPORT_SYMBOL_GPL(nft_meta_get_init); -static int nft_meta_get_validate(const struct nft_ctx *ctx, - const struct nft_expr *expr, - const struct nft_data **data) +static int nft_meta_get_validate_sdif(const struct nft_ctx *ctx) { -#ifdef CONFIG_XFRM - const struct nft_meta *priv = nft_expr_priv(expr); unsigned int hooks; - if (priv->key != NFT_META_SECPATH) - return 0; + switch (ctx->family) { + case NFPROTO_IPV4: + case NFPROTO_IPV6: + case NFPROTO_INET: + hooks = (1 << NF_INET_LOCAL_IN) | + (1 << NF_INET_FORWARD); + break; + default: + return -EOPNOTSUPP; + } + + return nft_chain_validate_hooks(ctx->chain, hooks); +} + +static int nft_meta_get_validate_xfrm(const struct nft_ctx *ctx) +{ +#ifdef CONFIG_XFRM + unsigned int hooks; switch (ctx->family) { case NFPROTO_NETDEV: @@ -542,6 +585,25 @@ static int nft_meta_get_validate(const struct nft_ctx *ctx, #endif } +static int nft_meta_get_validate(const struct nft_ctx *ctx, + const struct nft_expr *expr, + const struct nft_data **data) +{ + const struct nft_meta *priv = nft_expr_priv(expr); + + switch (priv->key) { + case NFT_META_SECPATH: + return nft_meta_get_validate_xfrm(ctx); + case NFT_META_SDIF: + case NFT_META_SDIFNAME: + return nft_meta_get_validate_sdif(ctx); + default: + break; + } + + return 0; +} + int nft_meta_set_validate(const struct nft_ctx *ctx, const struct nft_expr *expr, const struct nft_data **data) -- cgit v1.2.3-71-gd317
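With the corresponding nftables userspace support (which shipped separately), the new keys can be used as in this sketch; the ruleset and interface name are illustrative, and, as noted above, sdif(name) is only valid in the input and forward hooks:

	# accept SSH arriving through the VRF slave "vrf0" (example name)
	nft add rule inet filter input meta sdifname "vrf0" tcp dport 22 accept
	# count packets that did not come through any slave interface (sdif == 0)
	nft add rule inet filter input meta sdif 0 counter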
From f643ee295c1c63bc117fb052d4da681354d6f732 Mon Sep 17 00:00:00 2001 From: Kevin Kou Date: Thu, 26 Dec 2019 12:29:17 +0000 Subject: sctp: move trace_sctp_probe_path into sctp_outq_sack The original patch that brought in the "SCTP ACK tracking trace event" feature was committed on Dec 20, 2017. It replaced the jprobe usage with trace events and introduced two trace events: TRACE_EVENT(sctp_probe) and TRACE_EVENT(sctp_probe_path). The original patch intended to trigger trace_sctp_probe_path from within TRACE_EVENT(sctp_probe), as in the code below, +TRACE_EVENT(sctp_probe, + + TP_PROTO(const struct sctp_endpoint *ep, + const struct sctp_association *asoc, + struct sctp_chunk *chunk), + + TP_ARGS(ep, asoc, chunk), + + TP_STRUCT__entry( + __field(__u64, asoc) + __field(__u32, mark) + __field(__u16, bind_port) + __field(__u16, peer_port) + __field(__u32, pathmtu) + __field(__u32, rwnd) + __field(__u16, unack_data) + ), + + TP_fast_assign( + struct sk_buff *skb = chunk->skb; + + __entry->asoc = (unsigned long)asoc; + __entry->mark = skb->mark; + __entry->bind_port = ep->base.bind_addr.port; + __entry->peer_port = asoc->peer.port; + __entry->pathmtu = asoc->pathmtu; + __entry->rwnd = asoc->peer.rwnd; + __entry->unack_data = asoc->unack_data; + + if (trace_sctp_probe_path_enabled()) { + struct sctp_transport *sp; + + list_for_each_entry(sp, &asoc->peer.transport_addr_list, + transports) { + trace_sctp_probe_path(sp, asoc); + } + } + ), During testing, however, trace_sctp_probe_path produced no output. The cause is the trace buffer lock operation (trace_event_buffer_reserve) in include/trace/trace_events.h: static notrace void \ trace_event_raw_event_##call(void *__data, proto) \ { \ struct trace_event_file *trace_file = __data; \ struct trace_event_data_offsets_##call __maybe_unused __data_offsets;\ struct trace_event_buffer fbuffer; \ struct trace_event_raw_##call *entry; \ int __data_size; \ \ if (trace_trigger_soft_disabled(trace_file)) \ return; \ \ __data_size = trace_event_get_offsets_##call(&__data_offsets, args); \ \ entry = trace_event_buffer_reserve(&fbuffer, trace_file, \ sizeof(*entry) + __data_size); \ \ if (!entry) \ return; \ \ tstruct \ \ { assign; } \ \ trace_event_buffer_commit(&fbuffer); \ } trace_sctp_probe_path produces no output because it is invoked from the TP_fast_assign part of TRACE_EVENT(sctp_probe), which the compiler places ( { assign; } ) after the trace_event_buffer_reserve() call when expanding the macro, entry = trace_event_buffer_reserve(&fbuffer, trace_file, \ sizeof(*entry) + __data_size); \ \ if (!entry) \ return; \ \ tstruct \ \ { assign; } \ so trace_sctp_probe_path can never acquire the trace event buffer and produces no output; in other words, nesting tracepoint entry functions is not allowed. The function call flow is: trace_sctp_probe() -> trace_event_raw_event_sctp_probe() -> lock buffer -> trace_sctp_probe_path() -> trace_event_raw_event_sctp_probe_path() --nested -> buffer is already locked, no output. This patch removes trace_sctp_probe_path from the TP_fast_assign part of TRACE_EVENT(sctp_probe) to avoid the nested entry functions, and instead triggers trace_sctp_probe_path in sctp_outq_sack. After this patch, both events can be enabled individually, # cd /sys/kernel/debug/tracing # echo 1 > events/sctp/sctp_probe/enable # echo 1 > events/sctp/sctp_probe_path/enable or all events under sctp can be enabled at once. # echo 1 > events/sctp/enable Signed-off-by: Kevin Kou Acked-by: Marcelo Ricardo Leitner Signed-off-by: David S.
Miller --- include/trace/events/sctp.h | 9 --------- net/sctp/outqueue.c | 6 ++++++ 2 files changed, 6 insertions(+), 9 deletions(-) (limited to 'include') diff --git a/include/trace/events/sctp.h b/include/trace/events/sctp.h index 7475c7be165a..d4aac3436595 100644 --- a/include/trace/events/sctp.h +++ b/include/trace/events/sctp.h @@ -75,15 +75,6 @@ TRACE_EVENT(sctp_probe, __entry->pathmtu = asoc->pathmtu; __entry->rwnd = asoc->peer.rwnd; __entry->unack_data = asoc->unack_data; - - if (trace_sctp_probe_path_enabled()) { - struct sctp_transport *sp; - - list_for_each_entry(sp, &asoc->peer.transport_addr_list, - transports) { - trace_sctp_probe_path(sp, asoc); - } - } ), TP_printk("asoc=%#llx mark=%#x bind_port=%d peer_port=%d pathmtu=%d " diff --git a/net/sctp/outqueue.c b/net/sctp/outqueue.c index a031d1134c60..6b0b3bad4daa 100644 --- a/net/sctp/outqueue.c +++ b/net/sctp/outqueue.c @@ -36,6 +36,7 @@ #include #include #include +#include /* Declare internal functions here. */ static int sctp_acked(struct sctp_sackhdr *sack, __u32 tsn); @@ -1238,6 +1239,11 @@ int sctp_outq_sack(struct sctp_outq *q, struct sctp_chunk *chunk) /* Grab the association's destination address list. */ transport_list = &asoc->peer.transport_addr_list; + /* SCTP path tracepoint for congestion control debugging. */ + list_for_each_entry(transport, transport_list, transports) { + trace_sctp_probe_path(transport, asoc); + } + sack_ctsn = ntohl(sack->cum_tsn_ack); gap_ack_blocks = ntohs(sack->num_gap_ack_blocks); asoc->stats.gapcnt += gap_ack_blocks; -- cgit v1.2.3-71-gd317 From c1e469902640ed0a82d0b8df5951199c17a49e3c Mon Sep 17 00:00:00 2001 From: Andy Roulin Date: Thu, 26 Dec 2019 05:41:57 -0800 Subject: bonding: rename AD_STATE_* to LACP_STATE_* As the LACP actor/partner state is now part of the uapi, rename the 3ad state defines with LACP prefix. The LACP prefix is preferred over BOND_3AD as the LACP standard moved to 802.1AX. Fixes: 826f66b30c2e3 ("bonding: move 802.3ad port state flags to uapi") Signed-off-by: Andy Roulin Signed-off-by: David S. Miller --- drivers/net/bonding/bond_3ad.c | 112 ++++++++++++++++++++-------------------- include/uapi/linux/if_bonding.h | 16 +++--- 2 files changed, 64 insertions(+), 64 deletions(-) (limited to 'include') diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c index 34bfe99641a3..31e43a2197a3 100644 --- a/drivers/net/bonding/bond_3ad.c +++ b/drivers/net/bonding/bond_3ad.c @@ -447,8 +447,8 @@ static void __choose_matched(struct lacpdu *lacpdu, struct port *port) MAC_ADDRESS_EQUAL(&(lacpdu->partner_system), &(port->actor_system)) && (ntohs(lacpdu->partner_system_priority) == port->actor_system_priority) && (ntohs(lacpdu->partner_key) == port->actor_oper_port_key) && - ((lacpdu->partner_state & AD_STATE_AGGREGATION) == (port->actor_oper_port_state & AD_STATE_AGGREGATION))) || - ((lacpdu->actor_state & AD_STATE_AGGREGATION) == 0) + ((lacpdu->partner_state & LACP_STATE_AGGREGATION) == (port->actor_oper_port_state & LACP_STATE_AGGREGATION))) || + ((lacpdu->actor_state & LACP_STATE_AGGREGATION) == 0) ) { port->sm_vars |= AD_PORT_MATCHED; } else { @@ -482,18 +482,18 @@ static void __record_pdu(struct lacpdu *lacpdu, struct port *port) partner->port_state = lacpdu->actor_state; /* set actor_oper_port_state.defaulted to FALSE */ - port->actor_oper_port_state &= ~AD_STATE_DEFAULTED; + port->actor_oper_port_state &= ~LACP_STATE_DEFAULTED; /* set the partner sync. 
to on if the partner is sync, * and the port is matched */ if ((port->sm_vars & AD_PORT_MATCHED) && - (lacpdu->actor_state & AD_STATE_SYNCHRONIZATION)) { - partner->port_state |= AD_STATE_SYNCHRONIZATION; + (lacpdu->actor_state & LACP_STATE_SYNCHRONIZATION)) { + partner->port_state |= LACP_STATE_SYNCHRONIZATION; slave_dbg(port->slave->bond->dev, port->slave->dev, "partner sync=1\n"); } else { - partner->port_state &= ~AD_STATE_SYNCHRONIZATION; + partner->port_state &= ~LACP_STATE_SYNCHRONIZATION; slave_dbg(port->slave->bond->dev, port->slave->dev, "partner sync=0\n"); } @@ -516,7 +516,7 @@ static void __record_default(struct port *port) sizeof(struct port_params)); /* set actor_oper_port_state.defaulted to true */ - port->actor_oper_port_state |= AD_STATE_DEFAULTED; + port->actor_oper_port_state |= LACP_STATE_DEFAULTED; } } @@ -546,7 +546,7 @@ static void __update_selected(struct lacpdu *lacpdu, struct port *port) !MAC_ADDRESS_EQUAL(&lacpdu->actor_system, &partner->system) || ntohs(lacpdu->actor_system_priority) != partner->system_priority || ntohs(lacpdu->actor_key) != partner->key || - (lacpdu->actor_state & AD_STATE_AGGREGATION) != (partner->port_state & AD_STATE_AGGREGATION)) { + (lacpdu->actor_state & LACP_STATE_AGGREGATION) != (partner->port_state & LACP_STATE_AGGREGATION)) { port->sm_vars &= ~AD_PORT_SELECTED; } } @@ -578,8 +578,8 @@ static void __update_default_selected(struct port *port) !MAC_ADDRESS_EQUAL(&admin->system, &oper->system) || admin->system_priority != oper->system_priority || admin->key != oper->key || - (admin->port_state & AD_STATE_AGGREGATION) - != (oper->port_state & AD_STATE_AGGREGATION)) { + (admin->port_state & LACP_STATE_AGGREGATION) + != (oper->port_state & LACP_STATE_AGGREGATION)) { port->sm_vars &= ~AD_PORT_SELECTED; } } @@ -609,10 +609,10 @@ static void __update_ntt(struct lacpdu *lacpdu, struct port *port) !MAC_ADDRESS_EQUAL(&(lacpdu->partner_system), &(port->actor_system)) || (ntohs(lacpdu->partner_system_priority) != port->actor_system_priority) || (ntohs(lacpdu->partner_key) != port->actor_oper_port_key) || - ((lacpdu->partner_state & AD_STATE_LACP_ACTIVITY) != (port->actor_oper_port_state & AD_STATE_LACP_ACTIVITY)) || - ((lacpdu->partner_state & AD_STATE_LACP_TIMEOUT) != (port->actor_oper_port_state & AD_STATE_LACP_TIMEOUT)) || - ((lacpdu->partner_state & AD_STATE_SYNCHRONIZATION) != (port->actor_oper_port_state & AD_STATE_SYNCHRONIZATION)) || - ((lacpdu->partner_state & AD_STATE_AGGREGATION) != (port->actor_oper_port_state & AD_STATE_AGGREGATION)) + ((lacpdu->partner_state & LACP_STATE_LACP_ACTIVITY) != (port->actor_oper_port_state & LACP_STATE_LACP_ACTIVITY)) || + ((lacpdu->partner_state & LACP_STATE_LACP_TIMEOUT) != (port->actor_oper_port_state & LACP_STATE_LACP_TIMEOUT)) || + ((lacpdu->partner_state & LACP_STATE_SYNCHRONIZATION) != (port->actor_oper_port_state & LACP_STATE_SYNCHRONIZATION)) || + ((lacpdu->partner_state & LACP_STATE_AGGREGATION) != (port->actor_oper_port_state & LACP_STATE_AGGREGATION)) ) { port->ntt = true; } @@ -968,7 +968,7 @@ static void ad_mux_machine(struct port *port, bool *update_slave_arr) * edable port will take place only after this timer) */ if ((port->sm_vars & AD_PORT_SELECTED) && - (port->partner_oper.port_state & AD_STATE_SYNCHRONIZATION) && + (port->partner_oper.port_state & LACP_STATE_SYNCHRONIZATION) && !__check_agg_selection_timer(port)) { if (port->aggregator->is_active) port->sm_mux_state = @@ -986,14 +986,14 @@ static void ad_mux_machine(struct port *port, bool *update_slave_arr) port->sm_mux_state = 
AD_MUX_DETACHED; } else if (port->aggregator->is_active) { port->actor_oper_port_state |= - AD_STATE_SYNCHRONIZATION; + LACP_STATE_SYNCHRONIZATION; } break; case AD_MUX_COLLECTING_DISTRIBUTING: if (!(port->sm_vars & AD_PORT_SELECTED) || (port->sm_vars & AD_PORT_STANDBY) || - !(port->partner_oper.port_state & AD_STATE_SYNCHRONIZATION) || - !(port->actor_oper_port_state & AD_STATE_SYNCHRONIZATION)) { + !(port->partner_oper.port_state & LACP_STATE_SYNCHRONIZATION) || + !(port->actor_oper_port_state & LACP_STATE_SYNCHRONIZATION)) { port->sm_mux_state = AD_MUX_ATTACHED; } else { /* if port state hasn't changed make @@ -1022,11 +1022,11 @@ static void ad_mux_machine(struct port *port, bool *update_slave_arr) port->sm_mux_state); switch (port->sm_mux_state) { case AD_MUX_DETACHED: - port->actor_oper_port_state &= ~AD_STATE_SYNCHRONIZATION; + port->actor_oper_port_state &= ~LACP_STATE_SYNCHRONIZATION; ad_disable_collecting_distributing(port, update_slave_arr); - port->actor_oper_port_state &= ~AD_STATE_COLLECTING; - port->actor_oper_port_state &= ~AD_STATE_DISTRIBUTING; + port->actor_oper_port_state &= ~LACP_STATE_COLLECTING; + port->actor_oper_port_state &= ~LACP_STATE_DISTRIBUTING; port->ntt = true; break; case AD_MUX_WAITING: @@ -1035,20 +1035,20 @@ static void ad_mux_machine(struct port *port, bool *update_slave_arr) case AD_MUX_ATTACHED: if (port->aggregator->is_active) port->actor_oper_port_state |= - AD_STATE_SYNCHRONIZATION; + LACP_STATE_SYNCHRONIZATION; else port->actor_oper_port_state &= - ~AD_STATE_SYNCHRONIZATION; - port->actor_oper_port_state &= ~AD_STATE_COLLECTING; - port->actor_oper_port_state &= ~AD_STATE_DISTRIBUTING; + ~LACP_STATE_SYNCHRONIZATION; + port->actor_oper_port_state &= ~LACP_STATE_COLLECTING; + port->actor_oper_port_state &= ~LACP_STATE_DISTRIBUTING; ad_disable_collecting_distributing(port, update_slave_arr); port->ntt = true; break; case AD_MUX_COLLECTING_DISTRIBUTING: - port->actor_oper_port_state |= AD_STATE_COLLECTING; - port->actor_oper_port_state |= AD_STATE_DISTRIBUTING; - port->actor_oper_port_state |= AD_STATE_SYNCHRONIZATION; + port->actor_oper_port_state |= LACP_STATE_COLLECTING; + port->actor_oper_port_state |= LACP_STATE_DISTRIBUTING; + port->actor_oper_port_state |= LACP_STATE_SYNCHRONIZATION; ad_enable_collecting_distributing(port, update_slave_arr); port->ntt = true; @@ -1146,7 +1146,7 @@ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port) port->sm_vars |= AD_PORT_LACP_ENABLED; port->sm_vars &= ~AD_PORT_SELECTED; __record_default(port); - port->actor_oper_port_state &= ~AD_STATE_EXPIRED; + port->actor_oper_port_state &= ~LACP_STATE_EXPIRED; port->sm_rx_state = AD_RX_PORT_DISABLED; /* Fall Through */ @@ -1156,9 +1156,9 @@ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port) case AD_RX_LACP_DISABLED: port->sm_vars &= ~AD_PORT_SELECTED; __record_default(port); - port->partner_oper.port_state &= ~AD_STATE_AGGREGATION; + port->partner_oper.port_state &= ~LACP_STATE_AGGREGATION; port->sm_vars |= AD_PORT_MATCHED; - port->actor_oper_port_state &= ~AD_STATE_EXPIRED; + port->actor_oper_port_state &= ~LACP_STATE_EXPIRED; break; case AD_RX_EXPIRED: /* Reset of the Synchronization flag (Standard 43.4.12) @@ -1167,19 +1167,19 @@ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port) * case of EXPIRED even if LINK_DOWN didn't arrive for * the port. 
*/ - port->partner_oper.port_state &= ~AD_STATE_SYNCHRONIZATION; + port->partner_oper.port_state &= ~LACP_STATE_SYNCHRONIZATION; port->sm_vars &= ~AD_PORT_MATCHED; - port->partner_oper.port_state |= AD_STATE_LACP_TIMEOUT; - port->partner_oper.port_state |= AD_STATE_LACP_ACTIVITY; + port->partner_oper.port_state |= LACP_STATE_LACP_TIMEOUT; + port->partner_oper.port_state |= LACP_STATE_LACP_ACTIVITY; port->sm_rx_timer_counter = __ad_timer_to_ticks(AD_CURRENT_WHILE_TIMER, (u16)(AD_SHORT_TIMEOUT)); - port->actor_oper_port_state |= AD_STATE_EXPIRED; + port->actor_oper_port_state |= LACP_STATE_EXPIRED; port->sm_vars |= AD_PORT_CHURNED; break; case AD_RX_DEFAULTED: __update_default_selected(port); __record_default(port); port->sm_vars |= AD_PORT_MATCHED; - port->actor_oper_port_state &= ~AD_STATE_EXPIRED; + port->actor_oper_port_state &= ~LACP_STATE_EXPIRED; break; case AD_RX_CURRENT: /* detect loopback situation */ @@ -1192,8 +1192,8 @@ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port) __update_selected(lacpdu, port); __update_ntt(lacpdu, port); __record_pdu(lacpdu, port); - port->sm_rx_timer_counter = __ad_timer_to_ticks(AD_CURRENT_WHILE_TIMER, (u16)(port->actor_oper_port_state & AD_STATE_LACP_TIMEOUT)); - port->actor_oper_port_state &= ~AD_STATE_EXPIRED; + port->sm_rx_timer_counter = __ad_timer_to_ticks(AD_CURRENT_WHILE_TIMER, (u16)(port->actor_oper_port_state & LACP_STATE_LACP_TIMEOUT)); + port->actor_oper_port_state &= ~LACP_STATE_EXPIRED; break; default: break; @@ -1221,7 +1221,7 @@ static void ad_churn_machine(struct port *port) if (port->sm_churn_actor_timer_counter && !(--port->sm_churn_actor_timer_counter) && port->sm_churn_actor_state == AD_CHURN_MONITOR) { - if (port->actor_oper_port_state & AD_STATE_SYNCHRONIZATION) { + if (port->actor_oper_port_state & LACP_STATE_SYNCHRONIZATION) { port->sm_churn_actor_state = AD_NO_CHURN; } else { port->churn_actor_count++; @@ -1231,7 +1231,7 @@ static void ad_churn_machine(struct port *port) if (port->sm_churn_partner_timer_counter && !(--port->sm_churn_partner_timer_counter) && port->sm_churn_partner_state == AD_CHURN_MONITOR) { - if (port->partner_oper.port_state & AD_STATE_SYNCHRONIZATION) { + if (port->partner_oper.port_state & LACP_STATE_SYNCHRONIZATION) { port->sm_churn_partner_state = AD_NO_CHURN; } else { port->churn_partner_count++; @@ -1288,7 +1288,7 @@ static void ad_periodic_machine(struct port *port) /* check if port was reinitialized */ if (((port->sm_vars & AD_PORT_BEGIN) || !(port->sm_vars & AD_PORT_LACP_ENABLED) || !port->is_enabled) || - (!(port->actor_oper_port_state & AD_STATE_LACP_ACTIVITY) && !(port->partner_oper.port_state & AD_STATE_LACP_ACTIVITY)) + (!(port->actor_oper_port_state & LACP_STATE_LACP_ACTIVITY) && !(port->partner_oper.port_state & LACP_STATE_LACP_ACTIVITY)) ) { port->sm_periodic_state = AD_NO_PERIODIC; } @@ -1305,11 +1305,11 @@ static void ad_periodic_machine(struct port *port) switch (port->sm_periodic_state) { case AD_FAST_PERIODIC: if (!(port->partner_oper.port_state - & AD_STATE_LACP_TIMEOUT)) + & LACP_STATE_LACP_TIMEOUT)) port->sm_periodic_state = AD_SLOW_PERIODIC; break; case AD_SLOW_PERIODIC: - if ((port->partner_oper.port_state & AD_STATE_LACP_TIMEOUT)) { + if ((port->partner_oper.port_state & LACP_STATE_LACP_TIMEOUT)) { port->sm_periodic_timer_counter = 0; port->sm_periodic_state = AD_PERIODIC_TX; } @@ -1325,7 +1325,7 @@ static void ad_periodic_machine(struct port *port) break; case AD_PERIODIC_TX: if (!(port->partner_oper.port_state & - AD_STATE_LACP_TIMEOUT)) + 
LACP_STATE_LACP_TIMEOUT)) port->sm_periodic_state = AD_SLOW_PERIODIC; else port->sm_periodic_state = AD_FAST_PERIODIC; @@ -1532,7 +1532,7 @@ static void ad_port_selection_logic(struct port *port, bool *update_slave_arr) ad_agg_selection_logic(aggregator, update_slave_arr); if (!port->aggregator->is_active) - port->actor_oper_port_state &= ~AD_STATE_SYNCHRONIZATION; + port->actor_oper_port_state &= ~LACP_STATE_SYNCHRONIZATION; } /* Decide if "agg" is a better choice for the new active aggregator that @@ -1838,13 +1838,13 @@ static void ad_initialize_port(struct port *port, int lacp_fast) port->actor_port_priority = 0xff; port->actor_port_aggregator_identifier = 0; port->ntt = false; - port->actor_admin_port_state = AD_STATE_AGGREGATION | - AD_STATE_LACP_ACTIVITY; - port->actor_oper_port_state = AD_STATE_AGGREGATION | - AD_STATE_LACP_ACTIVITY; + port->actor_admin_port_state = LACP_STATE_AGGREGATION | + LACP_STATE_LACP_ACTIVITY; + port->actor_oper_port_state = LACP_STATE_AGGREGATION | + LACP_STATE_LACP_ACTIVITY; if (lacp_fast) - port->actor_oper_port_state |= AD_STATE_LACP_TIMEOUT; + port->actor_oper_port_state |= LACP_STATE_LACP_TIMEOUT; memcpy(&port->partner_admin, &tmpl, sizeof(tmpl)); memcpy(&port->partner_oper, &tmpl, sizeof(tmpl)); @@ -2095,10 +2095,10 @@ void bond_3ad_unbind_slave(struct slave *slave) aggregator->aggregator_identifier); /* Tell the partner that this port is not suitable for aggregation */ - port->actor_oper_port_state &= ~AD_STATE_SYNCHRONIZATION; - port->actor_oper_port_state &= ~AD_STATE_COLLECTING; - port->actor_oper_port_state &= ~AD_STATE_DISTRIBUTING; - port->actor_oper_port_state &= ~AD_STATE_AGGREGATION; + port->actor_oper_port_state &= ~LACP_STATE_SYNCHRONIZATION; + port->actor_oper_port_state &= ~LACP_STATE_COLLECTING; + port->actor_oper_port_state &= ~LACP_STATE_DISTRIBUTING; + port->actor_oper_port_state &= ~LACP_STATE_AGGREGATION; __update_lacpdu_from_port(port); ad_lacpdu_send(port); @@ -2685,9 +2685,9 @@ void bond_3ad_update_lacp_rate(struct bonding *bond) bond_for_each_slave(bond, slave, iter) { port = &(SLAVE_AD_INFO(slave)->port); if (lacp_fast) - port->actor_oper_port_state |= AD_STATE_LACP_TIMEOUT; + port->actor_oper_port_state |= LACP_STATE_LACP_TIMEOUT; else - port->actor_oper_port_state &= ~AD_STATE_LACP_TIMEOUT; + port->actor_oper_port_state &= ~LACP_STATE_LACP_TIMEOUT; } spin_unlock_bh(&bond->mode_lock); } diff --git a/include/uapi/linux/if_bonding.h b/include/uapi/linux/if_bonding.h index 6829213a54c5..45f3750aa861 100644 --- a/include/uapi/linux/if_bonding.h +++ b/include/uapi/linux/if_bonding.h @@ -96,14 +96,14 @@ #define BOND_XMIT_POLICY_ENCAP34 4 /* encapsulated layer 3+4 */ /* 802.3ad port state definitions (43.4.2.2 in the 802.3ad standard) */ -#define AD_STATE_LACP_ACTIVITY 0x1 -#define AD_STATE_LACP_TIMEOUT 0x2 -#define AD_STATE_AGGREGATION 0x4 -#define AD_STATE_SYNCHRONIZATION 0x8 -#define AD_STATE_COLLECTING 0x10 -#define AD_STATE_DISTRIBUTING 0x20 -#define AD_STATE_DEFAULTED 0x40 -#define AD_STATE_EXPIRED 0x80 +#define LACP_STATE_LACP_ACTIVITY 0x1 +#define LACP_STATE_LACP_TIMEOUT 0x2 +#define LACP_STATE_AGGREGATION 0x4 +#define LACP_STATE_SYNCHRONIZATION 0x8 +#define LACP_STATE_COLLECTING 0x10 +#define LACP_STATE_DISTRIBUTING 0x20 +#define LACP_STATE_DEFAULTED 0x40 +#define LACP_STATE_EXPIRED 0x80 typedef struct ifbond { __s32 bond_mode; -- cgit v1.2.3-71-gd317 From 2b4a8990b7df55875745a80a609a1ceaaf51f322 Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Fri, 27 Dec 2019 15:55:18 +0100 Subject: ethtool: introduce ethtool netlink 
interface Basic genetlink and init infrastructure for the netlink interface; register generic netlink family "ethtool". Add the CONFIG_ETHTOOL_NETLINK Kconfig option to make the build optional. Add an initial overall interface description into Documentation/networking/ethtool-netlink.rst; further patches will add more detailed information. Signed-off-by: Michal Kubecek Reviewed-by: Florian Fainelli Reviewed-by: Andrew Lunn Signed-off-by: David S. Miller --- Documentation/networking/ethtool-netlink.rst | 216 +++++++++++++++++++++++++++ Documentation/networking/index.rst | 1 + include/linux/ethtool_netlink.h | 9 ++ include/uapi/linux/ethtool_netlink.h | 36 +++++ net/Kconfig | 8 + net/ethtool/Makefile | 6 +- net/ethtool/netlink.c | 33 ++++ net/ethtool/netlink.h | 10 ++ 8 files changed, 318 insertions(+), 1 deletion(-) create mode 100644 Documentation/networking/ethtool-netlink.rst create mode 100644 include/linux/ethtool_netlink.h create mode 100644 include/uapi/linux/ethtool_netlink.h create mode 100644 net/ethtool/netlink.c create mode 100644 net/ethtool/netlink.h (limited to 'include') diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst new file mode 100644 index 000000000000..9448442ad293 --- /dev/null +++ b/Documentation/networking/ethtool-netlink.rst @@ -0,0 +1,216 @@ +============================= +Netlink interface for ethtool +============================= + + +Basic information +================= + +Netlink interface for ethtool uses generic netlink family ``ethtool`` +(userspace applications should use the macros ``ETHTOOL_GENL_NAME`` and +``ETHTOOL_GENL_VERSION`` defined in the ``<linux/ethtool_netlink.h>`` uapi +header). This family does not use a specific header; all information in +requests and replies is passed using netlink attributes. + +The ethtool netlink interface uses extended ACK for error and warning +reporting; userspace application developers are encouraged to make these +messages available to the user in a suitable way. + +Requests can be divided into three categories: "get" (retrieving information), +"set" (setting parameters) and "action" (invoking an action). + +All "set" and "action" type requests require admin privileges +(``CAP_NET_ADMIN`` in the namespace). Most "get" type requests are allowed for +anyone but there are exceptions (where the response contains sensitive +information). In some cases, the request as such is allowed for anyone but +unprivileged users have attributes with sensitive information (e.g. the +wake-on-lan password) omitted. + + +Conventions +=========== + +Attributes which represent a boolean value usually use NLA_U8 type so that we +can distinguish three states: "on", "off" and "not present" (meaning the +information is not available in "get" requests or the value is not to be changed +in "set" requests). For these attributes, the "true" value should be passed as +number 1 but any non-zero value should be understood as "true" by the recipient. +In the tables below, "bool" denotes NLA_U8 attributes interpreted in this way. + +In the message structure descriptions below, if an attribute name is suffixed +with "+", the parent nest can contain multiple attributes of the same type. This +implements an array of entries. + + +Request header +============== + +Each request or reply message contains a nested attribute with a common header. +Structure of this header is + + ============================== ====== ============================= + ``ETHTOOL_A_HEADER_DEV_INDEX`` u32 device ifindex + ``ETHTOOL_A_HEADER_DEV_NAME`` string device name + ``ETHTOOL_A_HEADER_FLAGS`` u32 flags common for all requests + ============================== ====== ============================= + +``ETHTOOL_A_HEADER_DEV_INDEX`` and ``ETHTOOL_A_HEADER_DEV_NAME`` identify the +device the message relates to. One of them is sufficient in requests; if both are +used, they must identify the same device. Some requests, e.g. global string +sets, do not require device identification. Most ``GET`` requests also allow +dump requests without device identification to query the same information for +all devices providing it (each device in a separate message). + +``ETHTOOL_A_HEADER_FLAGS`` is a bitmap of request flags common for all request +types. The interpretation of these flags is the same for all request types but +not every flag applies to every request. Recognized flags are: + + ================================= =================================== + ``ETHTOOL_FLAG_COMPACT_BITSETS`` use compact format bitsets in reply + ``ETHTOOL_FLAG_OMIT_REPLY`` omit optional reply (_SET and _ACT) + ================================= =================================== + +New request flags should follow the general idea that if the flag is not set, +the behaviour is backward compatible, i.e. requests from old clients not aware +of the flag should be interpreted the way the client expects. A client must +not set flags it does not understand.
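As an illustration of the header layout, a sketch using libmnl; the outer nest type is request specific (a future per-command ``..._HEADER`` attribute, named hypothetically here), and the ``ETHTOOL_A_HEADER_*`` and ``ETHTOOL_FLAG_*`` constants come from the uapi header added later in this series:

	#include <stdint.h>
	#include <libmnl/libmnl.h>
	#include <linux/ethtool_netlink.h>

	/* hdr_type: the request-specific header nest attribute type */
	static void put_request_header(struct nlmsghdr *nlh, uint16_t hdr_type,
				       const char *ifname)
	{
		struct nlattr *nest = mnl_attr_nest_start(nlh, hdr_type);

		mnl_attr_put_strz(nlh, ETHTOOL_A_HEADER_DEV_NAME, ifname);
		mnl_attr_put_u32(nlh, ETHTOOL_A_HEADER_FLAGS,
				 ETHTOOL_FLAG_COMPACT_BITSETS);
		mnl_attr_nest_end(nlh, nest);
	}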
+ + +List of message types +===================== + +All constants identifying message types use the ``ETHTOOL_MSG_`` prefix and a suffix +according to message purpose: + + ============== ====================================== + ``_GET`` userspace request to retrieve data + ``_SET`` userspace request to set data + ``_ACT`` userspace request to perform an action + ``_GET_REPLY`` kernel reply to a ``GET`` request + ``_SET_REPLY`` kernel reply to a ``SET`` request + ``_ACT_REPLY`` kernel reply to an ``ACT`` request + ``_NTF`` kernel notification + ============== ====================================== + +``GET`` requests are sent by userspace applications to retrieve device +information. They usually do not contain any message specific attributes. +Kernel replies with the corresponding ``GET_REPLY`` message. For most types, a ``GET`` +request with ``NLM_F_DUMP`` and no device identification can be used to query +the information for all devices supporting the request. + +If the data can also be modified, a corresponding ``SET`` message with the same +layout as the corresponding ``GET_REPLY`` is used to request changes. Only +attributes where a change is requested are included in such a request (also, not +all attributes may be changed). Replies to most ``SET`` requests consist only +of error code and extack; if the kernel provides additional data, it is sent in +the form of a corresponding ``SET_REPLY`` message which can be suppressed by +setting the ``ETHTOOL_FLAG_OMIT_REPLY`` flag in the request header. + +Data modification also triggers sending a ``NTF`` message with a notification. +These usually bear only the subset of attributes which was affected by the +change. The same notification is issued if the data is modified using other +means (mostly the ioctl ethtool interface). Unlike notifications from ethtool +netlink code, which are only sent if something actually changed, notifications +triggered by the ioctl interface may be sent even if the request did not actually +change any data.
+ +``ACT`` messages request kernel (driver) to perform a specific action. If some +information is reported by kernel (which can be suppressed by setting +``ETHTOOL_FLAG_OMIT_REPLY`` flag in request header), the reply takes form of +an ``ACT_REPLY`` message. Performing an action also triggers a notification +(``NTF`` message). + +Later sections describe the format and semantics of these messages. + + +Request translation +=================== + +The following table maps ioctl commands to netlink commands providing their +functionality. Entries with "n/a" in right column are commands which do not +have their netlink replacement yet. + + =================================== ===================================== + ioctl command netlink command + =================================== ===================================== + ``ETHTOOL_GSET`` n/a + ``ETHTOOL_SSET`` n/a + ``ETHTOOL_GDRVINFO`` n/a + ``ETHTOOL_GREGS`` n/a + ``ETHTOOL_GWOL`` n/a + ``ETHTOOL_SWOL`` n/a + ``ETHTOOL_GMSGLVL`` n/a + ``ETHTOOL_SMSGLVL`` n/a + ``ETHTOOL_NWAY_RST`` n/a + ``ETHTOOL_GLINK`` n/a + ``ETHTOOL_GEEPROM`` n/a + ``ETHTOOL_SEEPROM`` n/a + ``ETHTOOL_GCOALESCE`` n/a + ``ETHTOOL_SCOALESCE`` n/a + ``ETHTOOL_GRINGPARAM`` n/a + ``ETHTOOL_SRINGPARAM`` n/a + ``ETHTOOL_GPAUSEPARAM`` n/a + ``ETHTOOL_SPAUSEPARAM`` n/a + ``ETHTOOL_GRXCSUM`` n/a + ``ETHTOOL_SRXCSUM`` n/a + ``ETHTOOL_GTXCSUM`` n/a + ``ETHTOOL_STXCSUM`` n/a + ``ETHTOOL_GSG`` n/a + ``ETHTOOL_SSG`` n/a + ``ETHTOOL_TEST`` n/a + ``ETHTOOL_GSTRINGS`` n/a + ``ETHTOOL_PHYS_ID`` n/a + ``ETHTOOL_GSTATS`` n/a + ``ETHTOOL_GTSO`` n/a + ``ETHTOOL_STSO`` n/a + ``ETHTOOL_GPERMADDR`` rtnetlink ``RTM_GETLINK`` + ``ETHTOOL_GUFO`` n/a + ``ETHTOOL_SUFO`` n/a + ``ETHTOOL_GGSO`` n/a + ``ETHTOOL_SGSO`` n/a + ``ETHTOOL_GFLAGS`` n/a + ``ETHTOOL_SFLAGS`` n/a + ``ETHTOOL_GPFLAGS`` n/a + ``ETHTOOL_SPFLAGS`` n/a + ``ETHTOOL_GRXFH`` n/a + ``ETHTOOL_SRXFH`` n/a + ``ETHTOOL_GGRO`` n/a + ``ETHTOOL_SGRO`` n/a + ``ETHTOOL_GRXRINGS`` n/a + ``ETHTOOL_GRXCLSRLCNT`` n/a + ``ETHTOOL_GRXCLSRULE`` n/a + ``ETHTOOL_GRXCLSRLALL`` n/a + ``ETHTOOL_SRXCLSRLDEL`` n/a + ``ETHTOOL_SRXCLSRLINS`` n/a + ``ETHTOOL_FLASHDEV`` n/a + ``ETHTOOL_RESET`` n/a + ``ETHTOOL_SRXNTUPLE`` n/a + ``ETHTOOL_GRXNTUPLE`` n/a + ``ETHTOOL_GSSET_INFO`` n/a + ``ETHTOOL_GRXFHINDIR`` n/a + ``ETHTOOL_SRXFHINDIR`` n/a + ``ETHTOOL_GFEATURES`` n/a + ``ETHTOOL_SFEATURES`` n/a + ``ETHTOOL_GCHANNELS`` n/a + ``ETHTOOL_SCHANNELS`` n/a + ``ETHTOOL_SET_DUMP`` n/a + ``ETHTOOL_GET_DUMP_FLAG`` n/a + ``ETHTOOL_GET_DUMP_DATA`` n/a + ``ETHTOOL_GET_TS_INFO`` n/a + ``ETHTOOL_GMODULEINFO`` n/a + ``ETHTOOL_GMODULEEEPROM`` n/a + ``ETHTOOL_GEEE`` n/a + ``ETHTOOL_SEEE`` n/a + ``ETHTOOL_GRSSH`` n/a + ``ETHTOOL_SRSSH`` n/a + ``ETHTOOL_GTUNABLE`` n/a + ``ETHTOOL_STUNABLE`` n/a + ``ETHTOOL_GPHYSTATS`` n/a + ``ETHTOOL_PERQUEUE`` n/a + ``ETHTOOL_GLINKSETTINGS`` n/a + ``ETHTOOL_SLINKSETTINGS`` n/a + ``ETHTOOL_PHY_GTUNABLE`` n/a + ``ETHTOOL_PHY_STUNABLE`` n/a + ``ETHTOOL_GFECPARAM`` n/a + ``ETHTOOL_SFECPARAM`` n/a + =================================== ===================================== diff --git a/Documentation/networking/index.rst b/Documentation/networking/index.rst index 5acab1290e03..bee73be7af93 100644 --- a/Documentation/networking/index.rst +++ b/Documentation/networking/index.rst @@ -16,6 +16,7 @@ Contents: devlink-info-versions devlink-trap devlink-trap-netdevsim + ethtool-netlink ieee802154 j1939 kapi diff --git a/include/linux/ethtool_netlink.h b/include/linux/ethtool_netlink.h new file mode 100644 index 000000000000..f27e92b5f344 --- /dev/null +++ 
b/include/linux/ethtool_netlink.h @@ -0,0 +1,9 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +#ifndef _LINUX_ETHTOOL_NETLINK_H_ +#define _LINUX_ETHTOOL_NETLINK_H_ + +#include +#include + +#endif /* _LINUX_ETHTOOL_NETLINK_H_ */ diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h new file mode 100644 index 000000000000..3c93276ba066 --- /dev/null +++ b/include/uapi/linux/ethtool_netlink.h @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */ +/* + * include/uapi/linux/ethtool_netlink.h - netlink interface for ethtool + * + * See Documentation/networking/ethtool-netlink.rst in kernel source tree for + * documentation of the interface. + */ + +#ifndef _UAPI_LINUX_ETHTOOL_NETLINK_H_ +#define _UAPI_LINUX_ETHTOOL_NETLINK_H_ + +#include + +/* message types - userspace to kernel */ enum { ETHTOOL_MSG_USER_NONE, + + /* add new constants above here */ + __ETHTOOL_MSG_USER_CNT, + ETHTOOL_MSG_USER_MAX = __ETHTOOL_MSG_USER_CNT - 1 +}; + +/* message types - kernel to userspace */ +enum { + ETHTOOL_MSG_KERNEL_NONE, + + /* add new constants above here */ + __ETHTOOL_MSG_KERNEL_CNT, + ETHTOOL_MSG_KERNEL_MAX = __ETHTOOL_MSG_KERNEL_CNT - 1 +}; + +/* generic netlink info */ +#define ETHTOOL_GENL_NAME "ethtool" +#define ETHTOOL_GENL_VERSION 1 + +#endif /* _UAPI_LINUX_ETHTOOL_NETLINK_H_ */ diff --git a/net/Kconfig b/net/Kconfig index 52af65e5d28c..54916b7adb9b 100644 --- a/net/Kconfig +++ b/net/Kconfig @@ -449,6 +449,14 @@ config FAILOVER migration of VMs with direct attached VFs by failing over to the paravirtual datapath when the VF is unplugged. +config ETHTOOL_NETLINK + bool "Netlink interface for ethtool" + default y + help + An alternative userspace interface for ethtool based on generic + netlink. It provides better extensibility and some new features, + e.g. notification messages. + endif # if NET # Used by archs to tell that they support BPF JIT compiler plus which flavour.
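For reference, selecting the new interface in a kernel configuration is a single line (it defaults to y, so a sketch of an explicit config fragment):

	CONFIG_ETHTOOL_NETLINK=y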
diff --git a/net/ethtool/Makefile b/net/ethtool/Makefile index f68387618973..59d5ee230c29 100644 --- a/net/ethtool/Makefile +++ b/net/ethtool/Makefile @@ -1,3 +1,7 @@ # SPDX-License-Identifier: GPL-2.0-only -obj-y += ioctl.o common.o +obj-y += ioctl.o common.o + +obj-$(CONFIG_ETHTOOL_NETLINK) += ethtool_nl.o + +ethtool_nl-y := netlink.o diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c new file mode 100644 index 000000000000..59e1ebde2f15 --- /dev/null +++ b/net/ethtool/netlink.c @@ -0,0 +1,33 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include +#include "netlink.h" + +/* genetlink setup */ + +static const struct genl_ops ethtool_genl_ops[] = { +}; + +static struct genl_family ethtool_genl_family = { + .name = ETHTOOL_GENL_NAME, + .version = ETHTOOL_GENL_VERSION, + .netnsok = true, + .parallel_ops = true, + .ops = ethtool_genl_ops, + .n_ops = ARRAY_SIZE(ethtool_genl_ops), +}; + +/* module setup */ + +static int __init ethnl_init(void) +{ + int ret; + + ret = genl_register_family(&ethtool_genl_family); + if (WARN(ret < 0, "ethtool: genetlink family registration failed")) + return ret; + + return 0; +} + +subsys_initcall(ethnl_init); diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h new file mode 100644 index 000000000000..e4220780d368 --- /dev/null +++ b/net/ethtool/netlink.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +#ifndef _NET_ETHTOOL_NETLINK_H +#define _NET_ETHTOOL_NETLINK_H + +#include +#include +#include + +#endif /* _NET_ETHTOOL_NETLINK_H */ -- cgit v1.2.3-71-gd317
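Once this registration runs, the family can be resolved from userspace through the genetlink controller; a sketch using libnl-genl (assumed available; the helper name is illustrative):

	#include <netlink/netlink.h>
	#include <netlink/genl/genl.h>
	#include <netlink/genl/ctrl.h>
	#include <linux/ethtool_netlink.h>

	static int resolve_ethtool_family(void)
	{
		struct nl_sock *sk = nl_socket_alloc();
		int family = -1;

		if (sk && genl_connect(sk) == 0)
			family = genl_ctrl_resolve(sk, ETHTOOL_GENL_NAME);
		nl_socket_free(sk);	/* NULL-safe */
		return family;		/* numeric family id, or negative error */
	}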
From 041b1c5d4a53e97fc9e029ae32469552ca12cb9b Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Fri, 27 Dec 2019 15:55:23 +0100 Subject: ethtool: helper functions for netlink interface Add common request/reply header definition and helpers to parse a request header and fill a reply header. Provide ethnl_update_* helpers to update structure members from request attributes (to be used for *_SET requests). Signed-off-by: Michal Kubecek Reviewed-by: Florian Fainelli Signed-off-by: David S. Miller --- include/uapi/linux/ethtool_netlink.h | 21 ++++ net/ethtool/netlink.c | 166 ++++++++++++++++++++++++++++ net/ethtool/netlink.h | 205 +++++++++++++++++++++++++++++++++++ 3 files changed, 392 insertions(+) (limited to 'include') diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index 3c93276ba066..82fc3b5f41c9 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -29,6 +29,27 @@ enum { ETHTOOL_MSG_KERNEL_MAX = __ETHTOOL_MSG_KERNEL_CNT - 1 }; +/* request header */ + +/* use compact bitsets in reply */ +#define ETHTOOL_FLAG_COMPACT_BITSETS (1 << 0) +/* provide optional reply for SET or ACT requests */ +#define ETHTOOL_FLAG_OMIT_REPLY (1 << 1) + +#define ETHTOOL_FLAG_ALL (ETHTOOL_FLAG_COMPACT_BITSETS | \ + ETHTOOL_FLAG_OMIT_REPLY) + +enum { + ETHTOOL_A_HEADER_UNSPEC, + ETHTOOL_A_HEADER_DEV_INDEX, /* u32 */ + ETHTOOL_A_HEADER_DEV_NAME, /* string */ + ETHTOOL_A_HEADER_FLAGS, /* u32 - ETHTOOL_FLAG_* */ + + /* add new constants above here */ + __ETHTOOL_A_HEADER_CNT, + ETHTOOL_A_HEADER_MAX = __ETHTOOL_A_HEADER_CNT - 1 +}; + /* generic netlink info */ #define ETHTOOL_GENL_NAME "ethtool" #define ETHTOOL_GENL_VERSION 1 diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index 59e1ebde2f15..aef882e0c3f5 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -1,8 +1,174 @@ // SPDX-License-Identifier: GPL-2.0-only +#include #include #include "netlink.h" +static struct genl_family ethtool_genl_family; + +static const struct nla_policy ethnl_header_policy[ETHTOOL_A_HEADER_MAX + 1] = { + [ETHTOOL_A_HEADER_UNSPEC] = { .type = NLA_REJECT }, + [ETHTOOL_A_HEADER_DEV_INDEX] = { .type = NLA_U32 }, + [ETHTOOL_A_HEADER_DEV_NAME] = { .type = NLA_NUL_STRING, + .len = ALTIFNAMSIZ - 1 }, + [ETHTOOL_A_HEADER_FLAGS] = { .type = NLA_U32 }, +}; + +/** + * ethnl_parse_header() - parse request header + * @req_info: structure to put results into + * @header: nest attribute with request header + * @net: request netns + * @extack: netlink extack for error reporting + * @require_dev: fail if no device identified in header + * + * Parse request header in nested attribute @header and put results into + * the structure pointed to by @req_info. Extack from @extack is used for error + * reporting. If req_info->dev is not null on return, a reference to it has + * been taken. If an error is returned, *req_info is null initialized and no + * reference is held.
+ * + * Return: 0 on success or negative error code + */ +int ethnl_parse_header(struct ethnl_req_info *req_info, + const struct nlattr *header, struct net *net, + struct netlink_ext_ack *extack, bool require_dev) +{ + struct nlattr *tb[ETHTOOL_A_HEADER_MAX + 1]; + const struct nlattr *devname_attr; + struct net_device *dev = NULL; + int ret; + + if (!header) { + NL_SET_ERR_MSG(extack, "request header missing"); + return -EINVAL; + } + ret = nla_parse_nested(tb, ETHTOOL_A_HEADER_MAX, header, + ethnl_header_policy, extack); + if (ret < 0) + return ret; + devname_attr = tb[ETHTOOL_A_HEADER_DEV_NAME]; + + if (tb[ETHTOOL_A_HEADER_DEV_INDEX]) { + u32 ifindex = nla_get_u32(tb[ETHTOOL_A_HEADER_DEV_INDEX]); + + dev = dev_get_by_index(net, ifindex); + if (!dev) { + NL_SET_ERR_MSG_ATTR(extack, + tb[ETHTOOL_A_HEADER_DEV_INDEX], + "no device matches ifindex"); + return -ENODEV; + } + /* if both ifindex and ifname are passed, they must match */ + if (devname_attr && + strncmp(dev->name, nla_data(devname_attr), IFNAMSIZ)) { + dev_put(dev); + NL_SET_ERR_MSG_ATTR(extack, header, + "ifindex and name do not match"); + return -ENODEV; + } + } else if (devname_attr) { + dev = dev_get_by_name(net, nla_data(devname_attr)); + if (!dev) { + NL_SET_ERR_MSG_ATTR(extack, devname_attr, + "no device matches name"); + return -ENODEV; + } + } else if (require_dev) { + NL_SET_ERR_MSG_ATTR(extack, header, + "neither ifindex nor name specified"); + return -EINVAL; + } + + if (dev && !netif_device_present(dev)) { + dev_put(dev); + NL_SET_ERR_MSG(extack, "device not present"); + return -ENODEV; + } + + req_info->dev = dev; + if (tb[ETHTOOL_A_HEADER_FLAGS]) + req_info->flags = nla_get_u32(tb[ETHTOOL_A_HEADER_FLAGS]); + + return 0; +} + +/** + * ethnl_fill_reply_header() - Put common header into a reply message + * @skb: skb with the message + * @dev: network device to describe in header + * @attrtype: attribute type to use for the nest + * + * Create a nested attribute with attributes describing given network device. + * + * Return: 0 on success, error value (-EMSGSIZE only) on error + */ +int ethnl_fill_reply_header(struct sk_buff *skb, struct net_device *dev, + u16 attrtype) +{ + struct nlattr *nest; + + if (!dev) + return 0; + nest = nla_nest_start(skb, attrtype); + if (!nest) + return -EMSGSIZE; + + if (nla_put_u32(skb, ETHTOOL_A_HEADER_DEV_INDEX, (u32)dev->ifindex) || + nla_put_string(skb, ETHTOOL_A_HEADER_DEV_NAME, dev->name)) + goto nla_put_failure; + /* If more attributes are put into reply header, ethnl_header_size() + * must be updated to account for them. 
+ */ + + nla_nest_end(skb, nest); + return 0; + +nla_put_failure: + nla_nest_cancel(skb, nest); + return -EMSGSIZE; +} + +/** + * ethnl_reply_init() - Create skb for a reply and fill device identification + * @payload: payload length (without netlink and genetlink header) + * @dev: device the reply is about (may be null) + * @cmd: ETHTOOL_MSG_* message type for reply + * @hdr_attrtype: attribute type to use for the reply header nest + * @info: genetlink info of the received packet we respond to + * @ehdrp: place to store payload pointer returned by genlmsg_new() + * + * Return: pointer to allocated skb on success, NULL on error + */ +struct sk_buff *ethnl_reply_init(size_t payload, struct net_device *dev, u8 cmd, + u16 hdr_attrtype, struct genl_info *info, + void **ehdrp) +{ + struct sk_buff *skb; + + skb = genlmsg_new(payload, GFP_KERNEL); + if (!skb) + goto err; + *ehdrp = genlmsg_put_reply(skb, info, &ethtool_genl_family, 0, cmd); + if (!*ehdrp) + goto err_free; + + if (dev) { + int ret; + + ret = ethnl_fill_reply_header(skb, dev, hdr_attrtype); + if (ret < 0) + goto err_free; + } + return skb; + +err_free: + nlmsg_free(skb); +err: + if (info) + GENL_SET_ERR_MSG(info, "failed to setup reply message"); + return NULL; +} + /* genetlink setup */ static const struct genl_ops ethtool_genl_ops[] = { diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h index e4220780d368..05d7183da894 100644 --- a/net/ethtool/netlink.h +++ b/net/ethtool/netlink.h @@ -6,5 +6,210 @@ #include #include #include +#include + +struct ethnl_req_info; + +int ethnl_parse_header(struct ethnl_req_info *req_info, + const struct nlattr *nest, struct net *net, + struct netlink_ext_ack *extack, bool require_dev); +int ethnl_fill_reply_header(struct sk_buff *skb, struct net_device *dev, + u16 attrtype); +struct sk_buff *ethnl_reply_init(size_t payload, struct net_device *dev, u8 cmd, + u16 hdr_attrtype, struct genl_info *info, + void **ehdrp); + +/** + * ethnl_strz_size() - calculate attribute length for fixed size string + * @s: ETH_GSTRING_LEN sized string (may not be null terminated) + * + * Return: total length of an attribute with null terminated string from @s + */ +static inline int ethnl_strz_size(const char *s) +{ + return nla_total_size(strnlen(s, ETH_GSTRING_LEN) + 1); +} + +/** + * ethnl_put_strz() - put string attribute with fixed size string + * @skb: skb with the message + * @attrtype: attribute type + * @s: ETH_GSTRING_LEN sized string (may not be null terminated) + * + * Puts an attribute with null terminated string from @s into the message. + * + * Return: 0 on success, negative error code on failure + */ +static inline int ethnl_put_strz(struct sk_buff *skb, u16 attrtype, + const char *s) +{ + unsigned int len = strnlen(s, ETH_GSTRING_LEN); + struct nlattr *attr; + + attr = nla_reserve(skb, attrtype, len + 1); + if (!attr) + return -EMSGSIZE; + + memcpy(nla_data(attr), s, len); + ((char *)nla_data(attr))[len] = '\0'; + return 0; +} + +/** + * ethnl_update_u32() - update u32 value from NLA_U32 attribute + * @dst: value to update + * @attr: netlink attribute with new value or null + * @mod: pointer to bool for modification tracking + * + * Copy the u32 value from NLA_U32 netlink attribute @attr into variable + * pointed to by @dst; do nothing if @attr is null. Bool pointed to by @mod + * is set to true if this function changed the value of *dst, otherwise it + * is left as is.
+ */ +static inline void ethnl_update_u32(u32 *dst, const struct nlattr *attr, + bool *mod) +{ + u32 val; + + if (!attr) + return; + val = nla_get_u32(attr); + if (*dst == val) + return; + + *dst = val; + *mod = true; +} + +/** + * ethnl_update_u8() - update u8 value from NLA_U8 attribute + * @dst: value to update + * @attr: netlink attribute with new value or null + * @mod: pointer to bool for modification tracking + * + * Copy the u8 value from NLA_U8 netlink attribute @attr into variable + * pointed to by @dst; do nothing if @attr is null. Bool pointed to by @mod + * is set to true if this function changed the value of *dst, otherwise it + * is left as is. + */ +static inline void ethnl_update_u8(u8 *dst, const struct nlattr *attr, + bool *mod) +{ + u8 val; + + if (!attr) + return; + val = nla_get_u8(attr); + if (*dst == val) + return; + + *dst = val; + *mod = true; +} + +/** + * ethnl_update_bool32() - update u32 used as bool from NLA_U8 attribute + * @dst: value to update + * @attr: netlink attribute with new value or null + * @mod: pointer to bool for modification tracking + * + * Use the u8 value from NLA_U8 netlink attribute @attr to set u32 variable + * pointed to by @dst to 0 (if zero) or 1 (if not); do nothing if @attr is + * null. Bool pointed to by @mod is set to true if this function changed the + * logical value of *dst, otherwise it is left as is. + */ +static inline void ethnl_update_bool32(u32 *dst, const struct nlattr *attr, + bool *mod) +{ + u8 val; + + if (!attr) + return; + val = !!nla_get_u8(attr); + if (!!*dst == val) + return; + + *dst = val; + *mod = true; +} + +/** + * ethnl_update_binary() - update binary data from NLA_BINARY attribute + * @dst: value to update + * @len: destination buffer length + * @attr: netlink attribute with new value or null + * @mod: pointer to bool for modification tracking + * + * Use the payload of NLA_BINARY netlink attribute @attr to rewrite the data + * block of length @len at @dst; do nothing if @attr is null. + * Bool pointed to by @mod is set to true if this function changed the data + * at @dst, otherwise it is left as is. + */ +static inline void ethnl_update_binary(void *dst, unsigned int len, + const struct nlattr *attr, bool *mod) +{ + if (!attr) + return; + if (nla_len(attr) < len) + len = nla_len(attr); + if (!memcmp(dst, nla_data(attr), len)) + return; + + memcpy(dst, nla_data(attr), len); + *mod = true; +} + +/** + * ethnl_update_bitfield32() - update u32 value from NLA_BITFIELD32 attribute + * @dst: value to update + * @attr: netlink attribute with new value or null + * @mod: pointer to bool for modification tracking + * + * Update bits in u32 value which are set in attribute's mask to values from + * attribute's value. Do nothing if @attr is null or the value wouldn't change; + * otherwise, set bool pointed to by @mod to true. + */ +static inline void ethnl_update_bitfield32(u32 *dst, const struct nlattr *attr, + bool *mod) +{ + struct nla_bitfield32 change; + u32 newval; + + if (!attr) + return; + change = nla_get_bitfield32(attr); + newval = (*dst & ~change.selector) | (change.value & change.selector); + if (*dst == newval) + return; + + *dst = newval; + *mod = true; +} + +/** + * ethnl_reply_header_size() - total size of reply header + * + * This is an upper estimate so that we do not need to hold RTNL lock longer + * than necessary (to prevent rename between size estimate and composing the + * message).
Accounts only for device ifindex and name as those are the only + attributes ethnl_fill_reply_header() puts into the reply header. + */ +static inline unsigned int ethnl_reply_header_size(void) +{ + return nla_total_size(nla_total_size(sizeof(u32)) + + nla_total_size(IFNAMSIZ)); +} + +/** + * struct ethnl_req_info - base type of request information for GET requests + * @dev: network device the request is for (may be null) + * @flags: request flags common for all request types + * + * This is a common base, additional members may follow after this structure. + */ +struct ethnl_req_info { + struct net_device *dev; + u32 flags; +}; #endif /* _NET_ETHTOOL_NETLINK_H */ -- cgit v1.2.3-71-gd317 From 10b518d4e6dd5390e40f7d8de0f08753c1195a7e Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Fri, 27 Dec 2019 15:55:28 +0100 Subject: ethtool: netlink bitset handling The ethtool netlink code uses a common framework for passing arbitrary length bit sets to allow future extensions. A bitset can be a list (only one bitmap) or can consist of a value/mask pair (used e.g. when a client wants to modify only some bits). A bitset can use one of two formats: verbose (bit by bit) or compact. Verbose format consists of bitset size (number of bits), a list flag and an array of bit nests, telling which bits are part of the list or which bits are in the mask and which of them are to be set. In requests, bits can be identified by index (position) or by name. In replies, the kernel provides both index and name. Verbose format is suitable for "one shot" applications like the standard ethtool command as it avoids the need to either keep bit names (e.g. link modes) in sync with the kernel or to add an extra roundtrip for a string set request (e.g. for private flags). Compact format uses one (list) or two (value/mask) arrays of 32-bit words to store the bitmap(s). It is more suitable for long running applications (ethtool in monitor mode or network management daemons) which can retrieve the names once and then pass only compact bitmaps to save space. Userspace requests can use either format; the ETHTOOL_FLAG_COMPACT_BITSETS flag in the request header tells the kernel which format to use in the reply. Notifications always use the compact format. As some code uses arrays of unsigned long for internal representation and some arrays of u32 (or even a single u32), two sets of parse/compose helpers are introduced. To avoid code duplication, helpers for unsigned long arrays are implemented as wrappers around the helpers for u32 arrays. There are two reasons for this choice: (1) u32 arrays are more frequent in ethtool code and (2) an unsigned long array can always be interpreted as a u32 array on little endian 64-bit and all 32-bit architectures, while we would need special handling for an odd number of u32 words in the opposite direction. Signed-off-by: Michal Kubecek Reviewed-by: Florian Fainelli Signed-off-by: David S.
Miller --- Documentation/networking/ethtool-netlink.rst | 84 +++ include/uapi/linux/ethtool_netlink.h | 35 ++ net/ethtool/Makefile | 2 +- net/ethtool/bitset.c | 735 +++++++++++++++++++++++++++ net/ethtool/bitset.h | 28 + 5 files changed, 883 insertions(+), 1 deletion(-) create mode 100644 net/ethtool/bitset.c create mode 100644 net/ethtool/bitset.h (limited to 'include') diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst index 9448442ad293..7797f1237472 100644 --- a/Documentation/networking/ethtool-netlink.rst +++ b/Documentation/networking/ethtool-netlink.rst @@ -76,6 +76,90 @@ of the flag should be interpreted the way the client expects. A client must not set flags it does not understand. + +Bit sets +======== + +For short bitmaps of (reasonably) fixed length, standard ``NLA_BITFIELD32`` +type is used. For arbitrary length bitmaps, ethtool netlink uses a nested +attribute with contents of one of two forms: compact (two binary bitmaps +representing bit values and mask of affected bits) and bit-by-bit (list of +bits identified by either index or name). + +Verbose (bit-by-bit) bitsets allow sending symbolic names for bits together +with their values which saves a round trip (when the bitset is passed in a +request) or at least a second request (when the bitset is in a reply). This is +useful for one shot applications like traditional ethtool command. On the +other hand, long running applications like ethtool monitor (displaying +notifications) or network management daemons may prefer fetching the names +only once and using compact form to save message size. Notifications from +ethtool netlink interface always use compact form for bitsets. + +A bitset can represent either a value/mask pair (``ETHTOOL_A_BITSET_NOMASK`` +not set) or a single bitmap (``ETHTOOL_A_BITSET_NOMASK`` set). In requests +modifying a bitmap, the former changes the bits set in mask to values set in +value and preserves the rest; the latter sets the bits set in the bitmap and +clears the rest. + +Compact form: nested (bitset) attribute contents: + + ============================ ====== ============================ + ``ETHTOOL_A_BITSET_NOMASK`` flag no mask, only a list + ``ETHTOOL_A_BITSET_SIZE`` u32 number of significant bits + ``ETHTOOL_A_BITSET_VALUE`` binary bitmap of bit values + ``ETHTOOL_A_BITSET_MASK`` binary bitmap of valid bits + ============================ ====== ============================ + +Value and mask must have length at least ``ETHTOOL_A_BITSET_SIZE`` bits +rounded up to a multiple of 32 bits. They consist of 32-bit words in host byte +order, words ordered from least significant to most significant (i.e. the same +way as bitmaps are passed with the ioctl interface). + +For compact form, ``ETHTOOL_A_BITSET_SIZE`` and ``ETHTOOL_A_BITSET_VALUE`` are +mandatory. ``ETHTOOL_A_BITSET_MASK`` attribute is mandatory if +``ETHTOOL_A_BITSET_NOMASK`` is not set (bitset represents a value/mask pair); +if ``ETHTOOL_A_BITSET_NOMASK`` is set, ``ETHTOOL_A_BITSET_MASK`` is not +allowed (bitset represents a single bitmap). + +Kernel bit set length may differ from userspace length if older application is +used on newer kernel or vice versa. If userspace bitmap is longer, an error is +issued only if the request actually tries to set values of some bits not +recognized by kernel.
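To make the compact layout concrete, a sketch of composing such a nest with libmnl (the ``ETHTOOL_A_BITSET_*`` constants come from the uapi header below; the helper name is illustrative):

	#include <stdint.h>
	#include <libmnl/libmnl.h>
	#include <linux/ethtool_netlink.h>

	static void put_compact_bitset(struct nlmsghdr *nlh, uint16_t type,
				       const uint32_t *val, const uint32_t *mask,
				       unsigned int nbits)
	{
		unsigned int nwords = (nbits + 31) / 32;	/* round up to u32 words */
		struct nlattr *nest = mnl_attr_nest_start(nlh, type);

		mnl_attr_put_u32(nlh, ETHTOOL_A_BITSET_SIZE, nbits);
		mnl_attr_put(nlh, ETHTOOL_A_BITSET_VALUE,
			     nwords * sizeof(uint32_t), val);
		if (mask)
			mnl_attr_put(nlh, ETHTOOL_A_BITSET_MASK,
				     nwords * sizeof(uint32_t), mask);
		else	/* no mask: plain list semantics */
			mnl_attr_put(nlh, ETHTOOL_A_BITSET_NOMASK, 0, NULL);
		mnl_attr_nest_end(nlh, nest);
	}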
+ +Bit-by-bit form: nested (bitset) attribute contents: + + +------------------------------------+--------+-----------------------------+ + | ``ETHTOOL_A_BITSET_NOMASK`` | flag | no mask, only a list | + +------------------------------------+--------+-----------------------------+ + | ``ETHTOOL_A_BITSET_SIZE`` | u32 | number of significant bits | + +------------------------------------+--------+-----------------------------+ + | ``ETHTOOL_A_BITSET_BITS`` | nested | array of bits | + +-+----------------------------------+--------+-----------------------------+ + | | ``ETHTOOL_A_BITSET_BITS_BIT+`` | nested | one bit | + +-+-+--------------------------------+--------+-----------------------------+ + | | | ``ETHTOOL_A_BITSET_BIT_INDEX`` | u32 | bit index (0 for LSB) | + +-+-+--------------------------------+--------+-----------------------------+ + | | | ``ETHTOOL_A_BITSET_BIT_NAME`` | string | bit name | + +-+-+--------------------------------+--------+-----------------------------+ + | | | ``ETHTOOL_A_BITSET_BIT_VALUE`` | flag | present if bit is set | + +-+-+--------------------------------+--------+-----------------------------+ + +Bit size is optional for bit-by-bit form. ``ETHTOOL_A_BITSET_BITS`` nest can +only contain ``ETHTOOL_A_BITSET_BITS_BIT`` attributes but there can be an +arbitrary number of them. A bit may be identified by its index or by its +name. When used in requests, listed bits are set to 0 or 1 according to +``ETHTOOL_A_BITSET_BIT_VALUE``, the rest is preserved. A request fails if +index exceeds kernel bit length or if name is not recognized. + +When ``ETHTOOL_A_BITSET_NOMASK`` flag is present, bitset is interpreted as +a simple bitmap. ``ETHTOOL_A_BITSET_BIT_VALUE`` attributes are not used in +such case. Such bitset represents a bitmap with listed bits set and the rest +zero. + +In requests, application can use either form. Form used by kernel in reply is +determined by ``ETHTOOL_FLAG_COMPACT_BITSETS`` flag in flags field of request +header. Semantics of value and mask depends on the attribute. 
+
+
 List of message types
 =====================

diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h
index 82fc3b5f41c9..951203049615 100644
--- a/include/uapi/linux/ethtool_netlink.h
+++ b/include/uapi/linux/ethtool_netlink.h
@@ -50,6 +50,41 @@ enum {
 	ETHTOOL_A_HEADER_MAX = __ETHTOOL_A_HEADER_CNT - 1
 };

+/* bit sets */
+
+enum {
+	ETHTOOL_A_BITSET_BIT_UNSPEC,
+	ETHTOOL_A_BITSET_BIT_INDEX,		/* u32 */
+	ETHTOOL_A_BITSET_BIT_NAME,		/* string */
+	ETHTOOL_A_BITSET_BIT_VALUE,		/* flag */
+
+	/* add new constants above here */
+	__ETHTOOL_A_BITSET_BIT_CNT,
+	ETHTOOL_A_BITSET_BIT_MAX = __ETHTOOL_A_BITSET_BIT_CNT - 1
+};
+
+enum {
+	ETHTOOL_A_BITSET_BITS_UNSPEC,
+	ETHTOOL_A_BITSET_BITS_BIT,		/* nest - _A_BITSET_BIT_* */
+
+	/* add new constants above here */
+	__ETHTOOL_A_BITSET_BITS_CNT,
+	ETHTOOL_A_BITSET_BITS_MAX = __ETHTOOL_A_BITSET_BITS_CNT - 1
+};
+
+enum {
+	ETHTOOL_A_BITSET_UNSPEC,
+	ETHTOOL_A_BITSET_NOMASK,		/* flag */
+	ETHTOOL_A_BITSET_SIZE,			/* u32 */
+	ETHTOOL_A_BITSET_BITS,			/* nest - _A_BITSET_BITS_* */
+	ETHTOOL_A_BITSET_VALUE,			/* binary */
+	ETHTOOL_A_BITSET_MASK,			/* binary */
+
+	/* add new constants above here */
+	__ETHTOOL_A_BITSET_CNT,
+	ETHTOOL_A_BITSET_MAX = __ETHTOOL_A_BITSET_CNT - 1
+};
+
 /* generic netlink info */
 #define ETHTOOL_GENL_NAME "ethtool"
 #define ETHTOOL_GENL_VERSION 1

diff --git a/net/ethtool/Makefile b/net/ethtool/Makefile
index 59d5ee230c29..a7e6c2c85db9 100644
--- a/net/ethtool/Makefile
+++ b/net/ethtool/Makefile
@@ -4,4 +4,4 @@ obj-y += ioctl.o common.o

 obj-$(CONFIG_ETHTOOL_NETLINK)	+= ethtool_nl.o

-ethtool_nl-y	:= netlink.o
+ethtool_nl-y	:= netlink.o bitset.o

diff --git a/net/ethtool/bitset.c b/net/ethtool/bitset.c
new file mode 100644
index 000000000000..fce45dac4205
--- /dev/null
+++ b/net/ethtool/bitset.c
@@ -0,0 +1,735 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/ethtool_netlink.h>
+#include <linux/bitmap.h>
+#include "netlink.h"
+#include "bitset.h"
+
+/* Some bitmaps are internally represented as an array of unsigned long, some
+ * as an array of u32 (some even as single u32 for now). To avoid the need of
+ * wrappers on caller side, we provide two sets of functions: those with "32"
+ * suffix in their names expect u32 based bitmaps, those without it expect
+ * unsigned long bitmaps.
+ */
+
+static u32 ethnl_lower_bits(unsigned int n)
+{
+	return ~(u32)0 >> (32 - n % 32);
+}
+
+static u32 ethnl_upper_bits(unsigned int n)
+{
+	return ~(u32)0 << (n % 32);
+}
+
+/**
+ * ethnl_bitmap32_clear() - Clear u32 based bitmap
+ * @dst:   bitmap to clear
+ * @start: beginning of the interval
+ * @end:   end of the interval
+ * @mod:   set if bitmap was modified
+ *
+ * Clear the bits of a bitmap with indices @start <= i < @end
+ */
+static void ethnl_bitmap32_clear(u32 *dst, unsigned int start, unsigned int end,
+				 bool *mod)
+{
+	unsigned int start_word = start / 32;
+	unsigned int end_word = end / 32;
+	unsigned int i;
+	u32 mask;
+
+	if (end <= start)
+		return;
+
+	if (start % 32) {
+		mask = ethnl_upper_bits(start);
+		if (end_word == start_word) {
+			mask &= ethnl_lower_bits(end);
+			if (dst[start_word] & mask) {
+				dst[start_word] &= ~mask;
+				*mod = true;
+			}
+			return;
+		}
+		if (dst[start_word] & mask) {
+			dst[start_word] &= ~mask;
+			*mod = true;
+		}
+		start_word++;
+	}
+
+	for (i = start_word; i < end_word; i++) {
+		if (dst[i]) {
+			dst[i] = 0;
+			*mod = true;
+		}
+	}
+	if (end % 32) {
+		mask = ethnl_lower_bits(end);
+		if (dst[end_word] & mask) {
+			dst[end_word] &= ~mask;
+			*mod = true;
+		}
+	}
+}
+
+/**
+ * ethnl_bitmap32_not_zero() - Check if any bit is set in an interval
+ * @map:   bitmap to test
+ * @start: beginning of the interval
+ * @end:   end of the interval
+ *
+ * Return: true if there is non-zero bit with index @start <= i < @end,
+ *         false if the whole interval is zero
+ */
+static bool ethnl_bitmap32_not_zero(const u32 *map, unsigned int start,
+				    unsigned int end)
+{
+	unsigned int start_word = start / 32;
+	unsigned int end_word = end / 32;
+	u32 mask;
+
+	if (end <= start)
+		return true;
+
+	if (start % 32) {
+		mask = ethnl_upper_bits(start);
+		if (end_word == start_word) {
+			mask &= ethnl_lower_bits(end);
+			return map[start_word] & mask;
+		}
+		if (map[start_word] & mask)
+			return true;
+		start_word++;
+	}
+
+	if (memchr_inv(map + start_word, '\0',
+		       (end_word - start_word) * sizeof(u32)))
+		return true;
+	if (end % 32 == 0)
+		return false;
+	return map[end_word] & ethnl_lower_bits(end);
+}
+
+/**
+ * ethnl_bitmap32_update() - Modify u32 based bitmap according to value/mask
+ *			     pair
+ * @dst:   bitmap to update
+ * @nbits: bit size of the bitmap
+ * @value: values to set
+ * @mask:  mask of bits to set
+ * @mod:   set to true if bitmap is modified, preserve if not
+ *
+ * Set bits in @dst bitmap which are set in @mask to values from @value, leave
+ * the rest untouched. If destination bitmap was modified, set @mod to true,
+ * leave as it is if not.
+ */
+static void ethnl_bitmap32_update(u32 *dst, unsigned int nbits,
+				  const u32 *value, const u32 *mask, bool *mod)
+{
+	while (nbits > 0) {
+		u32 real_mask = mask ? *mask : ~(u32)0;
+		u32 new_value;
+
+		if (nbits < 32)
+			real_mask &= ethnl_lower_bits(nbits);
+		new_value = (*dst & ~real_mask) | (*value & real_mask);
+		if (new_value != *dst) {
+			*dst = new_value;
+			*mod = true;
+		}
+
+		if (nbits <= 32)
+			break;
+		dst++;
+		nbits -= 32;
+		value++;
+		if (mask)
+			mask++;
+	}
+}
+
+static bool ethnl_bitmap32_test_bit(const u32 *map, unsigned int index)
+{
+	return map[index / 32] & (1U << (index % 32));
+}
+
+/**
+ * ethnl_bitset32_size() - Calculate size of bitset nested attribute
+ * @val:     value bitmap (u32 based)
+ * @mask:    mask bitmap (u32 based, optional)
+ * @nbits:   bit length of the bitset
+ * @names:   array of bit names (optional)
+ * @compact: assume compact format for output
+ *
+ * Estimate length of the netlink attribute composed by a later call to
+ * ethnl_put_bitset32() with the same arguments.
+ *
+ * Return: negative error code or attribute length estimate
+ */
+int ethnl_bitset32_size(const u32 *val, const u32 *mask, unsigned int nbits,
+			ethnl_string_array_t names, bool compact)
+{
+	unsigned int len = 0;
+
+	/* list flag */
+	if (!mask)
+		len += nla_total_size(sizeof(u32));
+	/* size */
+	len += nla_total_size(sizeof(u32));
+
+	if (compact) {
+		unsigned int nwords = DIV_ROUND_UP(nbits, 32);
+
+		/* value, mask */
+		len += (mask ? 2 : 1) * nla_total_size(nwords * sizeof(u32));
+	} else {
+		unsigned int bits_len = 0;
+		unsigned int bit_len, i;
+
+		for (i = 0; i < nbits; i++) {
+			const char *name = names ? names[i] : NULL;
+
+			if (!ethnl_bitmap32_test_bit(mask ?: val, i))
+				continue;
+			/* index */
+			bit_len = nla_total_size(sizeof(u32));
+			/* name */
+			if (name)
+				bit_len += ethnl_strz_size(name);
+			/* value */
+			if (mask && ethnl_bitmap32_test_bit(val, i))
+				bit_len += nla_total_size(0);
+
+			/* bit nest */
+			bits_len += nla_total_size(bit_len);
+		}
+		/* bits nest */
+		len += nla_total_size(bits_len);
+	}
+
+	/* outermost nest */
+	return nla_total_size(len);
+}
+
+/**
+ * ethnl_put_bitset32() - Put a bitset nest into a message
+ * @skb:      skb with the message
+ * @attrtype: attribute type for the bitset nest
+ * @val:      value bitmap (u32 based)
+ * @mask:     mask bitmap (u32 based, optional)
+ * @nbits:    bit length of the bitset
+ * @names:    array of bit names (optional)
+ * @compact:  use compact format for the output
+ *
+ * Compose a nested attribute representing a bitset. If @mask is null, a simple
+ * bitmap (bit list) is created; if @mask is provided, represent a value/mask
+ * pair. Bit names are only used in verbose mode and when provided by caller.
+ * + * Return: 0 on success, negative error value on error + */ +int ethnl_put_bitset32(struct sk_buff *skb, int attrtype, const u32 *val, + const u32 *mask, unsigned int nbits, + ethnl_string_array_t names, bool compact) +{ + struct nlattr *nest; + struct nlattr *attr; + + nest = nla_nest_start(skb, attrtype); + if (!nest) + return -EMSGSIZE; + + if (!mask && nla_put_flag(skb, ETHTOOL_A_BITSET_NOMASK)) + goto nla_put_failure; + if (nla_put_u32(skb, ETHTOOL_A_BITSET_SIZE, nbits)) + goto nla_put_failure; + if (compact) { + unsigned int nwords = DIV_ROUND_UP(nbits, 32); + unsigned int nbytes = nwords * sizeof(u32); + u32 *dst; + + attr = nla_reserve(skb, ETHTOOL_A_BITSET_VALUE, nbytes); + if (!attr) + goto nla_put_failure; + dst = nla_data(attr); + memcpy(dst, val, nbytes); + if (nbits % 32) + dst[nwords - 1] &= ethnl_lower_bits(nbits); + + if (mask) { + attr = nla_reserve(skb, ETHTOOL_A_BITSET_MASK, nbytes); + if (!attr) + goto nla_put_failure; + dst = nla_data(attr); + memcpy(dst, mask, nbytes); + if (nbits % 32) + dst[nwords - 1] &= ethnl_lower_bits(nbits); + } + } else { + struct nlattr *bits; + unsigned int i; + + bits = nla_nest_start(skb, ETHTOOL_A_BITSET_BITS); + if (!bits) + goto nla_put_failure; + for (i = 0; i < nbits; i++) { + const char *name = names ? names[i] : NULL; + + if (!ethnl_bitmap32_test_bit(mask ?: val, i)) + continue; + attr = nla_nest_start(skb, ETHTOOL_A_BITSET_BITS_BIT); + if (!attr) + goto nla_put_failure; + if (nla_put_u32(skb, ETHTOOL_A_BITSET_BIT_INDEX, i)) + goto nla_put_failure; + if (name && + ethnl_put_strz(skb, ETHTOOL_A_BITSET_BIT_NAME, name)) + goto nla_put_failure; + if (mask && ethnl_bitmap32_test_bit(val, i) && + nla_put_flag(skb, ETHTOOL_A_BITSET_BIT_VALUE)) + goto nla_put_failure; + nla_nest_end(skb, attr); + } + nla_nest_end(skb, bits); + } + + nla_nest_end(skb, nest); + return 0; + +nla_put_failure: + nla_nest_cancel(skb, nest); + return -EMSGSIZE; +} + +static const struct nla_policy bitset_policy[ETHTOOL_A_BITSET_MAX + 1] = { + [ETHTOOL_A_BITSET_UNSPEC] = { .type = NLA_REJECT }, + [ETHTOOL_A_BITSET_NOMASK] = { .type = NLA_FLAG }, + [ETHTOOL_A_BITSET_SIZE] = { .type = NLA_U32 }, + [ETHTOOL_A_BITSET_BITS] = { .type = NLA_NESTED }, + [ETHTOOL_A_BITSET_VALUE] = { .type = NLA_BINARY }, + [ETHTOOL_A_BITSET_MASK] = { .type = NLA_BINARY }, +}; + +static const struct nla_policy bit_policy[ETHTOOL_A_BITSET_BIT_MAX + 1] = { + [ETHTOOL_A_BITSET_BIT_UNSPEC] = { .type = NLA_REJECT }, + [ETHTOOL_A_BITSET_BIT_INDEX] = { .type = NLA_U32 }, + [ETHTOOL_A_BITSET_BIT_NAME] = { .type = NLA_NUL_STRING }, + [ETHTOOL_A_BITSET_BIT_VALUE] = { .type = NLA_FLAG }, +}; + +/** + * ethnl_bitset_is_compact() - check if bitset attribute represents a compact + * bitset + * @bitset: nested attribute representing a bitset + * @compact: pointer for return value + * + * Return: 0 on success, negative error code on failure + */ +int ethnl_bitset_is_compact(const struct nlattr *bitset, bool *compact) +{ + struct nlattr *tb[ETHTOOL_A_BITSET_MAX + 1]; + int ret; + + ret = nla_parse_nested(tb, ETHTOOL_A_BITSET_MAX, bitset, + bitset_policy, NULL); + if (ret < 0) + return ret; + + if (tb[ETHTOOL_A_BITSET_BITS]) { + if (tb[ETHTOOL_A_BITSET_VALUE] || tb[ETHTOOL_A_BITSET_MASK]) + return -EINVAL; + *compact = false; + return 0; + } + if (!tb[ETHTOOL_A_BITSET_SIZE] || !tb[ETHTOOL_A_BITSET_VALUE]) + return -EINVAL; + + *compact = true; + return 0; +} + +/** + * ethnl_name_to_idx() - look up string index for a name + * @names: array of ETH_GSTRING_LEN sized strings + * @n_names: number of 
strings in the array + * @name: name to look up + * + * Return: index of the string if found, -ENOENT if not found + */ +static int ethnl_name_to_idx(ethnl_string_array_t names, unsigned int n_names, + const char *name) +{ + unsigned int i; + + if (!names) + return -ENOENT; + + for (i = 0; i < n_names; i++) { + /* names[i] may not be null terminated */ + if (!strncmp(names[i], name, ETH_GSTRING_LEN) && + strlen(name) <= ETH_GSTRING_LEN) + return i; + } + + return -ENOENT; +} + +static int ethnl_parse_bit(unsigned int *index, bool *val, unsigned int nbits, + const struct nlattr *bit_attr, bool no_mask, + ethnl_string_array_t names, + struct netlink_ext_ack *extack) +{ + struct nlattr *tb[ETHTOOL_A_BITSET_BIT_MAX + 1]; + int ret, idx; + + ret = nla_parse_nested(tb, ETHTOOL_A_BITSET_BIT_MAX, bit_attr, + bit_policy, extack); + if (ret < 0) + return ret; + + if (tb[ETHTOOL_A_BITSET_BIT_INDEX]) { + const char *name; + + idx = nla_get_u32(tb[ETHTOOL_A_BITSET_BIT_INDEX]); + if (idx >= nbits) { + NL_SET_ERR_MSG_ATTR(extack, + tb[ETHTOOL_A_BITSET_BIT_INDEX], + "bit index too high"); + return -EOPNOTSUPP; + } + name = names ? names[idx] : NULL; + if (tb[ETHTOOL_A_BITSET_BIT_NAME] && name && + strncmp(nla_data(tb[ETHTOOL_A_BITSET_BIT_NAME]), name, + nla_len(tb[ETHTOOL_A_BITSET_BIT_NAME]))) { + NL_SET_ERR_MSG_ATTR(extack, bit_attr, + "bit index and name mismatch"); + return -EINVAL; + } + } else if (tb[ETHTOOL_A_BITSET_BIT_NAME]) { + idx = ethnl_name_to_idx(names, nbits, + nla_data(tb[ETHTOOL_A_BITSET_BIT_NAME])); + if (idx < 0) { + NL_SET_ERR_MSG_ATTR(extack, + tb[ETHTOOL_A_BITSET_BIT_NAME], + "bit name not found"); + return -EOPNOTSUPP; + } + } else { + NL_SET_ERR_MSG_ATTR(extack, bit_attr, + "neither bit index nor name specified"); + return -EINVAL; + } + + *index = idx; + *val = no_mask || tb[ETHTOOL_A_BITSET_BIT_VALUE]; + return 0; +} + +static int +ethnl_update_bitset32_verbose(u32 *bitmap, unsigned int nbits, + const struct nlattr *attr, struct nlattr **tb, + ethnl_string_array_t names, + struct netlink_ext_ack *extack, bool *mod) +{ + struct nlattr *bit_attr; + bool no_mask; + int rem; + int ret; + + if (tb[ETHTOOL_A_BITSET_VALUE]) { + NL_SET_ERR_MSG_ATTR(extack, tb[ETHTOOL_A_BITSET_VALUE], + "value only allowed in compact bitset"); + return -EINVAL; + } + if (tb[ETHTOOL_A_BITSET_MASK]) { + NL_SET_ERR_MSG_ATTR(extack, tb[ETHTOOL_A_BITSET_MASK], + "mask only allowed in compact bitset"); + return -EINVAL; + } + no_mask = tb[ETHTOOL_A_BITSET_NOMASK]; + + nla_for_each_nested(bit_attr, tb[ETHTOOL_A_BITSET_BITS], rem) { + bool old_val, new_val; + unsigned int idx; + + if (nla_type(bit_attr) != ETHTOOL_A_BITSET_BITS_BIT) { + NL_SET_ERR_MSG_ATTR(extack, bit_attr, + "only ETHTOOL_A_BITSET_BITS_BIT allowed in ETHTOOL_A_BITSET_BITS"); + return -EINVAL; + } + ret = ethnl_parse_bit(&idx, &new_val, nbits, bit_attr, no_mask, + names, extack); + if (ret < 0) + return ret; + old_val = bitmap[idx / 32] & ((u32)1 << (idx % 32)); + if (new_val != old_val) { + if (new_val) + bitmap[idx / 32] |= ((u32)1 << (idx % 32)); + else + bitmap[idx / 32] &= ~((u32)1 << (idx % 32)); + *mod = true; + } + } + + return 0; +} + +static int ethnl_compact_sanity_checks(unsigned int nbits, + const struct nlattr *nest, + struct nlattr **tb, + struct netlink_ext_ack *extack) +{ + bool no_mask = tb[ETHTOOL_A_BITSET_NOMASK]; + unsigned int attr_nbits, attr_nwords; + const struct nlattr *test_attr; + + if (no_mask && tb[ETHTOOL_A_BITSET_MASK]) { + NL_SET_ERR_MSG_ATTR(extack, tb[ETHTOOL_A_BITSET_MASK], + "mask not allowed in list bitset"); 
+		return -EINVAL;
+	}
+	if (!tb[ETHTOOL_A_BITSET_SIZE]) {
+		NL_SET_ERR_MSG_ATTR(extack, nest,
+				    "missing size in compact bitset");
+		return -EINVAL;
+	}
+	if (!tb[ETHTOOL_A_BITSET_VALUE]) {
+		NL_SET_ERR_MSG_ATTR(extack, nest,
+				    "missing value in compact bitset");
+		return -EINVAL;
+	}
+	if (!no_mask && !tb[ETHTOOL_A_BITSET_MASK]) {
+		NL_SET_ERR_MSG_ATTR(extack, nest,
+				    "missing mask in compact nonlist bitset");
+		return -EINVAL;
+	}
+
+	attr_nbits = nla_get_u32(tb[ETHTOOL_A_BITSET_SIZE]);
+	attr_nwords = DIV_ROUND_UP(attr_nbits, 32);
+	if (nla_len(tb[ETHTOOL_A_BITSET_VALUE]) != attr_nwords * sizeof(u32)) {
+		NL_SET_ERR_MSG_ATTR(extack, tb[ETHTOOL_A_BITSET_VALUE],
+				    "bitset value length does not match size");
+		return -EINVAL;
+	}
+	if (tb[ETHTOOL_A_BITSET_MASK] &&
+	    nla_len(tb[ETHTOOL_A_BITSET_MASK]) != attr_nwords * sizeof(u32)) {
+		NL_SET_ERR_MSG_ATTR(extack, tb[ETHTOOL_A_BITSET_MASK],
+				    "bitset mask length does not match size");
+		return -EINVAL;
+	}
+	if (attr_nbits <= nbits)
+		return 0;
+
+	test_attr = no_mask ? tb[ETHTOOL_A_BITSET_VALUE] :
+			      tb[ETHTOOL_A_BITSET_MASK];
+	if (ethnl_bitmap32_not_zero(nla_data(test_attr), nbits, attr_nbits)) {
+		NL_SET_ERR_MSG_ATTR(extack, test_attr,
+				    "cannot modify bits past kernel bitset size");
+		return -EINVAL;
+	}
+	return 0;
+}
+
+/**
+ * ethnl_update_bitset32() - Apply a bitset nest to a u32 based bitmap
+ * @bitmap: bitmap to update
+ * @nbits:  size of the updated bitmap in bits
+ * @attr:   nest attribute to parse and apply
+ * @names:  array of bit names; may be null for compact format
+ * @extack: extack for error reporting
+ * @mod:    set this to true if bitmap is modified, leave as it is if not
+ *
+ * Apply a bitset nested attribute to a bitmap. If the attribute represents
+ * a bit list, @bitmap is set to its contents; otherwise, bits in mask are
+ * set to values from value. Bitmaps in the attribute may be longer than
+ * @nbits but the message must not request modifying any bits past @nbits.
+ *
+ * Return: negative error code on failure, 0 on success
+ */
+int ethnl_update_bitset32(u32 *bitmap, unsigned int nbits,
+			  const struct nlattr *attr, ethnl_string_array_t names,
+			  struct netlink_ext_ack *extack, bool *mod)
+{
+	struct nlattr *tb[ETHTOOL_A_BITSET_MAX + 1];
+	unsigned int change_bits;
+	bool no_mask;
+	int ret;
+
+	if (!attr)
+		return 0;
+	ret = nla_parse_nested(tb, ETHTOOL_A_BITSET_MAX, attr, bitset_policy,
+			       extack);
+	if (ret < 0)
+		return ret;
+
+	if (tb[ETHTOOL_A_BITSET_BITS])
+		return ethnl_update_bitset32_verbose(bitmap, nbits, attr, tb,
+						     names, extack, mod);
+	ret = ethnl_compact_sanity_checks(nbits, attr, tb, extack);
+	if (ret < 0)
+		return ret;
+
+	no_mask = tb[ETHTOOL_A_BITSET_NOMASK];
+	change_bits = min_t(unsigned int,
+			    nla_get_u32(tb[ETHTOOL_A_BITSET_SIZE]), nbits);
+	ethnl_bitmap32_update(bitmap, change_bits,
+			      nla_data(tb[ETHTOOL_A_BITSET_VALUE]),
+			      no_mask ? NULL :
+					nla_data(tb[ETHTOOL_A_BITSET_MASK]),
+			      mod);
+	if (no_mask && change_bits < nbits)
+		ethnl_bitmap32_clear(bitmap, change_bits, nbits, mod);
+
+	return 0;
+}
+
+#if BITS_PER_LONG == 64 && defined(__BIG_ENDIAN)
+
+/* 64-bit big endian architectures are the only case where u32 based bitmaps
+ * and unsigned long based bitmaps have a different memory layout, so that we
+ * cannot simply cast the latter to the former and need actual wrappers
+ * converting the latter to the former.
+ * + * To reduce the number of slab allocations, the wrappers use fixed size local + * variables for bitmaps up to ETHNL_SMALL_BITMAP_BITS bits which is the + * majority of bitmaps used by ethtool. + */ +#define ETHNL_SMALL_BITMAP_BITS 128 +#define ETHNL_SMALL_BITMAP_WORDS DIV_ROUND_UP(ETHNL_SMALL_BITMAP_BITS, 32) + +int ethnl_bitset_size(const unsigned long *val, const unsigned long *mask, + unsigned int nbits, ethnl_string_array_t names, + bool compact) +{ + u32 small_mask32[ETHNL_SMALL_BITMAP_WORDS]; + u32 small_val32[ETHNL_SMALL_BITMAP_WORDS]; + u32 *mask32; + u32 *val32; + int ret; + + if (nbits > ETHNL_SMALL_BITMAP_BITS) { + unsigned int nwords = DIV_ROUND_UP(nbits, 32); + + val32 = kmalloc_array(2 * nwords, sizeof(u32), GFP_KERNEL); + if (!val32) + return -ENOMEM; + mask32 = val32 + nwords; + } else { + val32 = small_val32; + mask32 = small_mask32; + } + + bitmap_to_arr32(val32, val, nbits); + if (mask) + bitmap_to_arr32(mask32, mask, nbits); + else + mask32 = NULL; + ret = ethnl_bitset32_size(val32, mask32, nbits, names, compact); + + if (nbits > ETHNL_SMALL_BITMAP_BITS) + kfree(val32); + + return ret; +} + +int ethnl_put_bitset(struct sk_buff *skb, int attrtype, + const unsigned long *val, const unsigned long *mask, + unsigned int nbits, ethnl_string_array_t names, + bool compact) +{ + u32 small_mask32[ETHNL_SMALL_BITMAP_WORDS]; + u32 small_val32[ETHNL_SMALL_BITMAP_WORDS]; + u32 *mask32; + u32 *val32; + int ret; + + if (nbits > ETHNL_SMALL_BITMAP_BITS) { + unsigned int nwords = DIV_ROUND_UP(nbits, 32); + + val32 = kmalloc_array(2 * nwords, sizeof(u32), GFP_KERNEL); + if (!val32) + return -ENOMEM; + mask32 = val32 + nwords; + } else { + val32 = small_val32; + mask32 = small_mask32; + } + + bitmap_to_arr32(val32, val, nbits); + if (mask) + bitmap_to_arr32(mask32, mask, nbits); + else + mask32 = NULL; + ret = ethnl_put_bitset32(skb, attrtype, val32, mask32, nbits, names, + compact); + + if (nbits > ETHNL_SMALL_BITMAP_BITS) + kfree(val32); + + return ret; +} + +int ethnl_update_bitset(unsigned long *bitmap, unsigned int nbits, + const struct nlattr *attr, ethnl_string_array_t names, + struct netlink_ext_ack *extack, bool *mod) +{ + u32 small_bitmap32[ETHNL_SMALL_BITMAP_WORDS]; + u32 *bitmap32 = small_bitmap32; + bool u32_mod = false; + int ret; + + if (nbits > ETHNL_SMALL_BITMAP_BITS) { + unsigned int dst_words = DIV_ROUND_UP(nbits, 32); + + bitmap32 = kmalloc_array(dst_words, sizeof(u32), GFP_KERNEL); + if (!bitmap32) + return -ENOMEM; + } + + bitmap_to_arr32(bitmap32, bitmap, nbits); + ret = ethnl_update_bitset32(bitmap32, nbits, attr, names, extack, + &u32_mod); + if (u32_mod) { + bitmap_from_arr32(bitmap, bitmap32, nbits); + *mod = true; + } + + if (nbits > ETHNL_SMALL_BITMAP_BITS) + kfree(bitmap32); + + return ret; +} + +#else + +/* On little endian 64-bit and all 32-bit architectures, an unsigned long + * based bitmap can be interpreted as u32 based one using a simple cast. 
+ */ + +int ethnl_bitset_size(const unsigned long *val, const unsigned long *mask, + unsigned int nbits, ethnl_string_array_t names, + bool compact) +{ + return ethnl_bitset32_size((const u32 *)val, (const u32 *)mask, nbits, + names, compact); +} + +int ethnl_put_bitset(struct sk_buff *skb, int attrtype, + const unsigned long *val, const unsigned long *mask, + unsigned int nbits, ethnl_string_array_t names, + bool compact) +{ + return ethnl_put_bitset32(skb, attrtype, (const u32 *)val, + (const u32 *)mask, nbits, names, compact); +} + +int ethnl_update_bitset(unsigned long *bitmap, unsigned int nbits, + const struct nlattr *attr, ethnl_string_array_t names, + struct netlink_ext_ack *extack, bool *mod) +{ + return ethnl_update_bitset32((u32 *)bitmap, nbits, attr, names, extack, + mod); +} + +#endif /* BITS_PER_LONG == 64 && defined(__BIG_ENDIAN) */ diff --git a/net/ethtool/bitset.h b/net/ethtool/bitset.h new file mode 100644 index 000000000000..b8247e34109d --- /dev/null +++ b/net/ethtool/bitset.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +#ifndef _NET_ETHTOOL_BITSET_H +#define _NET_ETHTOOL_BITSET_H + +typedef const char (*const ethnl_string_array_t)[ETH_GSTRING_LEN]; + +int ethnl_bitset_is_compact(const struct nlattr *bitset, bool *compact); +int ethnl_bitset_size(const unsigned long *val, const unsigned long *mask, + unsigned int nbits, ethnl_string_array_t names, + bool compact); +int ethnl_bitset32_size(const u32 *val, const u32 *mask, unsigned int nbits, + ethnl_string_array_t names, bool compact); +int ethnl_put_bitset(struct sk_buff *skb, int attrtype, + const unsigned long *val, const unsigned long *mask, + unsigned int nbits, ethnl_string_array_t names, + bool compact); +int ethnl_put_bitset32(struct sk_buff *skb, int attrtype, const u32 *val, + const u32 *mask, unsigned int nbits, + ethnl_string_array_t names, bool compact); +int ethnl_update_bitset(unsigned long *bitmap, unsigned int nbits, + const struct nlattr *attr, ethnl_string_array_t names, + struct netlink_ext_ack *extack, bool *mod); +int ethnl_update_bitset32(u32 *bitmap, unsigned int nbits, + const struct nlattr *attr, ethnl_string_array_t names, + struct netlink_ext_ack *extack, bool *mod); + +#endif /* _NET_ETHTOOL_BITSET_H */ -- cgit v1.2.3-71-gd317 From 6b08d6c146f4c5ed451c45339c10feb06d619db2 Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Fri, 27 Dec 2019 15:55:33 +0100 Subject: ethtool: support for netlink notifications Add infrastructure for ethtool netlink notifications. There is only one multicast group "monitor" which is used to notify userspace about changes and actions performed. Notification messages (types using suffix _NTF) share the format with replies to GET requests. Notifications are supposed to be broadcasted on every configuration change, whether it is done using the netlink interface or ioctl one. Netlink SET requests only trigger a notification if some data is actually changed. To trigger an ethtool notification, both ethtool netlink and external code use ethtool_notify() helper. This helper requires RTNL to be held and may sleep. Handlers sending messages for specific notification message types are registered in ethnl_notify_handlers array. As notifications can be triggered from other code, ethnl_ok flag is used to prevent an attempt to send notification before genetlink family is registered. Signed-off-by: Michal Kubecek Reviewed-by: Florian Fainelli Signed-off-by: David S. 
Miller --- include/linux/ethtool_netlink.h | 5 +++++ include/linux/netdevice.h | 9 +++++++++ include/uapi/linux/ethtool_netlink.h | 2 ++ net/ethtool/netlink.c | 32 ++++++++++++++++++++++++++++++++ 4 files changed, 48 insertions(+) (limited to 'include') diff --git a/include/linux/ethtool_netlink.h b/include/linux/ethtool_netlink.h index f27e92b5f344..c98f6852c8eb 100644 --- a/include/linux/ethtool_netlink.h +++ b/include/linux/ethtool_netlink.h @@ -5,5 +5,10 @@ #include #include +#include + +enum ethtool_multicast_groups { + ETHNL_MCGRP_MONITOR, +}; #endif /* _LINUX_ETHTOOL_NETLINK_H_ */ diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 469a297b58c0..f007155ae8f4 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -4393,6 +4393,15 @@ struct netdev_notifier_bonding_info { void netdev_bonding_info_change(struct net_device *dev, struct netdev_bonding_info *bonding_info); +#if IS_ENABLED(CONFIG_ETHTOOL_NETLINK) +void ethtool_notify(struct net_device *dev, unsigned int cmd, const void *data); +#else +static inline void ethtool_notify(struct net_device *dev, unsigned int cmd, + const void *data) +{ +} +#endif + static inline struct sk_buff *skb_gso_segment(struct sk_buff *skb, netdev_features_t features) { diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index 951203049615..d530ccb6f7e6 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -89,4 +89,6 @@ enum { #define ETHTOOL_GENL_NAME "ethtool" #define ETHTOOL_GENL_VERSION 1 +#define ETHTOOL_MCGRP_MONITOR_NAME "monitor" + #endif /* _UAPI_LINUX_ETHTOOL_NETLINK_H_ */ diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index aef882e0c3f5..c0f25c8f3565 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -6,6 +6,8 @@ static struct genl_family ethtool_genl_family; +static bool ethnl_ok __read_mostly; + static const struct nla_policy ethnl_header_policy[ETHTOOL_A_HEADER_MAX + 1] = { [ETHTOOL_A_HEADER_UNSPEC] = { .type = NLA_REJECT }, [ETHTOOL_A_HEADER_DEV_INDEX] = { .type = NLA_U32 }, @@ -169,11 +171,38 @@ err: return NULL; } +/* notifications */ + +typedef void (*ethnl_notify_handler_t)(struct net_device *dev, unsigned int cmd, + const void *data); + +static const ethnl_notify_handler_t ethnl_notify_handlers[] = { +}; + +void ethtool_notify(struct net_device *dev, unsigned int cmd, const void *data) +{ + if (unlikely(!ethnl_ok)) + return; + ASSERT_RTNL(); + + if (likely(cmd < ARRAY_SIZE(ethnl_notify_handlers) && + ethnl_notify_handlers[cmd])) + ethnl_notify_handlers[cmd](dev, cmd, data); + else + WARN_ONCE(1, "notification %u not implemented (dev=%s)\n", + cmd, netdev_name(dev)); +} +EXPORT_SYMBOL(ethtool_notify); + /* genetlink setup */ static const struct genl_ops ethtool_genl_ops[] = { }; +static const struct genl_multicast_group ethtool_nl_mcgrps[] = { + [ETHNL_MCGRP_MONITOR] = { .name = ETHTOOL_MCGRP_MONITOR_NAME }, +}; + static struct genl_family ethtool_genl_family = { .name = ETHTOOL_GENL_NAME, .version = ETHTOOL_GENL_VERSION, @@ -181,6 +210,8 @@ static struct genl_family ethtool_genl_family = { .parallel_ops = true, .ops = ethtool_genl_ops, .n_ops = ARRAY_SIZE(ethtool_genl_ops), + .mcgrps = ethtool_nl_mcgrps, + .n_mcgrps = ARRAY_SIZE(ethtool_nl_mcgrps), }; /* module setup */ @@ -192,6 +223,7 @@ static int __init ethnl_init(void) ret = genl_register_family(ðtool_genl_family); if (WARN(ret < 0, "ethtool: genetlink family registration failed")) return ret; + ethnl_ok = true; return 0; } -- cgit 
v1.2.3-71-gd317


From 71921690f9745fef60a2bad425f30adf8cdc9da0 Mon Sep 17 00:00:00 2001
From: Michal Kubecek
Date: Fri, 27 Dec 2019 15:55:43 +0100
Subject: ethtool: provide string sets with STRSET_GET request

Requests the contents of one or more string sets, i.e. indexed arrays of
strings; this information is provided by the ETHTOOL_GSSET_INFO and
ETHTOOL_GSTRINGS commands of the ioctl interface. Unlike the ioctl interface,
all information can be retrieved with one request and multiple string sets
can be requested at once.

There are three types of requests:

  - no NLM_F_DUMP, no device: get "global" stringsets
  - no NLM_F_DUMP, with device: get string sets related to the device
  - NLM_F_DUMP, no device: get device related string sets for all devices

A client can request either all string sets of a given type (global or device
related) or only specific sets. With the ETHTOOL_A_STRSET_COUNTS_ONLY flag
set, only set sizes (numbers of strings) are returned.

Signed-off-by: Michal Kubecek
Reviewed-by: Florian Fainelli
Signed-off-by: David S. Miller
---
 Documentation/networking/ethtool-netlink.rst |  75 ++++-
 include/uapi/linux/ethtool.h                 |   3 +
 include/uapi/linux/ethtool_netlink.h         |  56 ++++
 net/ethtool/Makefile                         |   2 +-
 net/ethtool/netlink.c                        |   8 +
 net/ethtool/netlink.h                        |   4 +
 net/ethtool/strset.c                         | 425 +++++++++++++++++++++++++++
 7 files changed, 570 insertions(+), 3 deletions(-)
 create mode 100644 net/ethtool/strset.c
(limited to 'include')

diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst
index 7797f1237472..3912cb0eb9c6 100644
--- a/Documentation/networking/ethtool-netlink.rst
+++ b/Documentation/networking/ethtool-netlink.rst
@@ -176,6 +176,18 @@ according to message purpose:
   ``_NTF``	kernel notification
   ============== ======================================

+Userspace to kernel:
+
+  ===================================== ================================
+  ``ETHTOOL_MSG_STRSET_GET``            get string set
+  ===================================== ================================
+
+Kernel to userspace:
+
+  ===================================== ================================
+  ``ETHTOOL_MSG_STRSET_GET_REPLY``      string set contents
+  ===================================== ================================
+
 ``GET`` requests are sent by userspace applications to retrieve device
 information. They usually do not contain any message specific attributes.
 Kernel replies with corresponding "GET_REPLY" message. For most types, ``GET``
@@ -207,6 +219,65 @@ an ``ACT_REPLY`` message. Performing an action also triggers a notification

 Later sections describe the format and semantics of these messages.

+STRSET_GET
+==========
+
+Requests contents of a string set as provided by ioctl commands
+``ETHTOOL_GSSET_INFO`` and ``ETHTOOL_GSTRINGS``. String sets are not user
+writeable, so the corresponding ``STRSET_SET`` message is only used in
+kernel replies. There are two types of string sets: global (independent of
+a device, e.g. device feature names) and device specific (e.g. device private
+flags).
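[Editor's note: the request types above map onto a small bitmask in the
implementation later in this patch (strset_req_info::req_ids in
net/ethtool/strset.c). A rough sketch of that encoding, with hypothetical
names, assuming set ids stay below 32, which the kernel code enforces with a
BUILD_BUG_ON():

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t req_ids;    /* one bit per requested ETH_SS_* id */

    static void request_set(unsigned int ss_id)
    {
            req_ids |= 1U << ss_id;        /* e.g. ss_id = ETH_SS_STATS */
    }

    static bool is_requested(unsigned int ss_id)
    {
            /* an empty mask means "all sets of the matching type" */
            return !req_ids || (req_ids & (1U << ss_id));
    }
]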
+
+Request contents:
+
+  +---------------------------------------+--------+------------------------+
+  | ``ETHTOOL_A_STRSET_HEADER``           | nested | request header         |
+  +---------------------------------------+--------+------------------------+
+  | ``ETHTOOL_A_STRSET_STRINGSETS``       | nested | string set to request  |
+  +-+-------------------------------------+--------+------------------------+
+  | | ``ETHTOOL_A_STRINGSETS_STRINGSET+`` | nested | one string set         |
+  +-+-+-----------------------------------+--------+------------------------+
+  | | | ``ETHTOOL_A_STRINGSET_ID``        | u32    | set id                 |
+  +-+-+-----------------------------------+--------+------------------------+
+
+Kernel response contents:
+
+  +---------------------------------------+--------+-----------------------+
+  | ``ETHTOOL_A_STRSET_HEADER``           | nested | reply header          |
+  +---------------------------------------+--------+-----------------------+
+  | ``ETHTOOL_A_STRSET_STRINGSETS``       | nested | array of string sets  |
+  +-+-------------------------------------+--------+-----------------------+
+  | | ``ETHTOOL_A_STRINGSETS_STRINGSET+`` | nested | one string set        |
+  +-+-+-----------------------------------+--------+-----------------------+
+  | | | ``ETHTOOL_A_STRINGSET_ID``        | u32    | set id                |
+  +-+-+-----------------------------------+--------+-----------------------+
+  | | | ``ETHTOOL_A_STRINGSET_COUNT``     | u32    | number of strings     |
+  +-+-+-----------------------------------+--------+-----------------------+
+  | | | ``ETHTOOL_A_STRINGSET_STRINGS``   | nested | array of strings      |
+  +-+-+-+---------------------------------+--------+-----------------------+
+  | | | | ``ETHTOOL_A_STRINGS_STRING+``   | nested | one string            |
+  +-+-+-+-+-------------------------------+--------+-----------------------+
+  | | | | | ``ETHTOOL_A_STRING_INDEX``    | u32    | string index          |
+  +-+-+-+-+-------------------------------+--------+-----------------------+
+  | | | | | ``ETHTOOL_A_STRING_VALUE``    | string | string value          |
+  +-+-+-+-+-------------------------------+--------+-----------------------+
+  | ``ETHTOOL_A_STRSET_COUNTS_ONLY``      | flag   | return only counts    |
+  +---------------------------------------+--------+-----------------------+
+
+Device identification in the request header is optional. Depending on its
+presence and the ``NLM_F_DUMP`` flag, there are three types of ``STRSET_GET``
+requests:
+
+  - no ``NLM_F_DUMP``, no device: get "global" stringsets
+  - no ``NLM_F_DUMP``, with device: get string sets related to the device
+  - ``NLM_F_DUMP``, no device: get device related string sets for all devices
+
+If there is no ``ETHTOOL_A_STRSET_STRINGSETS`` array, all string sets of the
+requested type are returned, otherwise only those specified in the request.
+Flag ``ETHTOOL_A_STRSET_COUNTS_ONLY`` tells the kernel to return only the
+string counts of the sets, not the actual strings.
+
+
 Request translation
 ===================

@@ -242,7 +313,7 @@ have their netlink replacement yet.
   ``ETHTOOL_GSG``                     n/a
   ``ETHTOOL_SSG``                     n/a
   ``ETHTOOL_TEST``                    n/a
-  ``ETHTOOL_GSTRINGS``                n/a
+  ``ETHTOOL_GSTRINGS``                ``ETHTOOL_MSG_STRSET_GET``
   ``ETHTOOL_PHYS_ID``                 n/a
   ``ETHTOOL_GSTATS``                  n/a
   ``ETHTOOL_GTSO``                    n/a
@@ -270,7 +341,7 @@ have their netlink replacement yet.
``ETHTOOL_RESET`` n/a ``ETHTOOL_SRXNTUPLE`` n/a ``ETHTOOL_GRXNTUPLE`` n/a - ``ETHTOOL_GSSET_INFO`` n/a + ``ETHTOOL_GSSET_INFO`` ``ETHTOOL_MSG_STRSET_GET`` ``ETHTOOL_GRXFHINDIR`` n/a ``ETHTOOL_SRXFHINDIR`` n/a ``ETHTOOL_GFEATURES`` n/a diff --git a/include/uapi/linux/ethtool.h b/include/uapi/linux/ethtool.h index f44155840b07..116bcbf09c74 100644 --- a/include/uapi/linux/ethtool.h +++ b/include/uapi/linux/ethtool.h @@ -606,6 +606,9 @@ enum ethtool_stringset { ETH_SS_PHY_STATS, ETH_SS_PHY_TUNABLES, ETH_SS_LINK_MODES, + + /* add new constants above here */ + ETH_SS_COUNT }; /** diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index d530ccb6f7e6..cabef1fec42a 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -14,6 +14,7 @@ /* message types - userspace to kernel */ enum { ETHTOOL_MSG_USER_NONE, + ETHTOOL_MSG_STRSET_GET, /* add new constants above here */ __ETHTOOL_MSG_USER_CNT, @@ -23,6 +24,7 @@ enum { /* message types - kernel to userspace */ enum { ETHTOOL_MSG_KERNEL_NONE, + ETHTOOL_MSG_STRSET_GET_REPLY, /* add new constants above here */ __ETHTOOL_MSG_KERNEL_CNT, @@ -85,6 +87,60 @@ enum { ETHTOOL_A_BITSET_MAX = __ETHTOOL_A_BITSET_CNT - 1 }; +/* string sets */ + +enum { + ETHTOOL_A_STRING_UNSPEC, + ETHTOOL_A_STRING_INDEX, /* u32 */ + ETHTOOL_A_STRING_VALUE, /* string */ + + /* add new constants above here */ + __ETHTOOL_A_STRING_CNT, + ETHTOOL_A_STRING_MAX = __ETHTOOL_A_STRING_CNT - 1 +}; + +enum { + ETHTOOL_A_STRINGS_UNSPEC, + ETHTOOL_A_STRINGS_STRING, /* nest - _A_STRINGS_* */ + + /* add new constants above here */ + __ETHTOOL_A_STRINGS_CNT, + ETHTOOL_A_STRINGS_MAX = __ETHTOOL_A_STRINGS_CNT - 1 +}; + +enum { + ETHTOOL_A_STRINGSET_UNSPEC, + ETHTOOL_A_STRINGSET_ID, /* u32 */ + ETHTOOL_A_STRINGSET_COUNT, /* u32 */ + ETHTOOL_A_STRINGSET_STRINGS, /* nest - _A_STRINGS_* */ + + /* add new constants above here */ + __ETHTOOL_A_STRINGSET_CNT, + ETHTOOL_A_STRINGSET_MAX = __ETHTOOL_A_STRINGSET_CNT - 1 +}; + +enum { + ETHTOOL_A_STRINGSETS_UNSPEC, + ETHTOOL_A_STRINGSETS_STRINGSET, /* nest - _A_STRINGSET_* */ + + /* add new constants above here */ + __ETHTOOL_A_STRINGSETS_CNT, + ETHTOOL_A_STRINGSETS_MAX = __ETHTOOL_A_STRINGSETS_CNT - 1 +}; + +/* STRSET */ + +enum { + ETHTOOL_A_STRSET_UNSPEC, + ETHTOOL_A_STRSET_HEADER, /* nest - _A_HEADER_* */ + ETHTOOL_A_STRSET_STRINGSETS, /* nest - _A_STRINGSETS_* */ + ETHTOOL_A_STRSET_COUNTS_ONLY, /* flag */ + + /* add new constants above here */ + __ETHTOOL_A_STRSET_CNT, + ETHTOOL_A_STRSET_MAX = __ETHTOOL_A_STRSET_CNT - 1 +}; + /* generic netlink info */ #define ETHTOOL_GENL_NAME "ethtool" #define ETHTOOL_GENL_VERSION 1 diff --git a/net/ethtool/Makefile b/net/ethtool/Makefile index a7e6c2c85db9..efcc42c34d62 100644 --- a/net/ethtool/Makefile +++ b/net/ethtool/Makefile @@ -4,4 +4,4 @@ obj-y += ioctl.o common.o obj-$(CONFIG_ETHTOOL_NETLINK) += ethtool_nl.o -ethtool_nl-y := netlink.o bitset.o +ethtool_nl-y := netlink.o bitset.o strset.o diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index ae63e8423072..7dc082bde670 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -194,6 +194,7 @@ struct ethnl_dump_ctx { static const struct ethnl_request_ops * ethnl_default_requests[__ETHTOOL_MSG_USER_CNT] = { + [ETHTOOL_MSG_STRSET_GET] = ðnl_strset_request_ops, }; static struct ethnl_dump_ctx *ethnl_dump_context(struct netlink_callback *cb) @@ -518,6 +519,13 @@ EXPORT_SYMBOL(ethtool_notify); /* genetlink setup */ static const struct genl_ops ethtool_genl_ops[] = { + { + .cmd 
= ETHTOOL_MSG_STRSET_GET, + .doit = ethnl_default_doit, + .start = ethnl_default_start, + .dumpit = ethnl_default_dumpit, + .done = ethnl_default_done, + }, }; static const struct genl_multicast_group ethtool_nl_mcgrps[] = { diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h index 6ec0dd06277f..44e9f63aefb7 100644 --- a/net/ethtool/netlink.h +++ b/net/ethtool/netlink.h @@ -326,4 +326,8 @@ struct ethnl_request_ops { void (*cleanup_data)(struct ethnl_reply_data *reply_data); }; +/* request handlers */ + +extern const struct ethnl_request_ops ethnl_strset_request_ops; + #endif /* _NET_ETHTOOL_NETLINK_H */ diff --git a/net/ethtool/strset.c b/net/ethtool/strset.c new file mode 100644 index 000000000000..9f2243329015 --- /dev/null +++ b/net/ethtool/strset.c @@ -0,0 +1,425 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include +#include +#include "netlink.h" +#include "common.h" + +struct strset_info { + bool per_dev; + bool free_strings; + unsigned int count; + const char (*strings)[ETH_GSTRING_LEN]; +}; + +static const struct strset_info info_template[] = { + [ETH_SS_TEST] = { + .per_dev = true, + }, + [ETH_SS_STATS] = { + .per_dev = true, + }, + [ETH_SS_PRIV_FLAGS] = { + .per_dev = true, + }, + [ETH_SS_FEATURES] = { + .per_dev = false, + .count = ARRAY_SIZE(netdev_features_strings), + .strings = netdev_features_strings, + }, + [ETH_SS_RSS_HASH_FUNCS] = { + .per_dev = false, + .count = ARRAY_SIZE(rss_hash_func_strings), + .strings = rss_hash_func_strings, + }, + [ETH_SS_TUNABLES] = { + .per_dev = false, + .count = ARRAY_SIZE(tunable_strings), + .strings = tunable_strings, + }, + [ETH_SS_PHY_STATS] = { + .per_dev = true, + }, + [ETH_SS_PHY_TUNABLES] = { + .per_dev = false, + .count = ARRAY_SIZE(phy_tunable_strings), + .strings = phy_tunable_strings, + }, + [ETH_SS_LINK_MODES] = { + .per_dev = false, + .count = __ETHTOOL_LINK_MODE_MASK_NBITS, + .strings = link_mode_names, + }, +}; + +struct strset_req_info { + struct ethnl_req_info base; + u32 req_ids; + bool counts_only; +}; + +#define STRSET_REQINFO(__req_base) \ + container_of(__req_base, struct strset_req_info, base) + +struct strset_reply_data { + struct ethnl_reply_data base; + struct strset_info sets[ETH_SS_COUNT]; +}; + +#define STRSET_REPDATA(__reply_base) \ + container_of(__reply_base, struct strset_reply_data, base) + +static const struct nla_policy strset_get_policy[ETHTOOL_A_STRSET_MAX + 1] = { + [ETHTOOL_A_STRSET_UNSPEC] = { .type = NLA_REJECT }, + [ETHTOOL_A_STRSET_HEADER] = { .type = NLA_NESTED }, + [ETHTOOL_A_STRSET_STRINGSETS] = { .type = NLA_NESTED }, +}; + +static const struct nla_policy +get_stringset_policy[ETHTOOL_A_STRINGSET_MAX + 1] = { + [ETHTOOL_A_STRINGSET_UNSPEC] = { .type = NLA_REJECT }, + [ETHTOOL_A_STRINGSET_ID] = { .type = NLA_U32 }, + [ETHTOOL_A_STRINGSET_COUNT] = { .type = NLA_REJECT }, + [ETHTOOL_A_STRINGSET_STRINGS] = { .type = NLA_REJECT }, +}; + +/** + * strset_include() - test if a string set should be included in reply + * @data: pointer to request data structure + * @id: id of string set to check (ETH_SS_* constants) + */ +static bool strset_include(const struct strset_req_info *info, + const struct strset_reply_data *data, u32 id) +{ + bool per_dev; + + BUILD_BUG_ON(ETH_SS_COUNT >= BITS_PER_BYTE * sizeof(info->req_ids)); + + if (info->req_ids) + return info->req_ids & (1U << id); + per_dev = data->sets[id].per_dev; + if (!per_dev && !data->sets[id].strings) + return false; + + return data->base.dev ? 
per_dev : !per_dev;
+}
+
+static int strset_get_id(const struct nlattr *nest, u32 *val,
+			 struct netlink_ext_ack *extack)
+{
+	struct nlattr *tb[ETHTOOL_A_STRINGSET_MAX + 1];
+	int ret;
+
+	ret = nla_parse_nested(tb, ETHTOOL_A_STRINGSET_MAX, nest,
+			       get_stringset_policy, extack);
+	if (ret < 0)
+		return ret;
+	if (!tb[ETHTOOL_A_STRINGSET_ID])
+		return -EINVAL;
+
+	*val = nla_get_u32(tb[ETHTOOL_A_STRINGSET_ID]);
+	return 0;
+}
+
+static const struct nla_policy
+strset_stringsets_policy[ETHTOOL_A_STRINGSETS_MAX + 1] = {
+	[ETHTOOL_A_STRINGSETS_UNSPEC]		= { .type = NLA_REJECT },
+	[ETHTOOL_A_STRINGSETS_STRINGSET]	= { .type = NLA_NESTED },
+};
+
+static int strset_parse_request(struct ethnl_req_info *req_base,
+				struct nlattr **tb,
+				struct netlink_ext_ack *extack)
+{
+	struct strset_req_info *req_info = STRSET_REQINFO(req_base);
+	struct nlattr *nest = tb[ETHTOOL_A_STRSET_STRINGSETS];
+	struct nlattr *attr;
+	int rem, ret;
+
+	if (!nest)
+		return 0;
+	ret = nla_validate_nested(nest, ETHTOOL_A_STRINGSETS_MAX,
+				  strset_stringsets_policy, extack);
+	if (ret < 0)
+		return ret;
+
+	req_info->counts_only = tb[ETHTOOL_A_STRSET_COUNTS_ONLY];
+	nla_for_each_nested(attr, nest, rem) {
+		u32 id;
+
+		if (WARN_ONCE(nla_type(attr) != ETHTOOL_A_STRINGSETS_STRINGSET,
+			      "unexpected attrtype %u in ETHTOOL_A_STRSET_STRINGSETS\n",
+			      nla_type(attr)))
+			return -EINVAL;
+
+		ret = strset_get_id(attr, &id, extack);
+		if (ret < 0)
+			return ret;
+		if (id >= ETH_SS_COUNT) {
+			NL_SET_ERR_MSG_ATTR(extack, attr,
+					    "unknown string set id");
+			return -EOPNOTSUPP;
+		}
+
+		req_info->req_ids |= (1U << id);
+	}
+
+	return 0;
+}
+
+static void strset_cleanup_data(struct ethnl_reply_data *reply_base)
+{
+	struct strset_reply_data *data = STRSET_REPDATA(reply_base);
+	unsigned int i;
+
+	for (i = 0; i < ETH_SS_COUNT; i++)
+		if (data->sets[i].free_strings) {
+			kfree(data->sets[i].strings);
+			data->sets[i].strings = NULL;
+			data->sets[i].free_strings = false;
+		}
+}
+
+static int strset_prepare_set(struct strset_info *info, struct net_device *dev,
+			      unsigned int id, bool counts_only)
+{
+	const struct ethtool_ops *ops = dev->ethtool_ops;
+	void *strings;
+	int count, ret;
+
+	if (id == ETH_SS_PHY_STATS && dev->phydev &&
+	    !ops->get_ethtool_phy_stats)
+		ret = phy_ethtool_get_sset_count(dev->phydev);
+	else if (ops->get_sset_count && ops->get_strings)
+		ret = ops->get_sset_count(dev, id);
+	else
+		ret = -EOPNOTSUPP;
+	if (ret <= 0) {
+		info->count = 0;
+		return 0;
+	}
+
+	count = ret;
+	if (!counts_only) {
+		strings = kcalloc(count, ETH_GSTRING_LEN, GFP_KERNEL);
+		if (!strings)
+			return -ENOMEM;
+		if (id == ETH_SS_PHY_STATS && dev->phydev &&
+		    !ops->get_ethtool_phy_stats)
+			phy_ethtool_get_strings(dev->phydev, strings);
+		else
+			ops->get_strings(dev, id, strings);
+		info->strings = strings;
+		info->free_strings = true;
+	}
+	info->count = count;
+
+	return 0;
+}
+
+static int strset_prepare_data(const struct ethnl_req_info *req_base,
+			       struct ethnl_reply_data *reply_base,
+			       struct genl_info *info)
+{
+	const struct strset_req_info *req_info = STRSET_REQINFO(req_base);
+	struct strset_reply_data *data = STRSET_REPDATA(reply_base);
+	struct net_device *dev = reply_base->dev;
+	unsigned int i;
+	int ret;
+
+	BUILD_BUG_ON(ARRAY_SIZE(info_template) != ETH_SS_COUNT);
+	memcpy(&data->sets, &info_template, sizeof(data->sets));
+
+	if (!dev) {
+		for (i = 0; i < ETH_SS_COUNT; i++) {
+			if ((req_info->req_ids & (1U << i)) &&
+			    data->sets[i].per_dev) {
+				if (info)
+					GENL_SET_ERR_MSG(info, "requested per device strings without dev");
+				return
-EINVAL; + } + } + } + + ret = ethnl_ops_begin(dev); + if (ret < 0) + goto err_strset; + for (i = 0; i < ETH_SS_COUNT; i++) { + if (!strset_include(req_info, data, i) || + !data->sets[i].per_dev) + continue; + + ret = strset_prepare_set(&data->sets[i], dev, i, + req_info->counts_only); + if (ret < 0) + goto err_ops; + } + ethnl_ops_complete(dev); + + return 0; +err_ops: + ethnl_ops_complete(dev); +err_strset: + strset_cleanup_data(reply_base); + return ret; +} + +/* calculate size of ETHTOOL_A_STRSET_STRINGSET nest for one string set */ +static int strset_set_size(const struct strset_info *info, bool counts_only) +{ + unsigned int len = 0; + unsigned int i; + + if (info->count == 0) + return 0; + if (counts_only) + return nla_total_size(2 * nla_total_size(sizeof(u32))); + + for (i = 0; i < info->count; i++) { + const char *str = info->strings[i]; + + /* ETHTOOL_A_STRING_INDEX, ETHTOOL_A_STRING_VALUE, nest */ + len += nla_total_size(nla_total_size(sizeof(u32)) + + ethnl_strz_size(str)); + } + /* ETHTOOL_A_STRINGSET_ID, ETHTOOL_A_STRINGSET_COUNT */ + len = 2 * nla_total_size(sizeof(u32)) + nla_total_size(len); + + return nla_total_size(len); +} + +static int strset_reply_size(const struct ethnl_req_info *req_base, + const struct ethnl_reply_data *reply_base) +{ + const struct strset_req_info *req_info = STRSET_REQINFO(req_base); + const struct strset_reply_data *data = STRSET_REPDATA(reply_base); + unsigned int i; + int len = 0; + int ret; + + len += ethnl_reply_header_size(); + for (i = 0; i < ETH_SS_COUNT; i++) { + const struct strset_info *set_info = &data->sets[i]; + + if (!strset_include(req_info, data, i)) + continue; + + ret = strset_set_size(set_info, req_info->counts_only); + if (ret < 0) + return ret; + len += ret; + } + + return len; +} + +/* fill one string into reply */ +static int strset_fill_string(struct sk_buff *skb, + const struct strset_info *set_info, u32 idx) +{ + struct nlattr *string_attr; + const char *value; + + value = set_info->strings[idx]; + + string_attr = nla_nest_start(skb, ETHTOOL_A_STRINGS_STRING); + if (!string_attr) + return -EMSGSIZE; + if (nla_put_u32(skb, ETHTOOL_A_STRING_INDEX, idx) || + ethnl_put_strz(skb, ETHTOOL_A_STRING_VALUE, value)) + goto nla_put_failure; + nla_nest_end(skb, string_attr); + + return 0; +nla_put_failure: + nla_nest_cancel(skb, string_attr); + return -EMSGSIZE; +} + +/* fill one string set into reply */ +static int strset_fill_set(struct sk_buff *skb, + const struct strset_info *set_info, u32 id, + bool counts_only) +{ + struct nlattr *stringset_attr; + struct nlattr *strings_attr; + unsigned int i; + + if (!set_info->per_dev && !set_info->strings) + return -EOPNOTSUPP; + if (set_info->count == 0) + return 0; + stringset_attr = nla_nest_start(skb, ETHTOOL_A_STRINGSETS_STRINGSET); + if (!stringset_attr) + return -EMSGSIZE; + + if (nla_put_u32(skb, ETHTOOL_A_STRINGSET_ID, id) || + nla_put_u32(skb, ETHTOOL_A_STRINGSET_COUNT, set_info->count)) + goto nla_put_failure; + + if (!counts_only) { + strings_attr = nla_nest_start(skb, ETHTOOL_A_STRINGSET_STRINGS); + if (!strings_attr) + goto nla_put_failure; + for (i = 0; i < set_info->count; i++) { + if (strset_fill_string(skb, set_info, i) < 0) + goto nla_put_failure; + } + nla_nest_end(skb, strings_attr); + } + + nla_nest_end(skb, stringset_attr); + return 0; + +nla_put_failure: + nla_nest_cancel(skb, stringset_attr); + return -EMSGSIZE; +} + +static int strset_fill_reply(struct sk_buff *skb, + const struct ethnl_req_info *req_base, + const struct ethnl_reply_data *reply_base) +{ + const 
struct strset_req_info *req_info = STRSET_REQINFO(req_base); + const struct strset_reply_data *data = STRSET_REPDATA(reply_base); + struct nlattr *nest; + unsigned int i; + int ret; + + nest = nla_nest_start(skb, ETHTOOL_A_STRSET_STRINGSETS); + if (!nest) + return -EMSGSIZE; + + for (i = 0; i < ETH_SS_COUNT; i++) { + if (strset_include(req_info, data, i)) { + ret = strset_fill_set(skb, &data->sets[i], i, + req_info->counts_only); + if (ret < 0) + goto nla_put_failure; + } + } + + nla_nest_end(skb, nest); + return 0; + +nla_put_failure: + nla_nest_cancel(skb, nest); + return ret; +} + +const struct ethnl_request_ops ethnl_strset_request_ops = { + .request_cmd = ETHTOOL_MSG_STRSET_GET, + .reply_cmd = ETHTOOL_MSG_STRSET_GET_REPLY, + .hdr_attr = ETHTOOL_A_STRSET_HEADER, + .max_attr = ETHTOOL_A_STRSET_MAX, + .req_info_size = sizeof(struct strset_req_info), + .reply_data_size = sizeof(struct strset_reply_data), + .request_policy = strset_get_policy, + .allow_nodev_do = true, + + .parse_request = strset_parse_request, + .prepare_data = strset_prepare_data, + .reply_size = strset_reply_size, + .fill_reply = strset_fill_reply, + .cleanup_data = strset_cleanup_data, +}; -- cgit v1.2.3-71-gd317 From 459e0b81b37043545d90629fdfb243444151e77d Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Fri, 27 Dec 2019 15:55:48 +0100 Subject: ethtool: provide link settings with LINKINFO_GET request Implement LINKINFO_GET netlink request to get basic link settings provided by ETHTOOL_GLINKSETTINGS and ETHTOOL_GSET ioctl commands. This request provides settings not directly related to autonegotiation and link mode selection: physical port, phy MDIO address, MDI(-X) status, MDI(-X) control and transceiver. LINKINFO_GET request can be used with NLM_F_DUMP (without device identification) to request the information for all devices in current network namespace providing the data. Signed-off-by: Michal Kubecek Reviewed-by: Florian Fainelli Signed-off-by: David S. Miller --- Documentation/networking/ethtool-netlink.rst | 37 ++++++++++- include/uapi/linux/ethtool_netlink.h | 18 ++++++ net/ethtool/Makefile | 2 +- net/ethtool/common.c | 48 ++++++++++++++ net/ethtool/common.h | 4 ++ net/ethtool/ioctl.c | 48 -------------- net/ethtool/linkinfo.c | 94 ++++++++++++++++++++++++++++ net/ethtool/netlink.c | 8 +++ net/ethtool/netlink.h | 1 + 9 files changed, 209 insertions(+), 51 deletions(-) create mode 100644 net/ethtool/linkinfo.c (limited to 'include') diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst index 3912cb0eb9c6..60684786a92c 100644 --- a/Documentation/networking/ethtool-netlink.rst +++ b/Documentation/networking/ethtool-netlink.rst @@ -180,12 +180,14 @@ Userspace to kernel: ===================================== ================================ ``ETHTOOL_MSG_STRSET_GET`` get string set + ``ETHTOOL_MSG_LINKINFO_GET`` get link settings ===================================== ================================ Kernel to userspace: ===================================== ================================ ``ETHTOOL_MSG_STRSET_GET_REPLY`` string set contents + ``ETHTOOL_MSG_LINKINFO_GET_REPLY`` link settings ===================================== ================================ ``GET`` requests are sent by userspace applications to retrieve device @@ -278,6 +280,37 @@ Flag ``ETHTOOL_A_STRSET_COUNTS_ONLY`` tells kernel to only return string counts of the sets, not the actual strings. 
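[Editor's note: the ethnl_strset_request_ops table above is a good place to
see how the generic request framework drives a handler. A hypothetical,
condensed model of the callback order, based on the callbacks declared in
net/ethtool/netlink.h; error handling and netlink plumbing are omitted and
all names here are invented for illustration:

    struct req_ops {
            int  (*parse_request)(void *req_info);        /* decode attributes */
            int  (*prepare_data)(void *req, void *reply); /* query the device  */
            int  (*reply_size)(const void *req, const void *reply);
            int  (*fill_reply)(void *skb, const void *req, const void *reply);
            void (*cleanup_data)(void *reply);            /* free prepared data */
    };

    static int handle_get(const struct req_ops *ops, void *req, void *reply,
                          void *skb)
    {
            int ret = ops->parse_request ? ops->parse_request(req) : 0;

            if (!ret)
                    ret = ops->prepare_data(req, reply);
            /* a real handler would allocate the reply skb based on
             * reply_size() before filling it
             */
            if (!ret && ops->reply_size(req, reply) >= 0)
                    ret = ops->fill_reply(skb, req, reply);
            if (ops->cleanup_data)
                    ops->cleanup_data(reply);
            return ret;
    }
]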
+LINKINFO_GET +============ + +Requests link settings as provided by ``ETHTOOL_GLINKSETTINGS`` except for +link modes and autonegotiation related information. The request does not use +any attributes. + +Request contents: + + ==================================== ====== ========================== + ``ETHTOOL_A_LINKINFO_HEADER`` nested request header + ==================================== ====== ========================== + +Kernel response contents: + + ==================================== ====== ========================== + ``ETHTOOL_A_LINKINFO_HEADER`` nested reply header + ``ETHTOOL_A_LINKINFO_PORT`` u8 physical port + ``ETHTOOL_A_LINKINFO_PHYADDR`` u8 phy MDIO address + ``ETHTOOL_A_LINKINFO_TP_MDIX`` u8 MDI(-X) status + ``ETHTOOL_A_LINKINFO_TP_MDIX_CTRL`` u8 MDI(-X) control + ``ETHTOOL_A_LINKINFO_TRANSCEIVER`` u8 transceiver + ==================================== ====== ========================== + +Attributes and their values have the same meaning as matching members of the +corresponding ioctl structures. + +``LINKINFO_GET`` allows dump requests (kernel returns reply message for all +devices supporting the request). + + Request translation =================== @@ -288,7 +321,7 @@ have their netlink replacement yet. =================================== ===================================== ioctl command netlink command =================================== ===================================== - ``ETHTOOL_GSET`` n/a + ``ETHTOOL_GSET`` ``ETHTOOL_MSG_LINKINFO_GET`` ``ETHTOOL_SSET`` n/a ``ETHTOOL_GDRVINFO`` n/a ``ETHTOOL_GREGS`` n/a @@ -362,7 +395,7 @@ have their netlink replacement yet. ``ETHTOOL_STUNABLE`` n/a ``ETHTOOL_GPHYSTATS`` n/a ``ETHTOOL_PERQUEUE`` n/a - ``ETHTOOL_GLINKSETTINGS`` n/a + ``ETHTOOL_GLINKSETTINGS`` ``ETHTOOL_MSG_LINKINFO_GET`` ``ETHTOOL_SLINKSETTINGS`` n/a ``ETHTOOL_PHY_GTUNABLE`` n/a ``ETHTOOL_PHY_STUNABLE`` n/a diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index cabef1fec42a..1966532993e5 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -15,6 +15,7 @@ enum { ETHTOOL_MSG_USER_NONE, ETHTOOL_MSG_STRSET_GET, + ETHTOOL_MSG_LINKINFO_GET, /* add new constants above here */ __ETHTOOL_MSG_USER_CNT, @@ -25,6 +26,7 @@ enum { enum { ETHTOOL_MSG_KERNEL_NONE, ETHTOOL_MSG_STRSET_GET_REPLY, + ETHTOOL_MSG_LINKINFO_GET_REPLY, /* add new constants above here */ __ETHTOOL_MSG_KERNEL_CNT, @@ -141,6 +143,22 @@ enum { ETHTOOL_A_STRSET_MAX = __ETHTOOL_A_STRSET_CNT - 1 }; +/* LINKINFO */ + +enum { + ETHTOOL_A_LINKINFO_UNSPEC, + ETHTOOL_A_LINKINFO_HEADER, /* nest - _A_HEADER_* */ + ETHTOOL_A_LINKINFO_PORT, /* u8 */ + ETHTOOL_A_LINKINFO_PHYADDR, /* u8 */ + ETHTOOL_A_LINKINFO_TP_MDIX, /* u8 */ + ETHTOOL_A_LINKINFO_TP_MDIX_CTRL, /* u8 */ + ETHTOOL_A_LINKINFO_TRANSCEIVER, /* u8 */ + + /* add new constants above here */ + __ETHTOOL_A_LINKINFO_CNT, + ETHTOOL_A_LINKINFO_MAX = __ETHTOOL_A_LINKINFO_CNT - 1 +}; + /* generic netlink info */ #define ETHTOOL_GENL_NAME "ethtool" #define ETHTOOL_GENL_VERSION 1 diff --git a/net/ethtool/Makefile b/net/ethtool/Makefile index efcc42c34d62..765736ec52c0 100644 --- a/net/ethtool/Makefile +++ b/net/ethtool/Makefile @@ -4,4 +4,4 @@ obj-y += ioctl.o common.o obj-$(CONFIG_ETHTOOL_NETLINK) += ethtool_nl.o -ethtool_nl-y := netlink.o bitset.o strset.o +ethtool_nl-y := netlink.o bitset.o strset.o linkinfo.o diff --git a/net/ethtool/common.c b/net/ethtool/common.c index 0a8728565356..1d4a0aeff2cb 100644 --- a/net/ethtool/common.c +++ b/net/ethtool/common.c @@ -169,3 +169,51 @@ const 
char link_mode_names[][ETH_GSTRING_LEN] = { __DEFINE_LINK_MODE_NAME(400000, CR8, Full), }; static_assert(ARRAY_SIZE(link_mode_names) == __ETHTOOL_LINK_MODE_MASK_NBITS); + +/* return false if legacy contained non-0 deprecated fields + * maxtxpkt/maxrxpkt. rest of ksettings always updated + */ +bool +convert_legacy_settings_to_link_ksettings( + struct ethtool_link_ksettings *link_ksettings, + const struct ethtool_cmd *legacy_settings) +{ + bool retval = true; + + memset(link_ksettings, 0, sizeof(*link_ksettings)); + + /* This is used to tell users that driver is still using these + * deprecated legacy fields, and they should not use + * %ETHTOOL_GLINKSETTINGS/%ETHTOOL_SLINKSETTINGS + */ + if (legacy_settings->maxtxpkt || + legacy_settings->maxrxpkt) + retval = false; + + ethtool_convert_legacy_u32_to_link_mode( + link_ksettings->link_modes.supported, + legacy_settings->supported); + ethtool_convert_legacy_u32_to_link_mode( + link_ksettings->link_modes.advertising, + legacy_settings->advertising); + ethtool_convert_legacy_u32_to_link_mode( + link_ksettings->link_modes.lp_advertising, + legacy_settings->lp_advertising); + link_ksettings->base.speed + = ethtool_cmd_speed(legacy_settings); + link_ksettings->base.duplex + = legacy_settings->duplex; + link_ksettings->base.port + = legacy_settings->port; + link_ksettings->base.phy_address + = legacy_settings->phy_address; + link_ksettings->base.autoneg + = legacy_settings->autoneg; + link_ksettings->base.mdio_support + = legacy_settings->mdio_support; + link_ksettings->base.eth_tp_mdix + = legacy_settings->eth_tp_mdix; + link_ksettings->base.eth_tp_mdix_ctrl + = legacy_settings->eth_tp_mdix_ctrl; + return retval; +} diff --git a/net/ethtool/common.h b/net/ethtool/common.h index bbb788908cb1..c8a237402729 100644 --- a/net/ethtool/common.h +++ b/net/ethtool/common.h @@ -19,4 +19,8 @@ extern const char phy_tunable_strings[__ETHTOOL_PHY_TUNABLE_COUNT][ETH_GSTRING_LEN]; extern const char link_mode_names[][ETH_GSTRING_LEN]; +bool convert_legacy_settings_to_link_ksettings( + struct ethtool_link_ksettings *link_ksettings, + const struct ethtool_cmd *legacy_settings); + #endif /* _ETHTOOL_COMMON_H */ diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c index 88f7cddf5a6f..8a0a13b478e0 100644 --- a/net/ethtool/ioctl.c +++ b/net/ethtool/ioctl.c @@ -358,54 +358,6 @@ bool ethtool_convert_link_mode_to_legacy_u32(u32 *legacy_u32, } EXPORT_SYMBOL(ethtool_convert_link_mode_to_legacy_u32); -/* return false if legacy contained non-0 deprecated fields - * maxtxpkt/maxrxpkt. 
rest of ksettings always updated - */ -static bool -convert_legacy_settings_to_link_ksettings( - struct ethtool_link_ksettings *link_ksettings, - const struct ethtool_cmd *legacy_settings) -{ - bool retval = true; - - memset(link_ksettings, 0, sizeof(*link_ksettings)); - - /* This is used to tell users that driver is still using these - * deprecated legacy fields, and they should not use - * %ETHTOOL_GLINKSETTINGS/%ETHTOOL_SLINKSETTINGS - */ - if (legacy_settings->maxtxpkt || - legacy_settings->maxrxpkt) - retval = false; - - ethtool_convert_legacy_u32_to_link_mode( - link_ksettings->link_modes.supported, - legacy_settings->supported); - ethtool_convert_legacy_u32_to_link_mode( - link_ksettings->link_modes.advertising, - legacy_settings->advertising); - ethtool_convert_legacy_u32_to_link_mode( - link_ksettings->link_modes.lp_advertising, - legacy_settings->lp_advertising); - link_ksettings->base.speed - = ethtool_cmd_speed(legacy_settings); - link_ksettings->base.duplex - = legacy_settings->duplex; - link_ksettings->base.port - = legacy_settings->port; - link_ksettings->base.phy_address - = legacy_settings->phy_address; - link_ksettings->base.autoneg - = legacy_settings->autoneg; - link_ksettings->base.mdio_support - = legacy_settings->mdio_support; - link_ksettings->base.eth_tp_mdix - = legacy_settings->eth_tp_mdix; - link_ksettings->base.eth_tp_mdix_ctrl - = legacy_settings->eth_tp_mdix_ctrl; - return retval; -} - /* return false if ksettings link modes had higher bits * set. legacy_settings always updated (best effort) */ diff --git a/net/ethtool/linkinfo.c b/net/ethtool/linkinfo.c new file mode 100644 index 000000000000..f52a31af6fff --- /dev/null +++ b/net/ethtool/linkinfo.c @@ -0,0 +1,94 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include "netlink.h" +#include "common.h" + +struct linkinfo_req_info { + struct ethnl_req_info base; +}; + +struct linkinfo_reply_data { + struct ethnl_reply_data base; + struct ethtool_link_ksettings ksettings; + struct ethtool_link_settings *lsettings; +}; + +#define LINKINFO_REPDATA(__reply_base) \ + container_of(__reply_base, struct linkinfo_reply_data, base) + +static const struct nla_policy +linkinfo_get_policy[ETHTOOL_A_LINKINFO_MAX + 1] = { + [ETHTOOL_A_LINKINFO_UNSPEC] = { .type = NLA_REJECT }, + [ETHTOOL_A_LINKINFO_HEADER] = { .type = NLA_NESTED }, + [ETHTOOL_A_LINKINFO_PORT] = { .type = NLA_REJECT }, + [ETHTOOL_A_LINKINFO_PHYADDR] = { .type = NLA_REJECT }, + [ETHTOOL_A_LINKINFO_TP_MDIX] = { .type = NLA_REJECT }, + [ETHTOOL_A_LINKINFO_TP_MDIX_CTRL] = { .type = NLA_REJECT }, + [ETHTOOL_A_LINKINFO_TRANSCEIVER] = { .type = NLA_REJECT }, +}; + +static int linkinfo_prepare_data(const struct ethnl_req_info *req_base, + struct ethnl_reply_data *reply_base, + struct genl_info *info) +{ + struct linkinfo_reply_data *data = LINKINFO_REPDATA(reply_base); + struct net_device *dev = reply_base->dev; + int ret; + + data->lsettings = &data->ksettings.base; + + ret = ethnl_ops_begin(dev); + if (ret < 0) + return ret; + ret = __ethtool_get_link_ksettings(dev, &data->ksettings); + if (ret < 0 && info) + GENL_SET_ERR_MSG(info, "failed to retrieve link settings"); + ethnl_ops_complete(dev); + + return ret; +} + +static int linkinfo_reply_size(const struct ethnl_req_info *req_base, + const struct ethnl_reply_data *reply_base) +{ + return nla_total_size(sizeof(u8)) /* LINKINFO_PORT */ + + nla_total_size(sizeof(u8)) /* LINKINFO_PHYADDR */ + + nla_total_size(sizeof(u8)) /* LINKINFO_TP_MDIX */ + + nla_total_size(sizeof(u8)) /* LINKINFO_TP_MDIX_CTRL */ + + 
nla_total_size(sizeof(u8)) /* LINKINFO_TRANSCEIVER */ + + 0; +} + +static int linkinfo_fill_reply(struct sk_buff *skb, + const struct ethnl_req_info *req_base, + const struct ethnl_reply_data *reply_base) +{ + const struct linkinfo_reply_data *data = LINKINFO_REPDATA(reply_base); + + if (nla_put_u8(skb, ETHTOOL_A_LINKINFO_PORT, data->lsettings->port) || + nla_put_u8(skb, ETHTOOL_A_LINKINFO_PHYADDR, + data->lsettings->phy_address) || + nla_put_u8(skb, ETHTOOL_A_LINKINFO_TP_MDIX, + data->lsettings->eth_tp_mdix) || + nla_put_u8(skb, ETHTOOL_A_LINKINFO_TP_MDIX_CTRL, + data->lsettings->eth_tp_mdix_ctrl) || + nla_put_u8(skb, ETHTOOL_A_LINKINFO_TRANSCEIVER, + data->lsettings->transceiver)) + return -EMSGSIZE; + + return 0; +} + +const struct ethnl_request_ops ethnl_linkinfo_request_ops = { + .request_cmd = ETHTOOL_MSG_LINKINFO_GET, + .reply_cmd = ETHTOOL_MSG_LINKINFO_GET_REPLY, + .hdr_attr = ETHTOOL_A_LINKINFO_HEADER, + .max_attr = ETHTOOL_A_LINKINFO_MAX, + .req_info_size = sizeof(struct linkinfo_req_info), + .reply_data_size = sizeof(struct linkinfo_reply_data), + .request_policy = linkinfo_get_policy, + + .prepare_data = linkinfo_prepare_data, + .reply_size = linkinfo_reply_size, + .fill_reply = linkinfo_fill_reply, +}; diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index 7dc082bde670..ce86d97c922e 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -195,6 +195,7 @@ struct ethnl_dump_ctx { static const struct ethnl_request_ops * ethnl_default_requests[__ETHTOOL_MSG_USER_CNT] = { [ETHTOOL_MSG_STRSET_GET] = &ethnl_strset_request_ops, + [ETHTOOL_MSG_LINKINFO_GET] = &ethnl_linkinfo_request_ops, }; static struct ethnl_dump_ctx *ethnl_dump_context(struct netlink_callback *cb) @@ -526,6 +527,13 @@ static const struct genl_ops ethtool_genl_ops[] = { .dumpit = ethnl_default_dumpit, .done = ethnl_default_done, }, + { + .cmd = ETHTOOL_MSG_LINKINFO_GET, + .doit = ethnl_default_doit, + .start = ethnl_default_start, + .dumpit = ethnl_default_dumpit, + .done = ethnl_default_done, + }, }; static const struct genl_multicast_group ethtool_nl_mcgrps[] = { diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h index 44e9f63aefb7..9fc8f94d8dce 100644 --- a/net/ethtool/netlink.h +++ b/net/ethtool/netlink.h @@ -329,5 +329,6 @@ struct ethnl_request_ops { /* request handlers */ extern const struct ethnl_request_ops ethnl_strset_request_ops; +extern const struct ethnl_request_ops ethnl_linkinfo_request_ops; #endif /* _NET_ETHTOOL_NETLINK_H */ -- cgit v1.2.3-71-gd317 From a53f3d41e4d3df46aba254ba51e7fbe45470fece Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Fri, 27 Dec 2019 15:55:53 +0100 Subject: ethtool: set link settings with LINKINFO_SET request Implement LINKINFO_SET netlink request to set link settings queried by the LINKINFO_GET message. Only the physical port, phy MDIO address and MDI(-X) control can be set; an attempt to modify MDI(-X) status or transceiver is rejected. Signed-off-by: Michal Kubecek Reviewed-by: Florian Fainelli Signed-off-by: David S.
Miller --- Documentation/networking/ethtool-netlink.rst | 24 +++++++++- include/uapi/linux/ethtool_netlink.h | 1 + net/ethtool/linkinfo.c | 71 ++++++++++++++++++++++++++++ net/ethtool/netlink.c | 5 ++ net/ethtool/netlink.h | 2 + 5 files changed, 101 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst index 60684786a92c..39c4aba324c3 100644 --- a/Documentation/networking/ethtool-netlink.rst +++ b/Documentation/networking/ethtool-netlink.rst @@ -181,6 +181,7 @@ Userspace to kernel: ===================================== ================================ ``ETHTOOL_MSG_STRSET_GET`` get string set ``ETHTOOL_MSG_LINKINFO_GET`` get link settings + ``ETHTOOL_MSG_LINKINFO_SET`` set link settings ===================================== ================================ Kernel to userspace: @@ -311,6 +312,25 @@ corresponding ioctl structures. devices supporting the request). +LINKINFO_SET +============ + +``LINKINFO_SET`` request allows setting some of the attributes reported by +``LINKINFO_GET``. + +Request contents: + + ==================================== ====== ========================== + ``ETHTOOL_A_LINKINFO_HEADER`` nested request header + ``ETHTOOL_A_LINKINFO_PORT`` u8 physical port + ``ETHTOOL_A_LINKINFO_PHYADDR`` u8 phy MDIO address + ``ETHTOOL_A_LINKINFO_TP_MDIX_CTRL`` u8 MDI(-X) control + ==================================== ====== ========================== + +MDI(-X) status and transceiver cannot be set, request with the corresponding +attributes is rejected. + + Request translation =================== @@ -322,7 +342,7 @@ have their netlink replacement yet. ioctl command netlink command =================================== ===================================== ``ETHTOOL_GSET`` ``ETHTOOL_MSG_LINKINFO_GET`` - ``ETHTOOL_SSET`` n/a + ``ETHTOOL_SSET`` ``ETHTOOL_MSG_LINKINFO_SET`` ``ETHTOOL_GDRVINFO`` n/a ``ETHTOOL_GREGS`` n/a ``ETHTOOL_GWOL`` n/a @@ -396,7 +416,7 @@ have their netlink replacement yet. 
``ETHTOOL_GPHYSTATS`` n/a ``ETHTOOL_PERQUEUE`` n/a ``ETHTOOL_GLINKSETTINGS`` ``ETHTOOL_MSG_LINKINFO_GET`` - ``ETHTOOL_SLINKSETTINGS`` n/a + ``ETHTOOL_SLINKSETTINGS`` ``ETHTOOL_MSG_LINKINFO_SET`` ``ETHTOOL_PHY_GTUNABLE`` n/a ``ETHTOOL_PHY_STUNABLE`` n/a ``ETHTOOL_GFECPARAM`` n/a diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index 1966532993e5..5b7806a5bef8 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -16,6 +16,7 @@ enum { ETHTOOL_MSG_USER_NONE, ETHTOOL_MSG_STRSET_GET, ETHTOOL_MSG_LINKINFO_GET, + ETHTOOL_MSG_LINKINFO_SET, /* add new constants above here */ __ETHTOOL_MSG_USER_CNT, diff --git a/net/ethtool/linkinfo.c b/net/ethtool/linkinfo.c index f52a31af6fff..8a5f68f92425 100644 --- a/net/ethtool/linkinfo.c +++ b/net/ethtool/linkinfo.c @@ -92,3 +92,74 @@ const struct ethnl_request_ops ethnl_linkinfo_request_ops = { .reply_size = linkinfo_reply_size, .fill_reply = linkinfo_fill_reply, }; + +/* LINKINFO_SET */ + +static const struct nla_policy +linkinfo_set_policy[ETHTOOL_A_LINKINFO_MAX + 1] = { + [ETHTOOL_A_LINKINFO_UNSPEC] = { .type = NLA_REJECT }, + [ETHTOOL_A_LINKINFO_HEADER] = { .type = NLA_NESTED }, + [ETHTOOL_A_LINKINFO_PORT] = { .type = NLA_U8 }, + [ETHTOOL_A_LINKINFO_PHYADDR] = { .type = NLA_U8 }, + [ETHTOOL_A_LINKINFO_TP_MDIX] = { .type = NLA_REJECT }, + [ETHTOOL_A_LINKINFO_TP_MDIX_CTRL] = { .type = NLA_U8 }, + [ETHTOOL_A_LINKINFO_TRANSCEIVER] = { .type = NLA_REJECT }, +}; + +int ethnl_set_linkinfo(struct sk_buff *skb, struct genl_info *info) +{ + struct nlattr *tb[ETHTOOL_A_LINKINFO_MAX + 1]; + struct ethtool_link_ksettings ksettings = {}; + struct ethtool_link_settings *lsettings; + struct ethnl_req_info req_info = {}; + struct net_device *dev; + bool mod = false; + int ret; + + ret = nlmsg_parse(info->nlhdr, GENL_HDRLEN, tb, + ETHTOOL_A_LINKINFO_MAX, linkinfo_set_policy, + info->extack); + if (ret < 0) + return ret; + ret = ethnl_parse_header(&req_info, tb[ETHTOOL_A_LINKINFO_HEADER], + genl_info_net(info), info->extack, true); + if (ret < 0) + return ret; + dev = req_info.dev; + if (!dev->ethtool_ops->get_link_ksettings || + !dev->ethtool_ops->set_link_ksettings) + return -EOPNOTSUPP; + + rtnl_lock(); + ret = ethnl_ops_begin(dev); + if (ret < 0) + goto out_rtnl; + + ret = __ethtool_get_link_ksettings(dev, &ksettings); + if (ret < 0) { + if (info) + GENL_SET_ERR_MSG(info, "failed to retrieve link settings"); + goto out_ops; + } + lsettings = &ksettings.base; + + ethnl_update_u8(&lsettings->port, tb[ETHTOOL_A_LINKINFO_PORT], &mod); + ethnl_update_u8(&lsettings->phy_address, tb[ETHTOOL_A_LINKINFO_PHYADDR], + &mod); + ethnl_update_u8(&lsettings->eth_tp_mdix_ctrl, + tb[ETHTOOL_A_LINKINFO_TP_MDIX_CTRL], &mod); + ret = 0; + if (!mod) + goto out_ops; + + ret = dev->ethtool_ops->set_link_ksettings(dev, &ksettings); + if (ret < 0) + GENL_SET_ERR_MSG(info, "link settings update failed"); + +out_ops: + ethnl_ops_complete(dev); +out_rtnl: + rtnl_unlock(); + dev_put(dev); + return ret; +} diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index ce86d97c922e..7867425956f6 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -534,6 +534,11 @@ static const struct genl_ops ethtool_genl_ops[] = { .dumpit = ethnl_default_dumpit, .done = ethnl_default_done, }, + { + .cmd = ETHTOOL_MSG_LINKINFO_SET, + .flags = GENL_UNS_ADMIN_PERM, + .doit = ethnl_set_linkinfo, + }, }; static const struct genl_multicast_group ethtool_nl_mcgrps[] = { diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h 
index 9fc8f94d8dce..bbe5fe60a023 100644 --- a/net/ethtool/netlink.h +++ b/net/ethtool/netlink.h @@ -331,4 +331,6 @@ struct ethnl_request_ops { extern const struct ethnl_request_ops ethnl_strset_request_ops; extern const struct ethnl_request_ops ethnl_linkinfo_request_ops; +int ethnl_set_linkinfo(struct sk_buff *skb, struct genl_info *info); + #endif /* _NET_ETHTOOL_NETLINK_H */ -- cgit v1.2.3-71-gd317 From 73286734c1b0d009fd317c2a74e173cb22233dad Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Fri, 27 Dec 2019 15:56:03 +0100 Subject: ethtool: add LINKINFO_NTF notification Send ETHTOOL_MSG_LINKINFO_NTF notification message whenever device link settings are modified using ETHTOOL_MSG_LINKINFO_SET netlink message or ETHTOOL_SLINKSETTINGS or ETHTOOL_SSET ioctl commands. The notification message has the same format as reply to LINKINFO_GET request. ETHTOOL_MSG_LINKINFO_SET netlink request only triggers the notification if there is a change but the ioctl command handlers do not check if there is an actual change and trigger the notification whenever the commands are executed. As all work is done by ethnl_default_notify() handler and callback functions introduced to handle LINKINFO_GET requests, all that remains is adding entries for ETHTOOL_MSG_LINKINFO_NTF into ethnl_notify_handlers and ethnl_default_notify_ops lookup tables and calls to ethtool_notify() where needed. Signed-off-by: Michal Kubecek Reviewed-by: Florian Fainelli Signed-off-by: David S. Miller --- Documentation/networking/ethtool-netlink.rst | 1 + include/uapi/linux/ethtool_netlink.h | 1 + net/ethtool/ioctl.c | 12 ++++++++++-- net/ethtool/linkinfo.c | 2 ++ net/ethtool/netlink.c | 2 ++ 5 files changed, 16 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst index 39c4aba324c3..34254482d295 100644 --- a/Documentation/networking/ethtool-netlink.rst +++ b/Documentation/networking/ethtool-netlink.rst @@ -189,6 +189,7 @@ Kernel to userspace: ===================================== ================================ ``ETHTOOL_MSG_STRSET_GET_REPLY`` string set contents ``ETHTOOL_MSG_LINKINFO_GET_REPLY`` link settings + ``ETHTOOL_MSG_LINKINFO_NTF`` link settings notification ===================================== ================================ ``GET`` requests are sent by userspace applications to retrieve device diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index 5b7806a5bef8..d530fa30de36 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -28,6 +28,7 @@ enum { ETHTOOL_MSG_KERNEL_NONE, ETHTOOL_MSG_STRSET_GET_REPLY, ETHTOOL_MSG_LINKINFO_GET_REPLY, + ETHTOOL_MSG_LINKINFO_NTF, /* add new constants above here */ __ETHTOOL_MSG_KERNEL_CNT, diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c index 8a0a13b478e0..11a467294a33 100644 --- a/net/ethtool/ioctl.c +++ b/net/ethtool/ioctl.c @@ -26,6 +26,7 @@ #include #include #include +#include #include "common.h" @@ -571,7 +572,10 @@ static int ethtool_set_link_ksettings(struct net_device *dev, != link_ksettings.base.link_mode_masks_nwords) return -EINVAL; - return dev->ethtool_ops->set_link_ksettings(dev, &link_ksettings); + err = dev->ethtool_ops->set_link_ksettings(dev, &link_ksettings); + if (err >= 0) + ethtool_notify(dev, ETHTOOL_MSG_LINKINFO_NTF, NULL); + return err; } /* Query device for its ethtool_cmd settings. 
@@ -620,6 +624,7 @@ static int ethtool_set_settings(struct net_device *dev, void __user *useraddr) { struct ethtool_link_ksettings link_ksettings; struct ethtool_cmd cmd; + int ret; ASSERT_RTNL(); @@ -632,7 +637,10 @@ static int ethtool_set_settings(struct net_device *dev, void __user *useraddr) return -EINVAL; link_ksettings.base.link_mode_masks_nwords = __ETHTOOL_LINK_MODE_MASK_NU32; - return dev->ethtool_ops->set_link_ksettings(dev, &link_ksettings); + ret = dev->ethtool_ops->set_link_ksettings(dev, &link_ksettings); + if (ret >= 0) + ethtool_notify(dev, ETHTOOL_MSG_LINKINFO_NTF, NULL); + return ret; } static noinline_for_stack int ethtool_get_drvinfo(struct net_device *dev, diff --git a/net/ethtool/linkinfo.c b/net/ethtool/linkinfo.c index 8a5f68f92425..5d16cb4e8693 100644 --- a/net/ethtool/linkinfo.c +++ b/net/ethtool/linkinfo.c @@ -155,6 +155,8 @@ int ethnl_set_linkinfo(struct sk_buff *skb, struct genl_info *info) ret = dev->ethtool_ops->set_link_ksettings(dev, &ksettings); if (ret < 0) GENL_SET_ERR_MSG(info, "link settings update failed"); + else + ethtool_notify(dev, ETHTOOL_MSG_LINKINFO_NTF, NULL); out_ops: ethnl_ops_complete(dev); diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index 057b67f8ba8c..942da4ebdfe9 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -509,6 +509,7 @@ static int ethnl_default_done(struct netlink_callback *cb) static const struct ethnl_request_ops * ethnl_default_notify_ops[ETHTOOL_MSG_KERNEL_MAX + 1] = { + [ETHTOOL_MSG_LINKINFO_NTF] = ðnl_linkinfo_request_ops, }; /* default notification handler */ @@ -589,6 +590,7 @@ typedef void (*ethnl_notify_handler_t)(struct net_device *dev, unsigned int cmd, const void *data); static const ethnl_notify_handler_t ethnl_notify_handlers[] = { + [ETHTOOL_MSG_LINKINFO_NTF] = ethnl_default_notify, }; void ethtool_notify(struct net_device *dev, unsigned int cmd, const void *data) -- cgit v1.2.3-71-gd317 From f625aa9be8c10f2e4dc677837e240730a25feda7 Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Fri, 27 Dec 2019 15:56:08 +0100 Subject: ethtool: provide link mode information with LINKMODES_GET request Implement LINKMODES_GET netlink request to get link modes related information provided by ETHTOOL_GLINKSETTINGS and ETHTOOL_GSET ioctl commands. This request provides supported, advertised and peer advertised link modes, autonegotiation flag, speed and duplex. LINKMODES_GET request can be used with NLM_F_DUMP (without device identification) to request the information for all devices in current network namespace providing the data. Signed-off-by: Michal Kubecek Reviewed-by: Florian Fainelli Signed-off-by: David S. 
Miller --- Documentation/networking/ethtool-netlink.rst | 36 +++++++ include/linux/ethtool_netlink.h | 3 + include/uapi/linux/ethtool_netlink.h | 18 ++++ net/ethtool/Makefile | 2 +- net/ethtool/linkmodes.c | 140 +++++++++++++++++++++++++++ net/ethtool/netlink.c | 8 ++ net/ethtool/netlink.h | 1 + 7 files changed, 207 insertions(+), 1 deletion(-) create mode 100644 net/ethtool/linkmodes.c (limited to 'include') diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst index 34254482d295..04c55be0264c 100644 --- a/Documentation/networking/ethtool-netlink.rst +++ b/Documentation/networking/ethtool-netlink.rst @@ -182,6 +182,7 @@ Userspace to kernel: ``ETHTOOL_MSG_STRSET_GET`` get string set ``ETHTOOL_MSG_LINKINFO_GET`` get link settings ``ETHTOOL_MSG_LINKINFO_SET`` set link settings + ``ETHTOOL_MSG_LINKMODES_GET`` get link modes info ===================================== ================================ Kernel to userspace: @@ -190,6 +191,7 @@ Kernel to userspace: ``ETHTOOL_MSG_STRSET_GET_REPLY`` string set contents ``ETHTOOL_MSG_LINKINFO_GET_REPLY`` link settings ``ETHTOOL_MSG_LINKINFO_NTF`` link settings notification + ``ETHTOOL_MSG_LINKMODES_GET_REPLY`` link modes info ===================================== ================================ ``GET`` requests are sent by userspace applications to retrieve device @@ -332,6 +334,38 @@ MDI(-X) status and transceiver cannot be set, request with the corresponding attributes is rejected. +LINKMODES_GET +============= + +Requests link modes (supported, advertised and peer advertised) and related +information (autonegotiation status, link speed and duplex) as provided by +``ETHTOOL_GLINKSETTINGS``. The request does not use any attributes. + +Request contents: + + ==================================== ====== ========================== + ``ETHTOOL_A_LINKMODES_HEADER`` nested request header + ==================================== ====== ========================== + +Kernel response contents: + + ==================================== ====== ========================== + ``ETHTOOL_A_LINKMODES_HEADER`` nested reply header + ``ETHTOOL_A_LINKMODES_AUTONEG`` u8 autonegotiation status + ``ETHTOOL_A_LINKMODES_OURS`` bitset advertised link modes + ``ETHTOOL_A_LINKMODES_PEER`` bitset partner link modes + ``ETHTOOL_A_LINKMODES_SPEED`` u32 link speed (Mb/s) + ``ETHTOOL_A_LINKMODES_DUPLEX`` u8 duplex mode + ==================================== ====== ========================== + +For ``ETHTOOL_A_LINKMODES_OURS``, value represents advertised modes and mask +represents supported modes. ``ETHTOOL_A_LINKMODES_PEER`` in the reply is a bit +list. + +``LINKMODES_GET`` allows dump requests (kernel returns reply messages for all +devices supporting the request). + + Request translation =================== @@ -343,6 +377,7 @@ have their netlink replacement yet. ioctl command netlink command =================================== ===================================== ``ETHTOOL_GSET`` ``ETHTOOL_MSG_LINKINFO_GET`` + ``ETHTOOL_MSG_LINKMODES_GET`` ``ETHTOOL_SSET`` ``ETHTOOL_MSG_LINKINFO_SET`` ``ETHTOOL_GDRVINFO`` n/a ``ETHTOOL_GREGS`` n/a @@ -417,6 +452,7 @@ have their netlink replacement yet. 
``ETHTOOL_GPHYSTATS`` n/a ``ETHTOOL_PERQUEUE`` n/a ``ETHTOOL_GLINKSETTINGS`` ``ETHTOOL_MSG_LINKINFO_GET`` + ``ETHTOOL_MSG_LINKMODES_GET`` ``ETHTOOL_SLINKSETTINGS`` ``ETHTOOL_MSG_LINKINFO_SET`` ``ETHTOOL_PHY_GTUNABLE`` n/a ``ETHTOOL_PHY_STUNABLE`` n/a diff --git a/include/linux/ethtool_netlink.h b/include/linux/ethtool_netlink.h index c98f6852c8eb..d01b77887f82 100644 --- a/include/linux/ethtool_netlink.h +++ b/include/linux/ethtool_netlink.h @@ -7,6 +7,9 @@ #include #include +#define __ETHTOOL_LINK_MODE_MASK_NWORDS \ + DIV_ROUND_UP(__ETHTOOL_LINK_MODE_MASK_NBITS, 32) + enum ethtool_multicast_groups { ETHNL_MCGRP_MONITOR, }; diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index d530fa30de36..dc1cae052eee 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -17,6 +17,7 @@ enum { ETHTOOL_MSG_STRSET_GET, ETHTOOL_MSG_LINKINFO_GET, ETHTOOL_MSG_LINKINFO_SET, + ETHTOOL_MSG_LINKMODES_GET, /* add new constants above here */ __ETHTOOL_MSG_USER_CNT, @@ -29,6 +30,7 @@ enum { ETHTOOL_MSG_STRSET_GET_REPLY, ETHTOOL_MSG_LINKINFO_GET_REPLY, ETHTOOL_MSG_LINKINFO_NTF, + ETHTOOL_MSG_LINKMODES_GET_REPLY, /* add new constants above here */ __ETHTOOL_MSG_KERNEL_CNT, @@ -161,6 +163,22 @@ enum { ETHTOOL_A_LINKINFO_MAX = __ETHTOOL_A_LINKINFO_CNT - 1 }; +/* LINKMODES */ + +enum { + ETHTOOL_A_LINKMODES_UNSPEC, + ETHTOOL_A_LINKMODES_HEADER, /* nest - _A_HEADER_* */ + ETHTOOL_A_LINKMODES_AUTONEG, /* u8 */ + ETHTOOL_A_LINKMODES_OURS, /* bitset */ + ETHTOOL_A_LINKMODES_PEER, /* bitset */ + ETHTOOL_A_LINKMODES_SPEED, /* u32 */ + ETHTOOL_A_LINKMODES_DUPLEX, /* u8 */ + + /* add new constants above here */ + __ETHTOOL_A_LINKMODES_CNT, + ETHTOOL_A_LINKMODES_MAX = __ETHTOOL_A_LINKMODES_CNT - 1 +}; + /* generic netlink info */ #define ETHTOOL_GENL_NAME "ethtool" #define ETHTOOL_GENL_VERSION 1 diff --git a/net/ethtool/Makefile b/net/ethtool/Makefile index 765736ec52c0..8023da6672ce 100644 --- a/net/ethtool/Makefile +++ b/net/ethtool/Makefile @@ -4,4 +4,4 @@ obj-y += ioctl.o common.o obj-$(CONFIG_ETHTOOL_NETLINK) += ethtool_nl.o -ethtool_nl-y := netlink.o bitset.o strset.o linkinfo.o +ethtool_nl-y := netlink.o bitset.o strset.o linkinfo.o linkmodes.o diff --git a/net/ethtool/linkmodes.c b/net/ethtool/linkmodes.c new file mode 100644 index 000000000000..81856fa1e632 --- /dev/null +++ b/net/ethtool/linkmodes.c @@ -0,0 +1,140 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include "netlink.h" +#include "common.h" +#include "bitset.h" + +struct linkmodes_req_info { + struct ethnl_req_info base; +}; + +struct linkmodes_reply_data { + struct ethnl_reply_data base; + struct ethtool_link_ksettings ksettings; + struct ethtool_link_settings *lsettings; + bool peer_empty; +}; + +#define LINKMODES_REPDATA(__reply_base) \ + container_of(__reply_base, struct linkmodes_reply_data, base) + +static const struct nla_policy +linkmodes_get_policy[ETHTOOL_A_LINKMODES_MAX + 1] = { + [ETHTOOL_A_LINKMODES_UNSPEC] = { .type = NLA_REJECT }, + [ETHTOOL_A_LINKMODES_HEADER] = { .type = NLA_NESTED }, + [ETHTOOL_A_LINKMODES_AUTONEG] = { .type = NLA_REJECT }, + [ETHTOOL_A_LINKMODES_OURS] = { .type = NLA_REJECT }, + [ETHTOOL_A_LINKMODES_PEER] = { .type = NLA_REJECT }, + [ETHTOOL_A_LINKMODES_SPEED] = { .type = NLA_REJECT }, + [ETHTOOL_A_LINKMODES_DUPLEX] = { .type = NLA_REJECT }, +}; + +static int linkmodes_prepare_data(const struct ethnl_req_info *req_base, + struct ethnl_reply_data *reply_base, + struct genl_info *info) +{ + struct linkmodes_reply_data *data = 
LINKMODES_REPDATA(reply_base); + struct net_device *dev = reply_base->dev; + int ret; + + data->lsettings = &data->ksettings.base; + + ret = ethnl_ops_begin(dev); + if (ret < 0) + return ret; + + ret = __ethtool_get_link_ksettings(dev, &data->ksettings); + if (ret < 0 && info) { + GENL_SET_ERR_MSG(info, "failed to retrieve link settings"); + goto out; + } + + data->peer_empty = + bitmap_empty(data->ksettings.link_modes.lp_advertising, + __ETHTOOL_LINK_MODE_MASK_NBITS); + +out: + ethnl_ops_complete(dev); + return ret; +} + +static int linkmodes_reply_size(const struct ethnl_req_info *req_base, + const struct ethnl_reply_data *reply_base) +{ + const struct linkmodes_reply_data *data = LINKMODES_REPDATA(reply_base); + const struct ethtool_link_ksettings *ksettings = &data->ksettings; + bool compact = req_base->flags & ETHTOOL_FLAG_COMPACT_BITSETS; + int len, ret; + + len = nla_total_size(sizeof(u8)) /* LINKMODES_AUTONEG */ + + nla_total_size(sizeof(u32)) /* LINKMODES_SPEED */ + + nla_total_size(sizeof(u8)) /* LINKMODES_DUPLEX */ + + 0; + ret = ethnl_bitset_size(ksettings->link_modes.advertising, + ksettings->link_modes.supported, + __ETHTOOL_LINK_MODE_MASK_NBITS, + link_mode_names, compact); + if (ret < 0) + return ret; + len += ret; + if (!data->peer_empty) { + ret = ethnl_bitset_size(ksettings->link_modes.lp_advertising, + NULL, __ETHTOOL_LINK_MODE_MASK_NBITS, + link_mode_names, compact); + if (ret < 0) + return ret; + len += ret; + } + + return len; +} + +static int linkmodes_fill_reply(struct sk_buff *skb, + const struct ethnl_req_info *req_base, + const struct ethnl_reply_data *reply_base) +{ + const struct linkmodes_reply_data *data = LINKMODES_REPDATA(reply_base); + const struct ethtool_link_ksettings *ksettings = &data->ksettings; + const struct ethtool_link_settings *lsettings = &ksettings->base; + bool compact = req_base->flags & ETHTOOL_FLAG_COMPACT_BITSETS; + int ret; + + if (nla_put_u8(skb, ETHTOOL_A_LINKMODES_AUTONEG, lsettings->autoneg)) + return -EMSGSIZE; + + ret = ethnl_put_bitset(skb, ETHTOOL_A_LINKMODES_OURS, + ksettings->link_modes.advertising, + ksettings->link_modes.supported, + __ETHTOOL_LINK_MODE_MASK_NBITS, link_mode_names, + compact); + if (ret < 0) + return -EMSGSIZE; + if (!data->peer_empty) { + ret = ethnl_put_bitset(skb, ETHTOOL_A_LINKMODES_PEER, + ksettings->link_modes.lp_advertising, + NULL, __ETHTOOL_LINK_MODE_MASK_NBITS, + link_mode_names, compact); + if (ret < 0) + return -EMSGSIZE; + } + + if (nla_put_u32(skb, ETHTOOL_A_LINKMODES_SPEED, lsettings->speed) || + nla_put_u8(skb, ETHTOOL_A_LINKMODES_DUPLEX, lsettings->duplex)) + return -EMSGSIZE; + + return 0; +} + +const struct ethnl_request_ops ethnl_linkmodes_request_ops = { + .request_cmd = ETHTOOL_MSG_LINKMODES_GET, + .reply_cmd = ETHTOOL_MSG_LINKMODES_GET_REPLY, + .hdr_attr = ETHTOOL_A_LINKMODES_HEADER, + .max_attr = ETHTOOL_A_LINKMODES_MAX, + .req_info_size = sizeof(struct linkmodes_req_info), + .reply_data_size = sizeof(struct linkmodes_reply_data), + .request_policy = linkmodes_get_policy, + + .prepare_data = linkmodes_prepare_data, + .reply_size = linkmodes_reply_size, + .fill_reply = linkmodes_fill_reply, +}; diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index 942da4ebdfe9..703ff3a227a4 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -209,6 +209,7 @@ static const struct ethnl_request_ops * ethnl_default_requests[__ETHTOOL_MSG_USER_CNT] = { [ETHTOOL_MSG_STRSET_GET] = ðnl_strset_request_ops, [ETHTOOL_MSG_LINKINFO_GET] = ðnl_linkinfo_request_ops, + 
[ETHTOOL_MSG_LINKMODES_GET] = &ethnl_linkmodes_request_ops, }; static struct ethnl_dump_ctx *ethnl_dump_context(struct netlink_callback *cb) @@ -630,6 +631,13 @@ static const struct genl_ops ethtool_genl_ops[] = { .flags = GENL_UNS_ADMIN_PERM, .doit = ethnl_set_linkinfo, }, + { + .cmd = ETHTOOL_MSG_LINKMODES_GET, + .doit = ethnl_default_doit, + .start = ethnl_default_start, + .dumpit = ethnl_default_dumpit, + .done = ethnl_default_done, + }, }; static const struct genl_multicast_group ethtool_nl_mcgrps[] = { diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h index 5d56d7779a06..256d38972d1e 100644 --- a/net/ethtool/netlink.h +++ b/net/ethtool/netlink.h @@ -332,6 +332,7 @@ struct ethnl_request_ops { extern const struct ethnl_request_ops ethnl_strset_request_ops; extern const struct ethnl_request_ops ethnl_linkinfo_request_ops; +extern const struct ethnl_request_ops ethnl_linkmodes_request_ops; int ethnl_set_linkinfo(struct sk_buff *skb, struct genl_info *info); -- cgit v1.2.3-71-gd317 From bfbcfe2032e70bd8598d680d39ac177d507e39ac Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Fri, 27 Dec 2019 15:56:13 +0100 Subject: ethtool: set link modes related data with LINKMODES_SET request Implement LINKMODES_SET netlink request to set advertised link modes and related attributes as the ETHTOOL_SLINKSETTINGS and ETHTOOL_SSET commands do. The request allows setting the autonegotiation flag, speed, duplex and advertised link modes. Signed-off-by: Michal Kubecek Reviewed-by: Florian Fainelli Signed-off-by: David S. Miller --- Documentation/networking/ethtool-netlink.rst | 27 +++ include/uapi/linux/ethtool_netlink.h | 1 + net/ethtool/linkmodes.c | 235 +++++++++++++++++++++++++++ net/ethtool/netlink.c | 5 + net/ethtool/netlink.h | 1 + 5 files changed, 269 insertions(+) (limited to 'include') diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst index 04c55be0264c..625c80183563 100644 --- a/Documentation/networking/ethtool-netlink.rst +++ b/Documentation/networking/ethtool-netlink.rst @@ -183,6 +183,7 @@ Userspace to kernel: ``ETHTOOL_MSG_LINKINFO_GET`` get link settings ``ETHTOOL_MSG_LINKINFO_SET`` set link settings ``ETHTOOL_MSG_LINKMODES_GET`` get link modes info + ``ETHTOOL_MSG_LINKMODES_SET`` set link modes info ===================================== ================================ Kernel to userspace: @@ -366,6 +367,30 @@ list. devices supporting the request). + +LINKMODES_SET +============= + +Request contents: + + ==================================== ====== ========================== + ``ETHTOOL_A_LINKMODES_HEADER`` nested request header + ``ETHTOOL_A_LINKMODES_AUTONEG`` u8 autonegotiation status + ``ETHTOOL_A_LINKMODES_OURS`` bitset advertised link modes + ``ETHTOOL_A_LINKMODES_PEER`` bitset partner link modes + ``ETHTOOL_A_LINKMODES_SPEED`` u32 link speed (Mb/s) + ``ETHTOOL_A_LINKMODES_DUPLEX`` u8 duplex mode + ==================================== ====== ========================== + +The ``ETHTOOL_A_LINKMODES_OURS`` bit set allows setting the advertised link +modes. If autonegotiation is on (either newly set or kept from before), the +advertised modes are not changed (no ``ETHTOOL_A_LINKMODES_OURS`` attribute is +present) and at least one of speed and duplex is specified, the kernel adjusts +the advertised modes to all supported modes matching the speed, the duplex, or +both (whichever is specified). With the ioctl interface, this autoselection is +done in userspace by the ethtool utility; the netlink interface is supposed to +allow requesting such changes without knowing exactly which modes the kernel +supports.
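[Editorial aside, not part of the patch: a minimal userspace sketch of the
autoselection behaviour described above. It assumes libmnl, that the
genetlink family id of "ethtool" has already been resolved into family_id,
and the ``ETHTOOL_A_HEADER_DEV_NAME`` request-header attribute introduced
earlier in this series; socket setup and error handling are omitted. Sending
only ``ETHTOOL_A_LINKMODES_SPEED`` while autonegotiation is on lets the
kernel pick the matching advertised modes itself:

#include <libmnl/libmnl.h>
#include <linux/genetlink.h>
#include <linux/ethtool_netlink.h>

static struct nlmsghdr *build_linkmodes_set(char *buf, __u16 family_id,
					    const char *ifname, __u32 speed)
{
	struct nlmsghdr *nlh = mnl_nlmsg_put_header(buf);
	struct genlmsghdr *ghdr;
	struct nlattr *nest;

	nlh->nlmsg_type = family_id;
	nlh->nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;

	ghdr = mnl_nlmsg_put_extra_header(nlh, sizeof(*ghdr));
	ghdr->cmd = ETHTOOL_MSG_LINKMODES_SET;
	ghdr->version = ETHTOOL_GENL_VERSION;

	/* request header nest identifies the device by name */
	nest = mnl_attr_nest_start(nlh, ETHTOOL_A_LINKMODES_HEADER);
	mnl_attr_put_strz(nlh, ETHTOOL_A_HEADER_DEV_NAME, ifname);
	mnl_attr_nest_end(nlh, nest);

	/* no ETHTOOL_A_LINKMODES_OURS attribute: with autoneg on, the
	 * kernel readvertises all supported modes matching this speed
	 */
	mnl_attr_put_u32(nlh, ETHTOOL_A_LINKMODES_SPEED, speed);

	return nlh;
}

This is exactly the situation the paragraph above describes: userspace never
has to query or compute the supported mode mask itself.]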
+ + Request translation =================== @@ -379,6 +404,7 @@ have their netlink replacement yet. ``ETHTOOL_GSET`` ``ETHTOOL_MSG_LINKINFO_GET`` ``ETHTOOL_MSG_LINKMODES_GET`` ``ETHTOOL_SSET`` ``ETHTOOL_MSG_LINKINFO_SET`` + ``ETHTOOL_MSG_LINKMODES_SET`` ``ETHTOOL_GDRVINFO`` n/a ``ETHTOOL_GREGS`` n/a ``ETHTOOL_GWOL`` n/a @@ -454,6 +480,7 @@ have their netlink replacement yet. ``ETHTOOL_GLINKSETTINGS`` ``ETHTOOL_MSG_LINKINFO_GET`` ``ETHTOOL_MSG_LINKMODES_GET`` ``ETHTOOL_SLINKSETTINGS`` ``ETHTOOL_MSG_LINKINFO_SET`` + ``ETHTOOL_MSG_LINKMODES_SET`` ``ETHTOOL_PHY_GTUNABLE`` n/a ``ETHTOOL_PHY_STUNABLE`` n/a ``ETHTOOL_GFECPARAM`` n/a diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index dc1cae052eee..cddf978b98df 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -18,6 +18,7 @@ enum { ETHTOOL_MSG_LINKINFO_GET, ETHTOOL_MSG_LINKINFO_SET, ETHTOOL_MSG_LINKMODES_GET, + ETHTOOL_MSG_LINKMODES_SET, /* add new constants above here */ __ETHTOOL_MSG_USER_CNT, diff --git a/net/ethtool/linkmodes.c b/net/ethtool/linkmodes.c index 81856fa1e632..790b60771d0e 100644 --- a/net/ethtool/linkmodes.c +++ b/net/ethtool/linkmodes.c @@ -138,3 +138,238 @@ const struct ethnl_request_ops ethnl_linkmodes_request_ops = { .reply_size = linkmodes_reply_size, .fill_reply = linkmodes_fill_reply, }; + +/* LINKMODES_SET */ + +struct link_mode_info { + int speed; + u8 duplex; +}; + +#define __DEFINE_LINK_MODE_PARAMS(_speed, _type, _duplex) \ + [ETHTOOL_LINK_MODE(_speed, _type, _duplex)] = { \ + .speed = SPEED_ ## _speed, \ + .duplex = __DUPLEX_ ## _duplex \ + } +#define __DUPLEX_Half DUPLEX_HALF +#define __DUPLEX_Full DUPLEX_FULL +#define __DEFINE_SPECIAL_MODE_PARAMS(_mode) \ + [ETHTOOL_LINK_MODE_ ## _mode ## _BIT] = { \ + .speed = SPEED_UNKNOWN, \ + .duplex = DUPLEX_UNKNOWN, \ + } + +static const struct link_mode_info link_mode_params[] = { + __DEFINE_LINK_MODE_PARAMS(10, T, Half), + __DEFINE_LINK_MODE_PARAMS(10, T, Full), + __DEFINE_LINK_MODE_PARAMS(100, T, Half), + __DEFINE_LINK_MODE_PARAMS(100, T, Full), + __DEFINE_LINK_MODE_PARAMS(1000, T, Half), + __DEFINE_LINK_MODE_PARAMS(1000, T, Full), + __DEFINE_SPECIAL_MODE_PARAMS(Autoneg), + __DEFINE_SPECIAL_MODE_PARAMS(TP), + __DEFINE_SPECIAL_MODE_PARAMS(AUI), + __DEFINE_SPECIAL_MODE_PARAMS(MII), + __DEFINE_SPECIAL_MODE_PARAMS(FIBRE), + __DEFINE_SPECIAL_MODE_PARAMS(BNC), + __DEFINE_LINK_MODE_PARAMS(10000, T, Full), + __DEFINE_SPECIAL_MODE_PARAMS(Pause), + __DEFINE_SPECIAL_MODE_PARAMS(Asym_Pause), + __DEFINE_LINK_MODE_PARAMS(2500, X, Full), + __DEFINE_SPECIAL_MODE_PARAMS(Backplane), + __DEFINE_LINK_MODE_PARAMS(1000, KX, Full), + __DEFINE_LINK_MODE_PARAMS(10000, KX4, Full), + __DEFINE_LINK_MODE_PARAMS(10000, KR, Full), + [ETHTOOL_LINK_MODE_10000baseR_FEC_BIT] = { + .speed = SPEED_10000, + .duplex = DUPLEX_FULL, + }, + __DEFINE_LINK_MODE_PARAMS(20000, MLD2, Full), + __DEFINE_LINK_MODE_PARAMS(20000, KR2, Full), + __DEFINE_LINK_MODE_PARAMS(40000, KR4, Full), + __DEFINE_LINK_MODE_PARAMS(40000, CR4, Full), + __DEFINE_LINK_MODE_PARAMS(40000, SR4, Full), + __DEFINE_LINK_MODE_PARAMS(40000, LR4, Full), + __DEFINE_LINK_MODE_PARAMS(56000, KR4, Full), + __DEFINE_LINK_MODE_PARAMS(56000, CR4, Full), + __DEFINE_LINK_MODE_PARAMS(56000, SR4, Full), + __DEFINE_LINK_MODE_PARAMS(56000, LR4, Full), + __DEFINE_LINK_MODE_PARAMS(25000, CR, Full), + __DEFINE_LINK_MODE_PARAMS(25000, KR, Full), + __DEFINE_LINK_MODE_PARAMS(25000, SR, Full), + __DEFINE_LINK_MODE_PARAMS(50000, CR2, Full), + __DEFINE_LINK_MODE_PARAMS(50000, KR2, Full), 
+ __DEFINE_LINK_MODE_PARAMS(100000, KR4, Full), + __DEFINE_LINK_MODE_PARAMS(100000, SR4, Full), + __DEFINE_LINK_MODE_PARAMS(100000, CR4, Full), + __DEFINE_LINK_MODE_PARAMS(100000, LR4_ER4, Full), + __DEFINE_LINK_MODE_PARAMS(50000, SR2, Full), + __DEFINE_LINK_MODE_PARAMS(1000, X, Full), + __DEFINE_LINK_MODE_PARAMS(10000, CR, Full), + __DEFINE_LINK_MODE_PARAMS(10000, SR, Full), + __DEFINE_LINK_MODE_PARAMS(10000, LR, Full), + __DEFINE_LINK_MODE_PARAMS(10000, LRM, Full), + __DEFINE_LINK_MODE_PARAMS(10000, ER, Full), + __DEFINE_LINK_MODE_PARAMS(2500, T, Full), + __DEFINE_LINK_MODE_PARAMS(5000, T, Full), + __DEFINE_SPECIAL_MODE_PARAMS(FEC_NONE), + __DEFINE_SPECIAL_MODE_PARAMS(FEC_RS), + __DEFINE_SPECIAL_MODE_PARAMS(FEC_BASER), + __DEFINE_LINK_MODE_PARAMS(50000, KR, Full), + __DEFINE_LINK_MODE_PARAMS(50000, SR, Full), + __DEFINE_LINK_MODE_PARAMS(50000, CR, Full), + __DEFINE_LINK_MODE_PARAMS(50000, LR_ER_FR, Full), + __DEFINE_LINK_MODE_PARAMS(50000, DR, Full), + __DEFINE_LINK_MODE_PARAMS(100000, KR2, Full), + __DEFINE_LINK_MODE_PARAMS(100000, SR2, Full), + __DEFINE_LINK_MODE_PARAMS(100000, CR2, Full), + __DEFINE_LINK_MODE_PARAMS(100000, LR2_ER2_FR2, Full), + __DEFINE_LINK_MODE_PARAMS(100000, DR2, Full), + __DEFINE_LINK_MODE_PARAMS(200000, KR4, Full), + __DEFINE_LINK_MODE_PARAMS(200000, SR4, Full), + __DEFINE_LINK_MODE_PARAMS(200000, LR4_ER4_FR4, Full), + __DEFINE_LINK_MODE_PARAMS(200000, DR4, Full), + __DEFINE_LINK_MODE_PARAMS(200000, CR4, Full), + __DEFINE_LINK_MODE_PARAMS(100, T1, Full), + __DEFINE_LINK_MODE_PARAMS(1000, T1, Full), + __DEFINE_LINK_MODE_PARAMS(400000, KR8, Full), + __DEFINE_LINK_MODE_PARAMS(400000, SR8, Full), + __DEFINE_LINK_MODE_PARAMS(400000, LR8_ER8_FR8, Full), + __DEFINE_LINK_MODE_PARAMS(400000, DR8, Full), + __DEFINE_LINK_MODE_PARAMS(400000, CR8, Full), +}; + +static const struct nla_policy +linkmodes_set_policy[ETHTOOL_A_LINKMODES_MAX + 1] = { + [ETHTOOL_A_LINKMODES_UNSPEC] = { .type = NLA_REJECT }, + [ETHTOOL_A_LINKMODES_HEADER] = { .type = NLA_NESTED }, + [ETHTOOL_A_LINKMODES_AUTONEG] = { .type = NLA_U8 }, + [ETHTOOL_A_LINKMODES_OURS] = { .type = NLA_NESTED }, + [ETHTOOL_A_LINKMODES_PEER] = { .type = NLA_REJECT }, + [ETHTOOL_A_LINKMODES_SPEED] = { .type = NLA_U32 }, + [ETHTOOL_A_LINKMODES_DUPLEX] = { .type = NLA_U8 }, +}; + +/* Set advertised link modes to all supported modes matching requested speed + * and duplex values. Called when autonegotiation is on, speed or duplex is + * requested but no link mode change. This is done in userspace with ioctl() + * interface, move it into kernel for netlink. + * Returns true if advertised modes bitmap was modified. 
+ */ +static bool ethnl_auto_linkmodes(struct ethtool_link_ksettings *ksettings, + bool req_speed, bool req_duplex) +{ + unsigned long *advertising = ksettings->link_modes.advertising; + unsigned long *supported = ksettings->link_modes.supported; + DECLARE_BITMAP(old_adv, __ETHTOOL_LINK_MODE_MASK_NBITS); + unsigned int i; + + BUILD_BUG_ON(ARRAY_SIZE(link_mode_params) != + __ETHTOOL_LINK_MODE_MASK_NBITS); + + bitmap_copy(old_adv, advertising, __ETHTOOL_LINK_MODE_MASK_NBITS); + + for (i = 0; i < __ETHTOOL_LINK_MODE_MASK_NBITS; i++) { + const struct link_mode_info *info = &link_mode_params[i]; + + if (info->speed == SPEED_UNKNOWN) + continue; + if (test_bit(i, supported) && + (!req_speed || info->speed == ksettings->base.speed) && + (!req_duplex || info->duplex == ksettings->base.duplex)) + set_bit(i, advertising); + else + clear_bit(i, advertising); + } + + return !bitmap_equal(old_adv, advertising, + __ETHTOOL_LINK_MODE_MASK_NBITS); +} + +static int ethnl_update_linkmodes(struct genl_info *info, struct nlattr **tb, + struct ethtool_link_ksettings *ksettings, + bool *mod) +{ + struct ethtool_link_settings *lsettings = &ksettings->base; + bool req_speed, req_duplex; + int ret; + + *mod = false; + req_speed = tb[ETHTOOL_A_LINKMODES_SPEED]; + req_duplex = tb[ETHTOOL_A_LINKMODES_DUPLEX]; + + ethnl_update_u8(&lsettings->autoneg, tb[ETHTOOL_A_LINKMODES_AUTONEG], + mod); + ret = ethnl_update_bitset(ksettings->link_modes.advertising, + __ETHTOOL_LINK_MODE_MASK_NBITS, + tb[ETHTOOL_A_LINKMODES_OURS], link_mode_names, + info->extack, mod); + if (ret < 0) + return ret; + ethnl_update_u32(&lsettings->speed, tb[ETHTOOL_A_LINKMODES_SPEED], + mod); + ethnl_update_u8(&lsettings->duplex, tb[ETHTOOL_A_LINKMODES_DUPLEX], + mod); + + if (!tb[ETHTOOL_A_LINKMODES_OURS] && lsettings->autoneg && + (req_speed || req_duplex) && + ethnl_auto_linkmodes(ksettings, req_speed, req_duplex)) + *mod = true; + + return 0; +} + +int ethnl_set_linkmodes(struct sk_buff *skb, struct genl_info *info) +{ + struct nlattr *tb[ETHTOOL_A_LINKMODES_MAX + 1]; + struct ethtool_link_ksettings ksettings = {}; + struct ethtool_link_settings *lsettings; + struct ethnl_req_info req_info = {}; + struct net_device *dev; + bool mod = false; + int ret; + + ret = nlmsg_parse(info->nlhdr, GENL_HDRLEN, tb, + ETHTOOL_A_LINKMODES_MAX, linkmodes_set_policy, + info->extack); + if (ret < 0) + return ret; + ret = ethnl_parse_header(&req_info, tb[ETHTOOL_A_LINKMODES_HEADER], + genl_info_net(info), info->extack, true); + if (ret < 0) + return ret; + dev = req_info.dev; + if (!dev->ethtool_ops->get_link_ksettings || + !dev->ethtool_ops->set_link_ksettings) + return -EOPNOTSUPP; + + rtnl_lock(); + ret = ethnl_ops_begin(dev); + if (ret < 0) + goto out_rtnl; + + ret = __ethtool_get_link_ksettings(dev, &ksettings); + if (ret < 0) { + if (info) + GENL_SET_ERR_MSG(info, "failed to retrieve link settings"); + goto out_ops; + } + lsettings = &ksettings.base; + + ret = ethnl_update_linkmodes(info, tb, &ksettings, &mod); + if (ret < 0) + goto out_ops; + + if (mod) { + ret = dev->ethtool_ops->set_link_ksettings(dev, &ksettings); + if (ret < 0) + GENL_SET_ERR_MSG(info, "link settings update failed"); + } + +out_ops: + ethnl_ops_complete(dev); +out_rtnl: + rtnl_unlock(); + dev_put(dev); + return ret; +} diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index 703ff3a227a4..5f28f3cb022d 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -638,6 +638,11 @@ static const struct genl_ops ethtool_genl_ops[] = { .dumpit = ethnl_default_dumpit, .done = 
ethnl_default_done, }, + { + .cmd = ETHTOOL_MSG_LINKMODES_SET, + .flags = GENL_UNS_ADMIN_PERM, + .doit = ethnl_set_linkmodes, + }, }; static const struct genl_multicast_group ethtool_nl_mcgrps[] = { diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h index 256d38972d1e..1269cca8a002 100644 --- a/net/ethtool/netlink.h +++ b/net/ethtool/netlink.h @@ -335,5 +335,6 @@ extern const struct ethnl_request_ops ethnl_linkinfo_request_ops; extern const struct ethnl_request_ops ethnl_linkmodes_request_ops; int ethnl_set_linkinfo(struct sk_buff *skb, struct genl_info *info); +int ethnl_set_linkmodes(struct sk_buff *skb, struct genl_info *info); #endif /* _NET_ETHTOOL_NETLINK_H */ -- cgit v1.2.3-71-gd317 From 1b1b1847c8505df2e1dd8804838526bed22a8bd4 Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Fri, 27 Dec 2019 15:56:18 +0100 Subject: ethtool: add LINKMODES_NTF notification Send ETHTOOL_MSG_LINKMODES_NTF notification message whenever device link settings or advertised modes are modified using ETHTOOL_MSG_LINKMODES_SET netlink message or ETHTOOL_SLINKSETTINGS or ETHTOOL_SSET ioctl commands. The notification message has the same format as reply to LINKMODES_GET request. ETHTOOL_MSG_LINKMODES_SET netlink request only triggers the notification if there is a change but the ioctl command handlers do not check if there is an actual change and trigger the notification whenever the commands are executed. As all work is done by ethnl_default_notify() handler and callback functions introduced to handle LINKMODES_GET requests, all that remains is adding entries for ETHTOOL_MSG_LINKMODES_NTF into ethnl_notify_handlers and ethnl_default_notify_ops lookup tables and calls to ethtool_notify() where needed. Signed-off-by: Michal Kubecek Reviewed-by: Florian Fainelli Signed-off-by: David S. 
Miller --- Documentation/networking/ethtool-netlink.rst | 1 + include/uapi/linux/ethtool_netlink.h | 1 + net/ethtool/ioctl.c | 8 ++++++-- net/ethtool/linkmodes.c | 2 ++ net/ethtool/netlink.c | 2 ++ 5 files changed, 12 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst index 625c80183563..9d96d51e9360 100644 --- a/Documentation/networking/ethtool-netlink.rst +++ b/Documentation/networking/ethtool-netlink.rst @@ -193,6 +193,7 @@ Kernel to userspace: ``ETHTOOL_MSG_LINKINFO_GET_REPLY`` link settings ``ETHTOOL_MSG_LINKINFO_NTF`` link settings notification ``ETHTOOL_MSG_LINKMODES_GET_REPLY`` link modes info + ``ETHTOOL_MSG_LINKMODES_NTF`` link modes notification ===================================== ================================ ``GET`` requests are sent by userspace applications to retrieve device diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index cddf978b98df..35948df6d6e3 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -32,6 +32,7 @@ enum { ETHTOOL_MSG_LINKINFO_GET_REPLY, ETHTOOL_MSG_LINKINFO_NTF, ETHTOOL_MSG_LINKMODES_GET_REPLY, + ETHTOOL_MSG_LINKMODES_NTF, /* add new constants above here */ __ETHTOOL_MSG_KERNEL_CNT, diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c index 11a467294a33..36e2ef2d900d 100644 --- a/net/ethtool/ioctl.c +++ b/net/ethtool/ioctl.c @@ -573,8 +573,10 @@ static int ethtool_set_link_ksettings(struct net_device *dev, return -EINVAL; err = dev->ethtool_ops->set_link_ksettings(dev, &link_ksettings); - if (err >= 0) + if (err >= 0) { ethtool_notify(dev, ETHTOOL_MSG_LINKINFO_NTF, NULL); + ethtool_notify(dev, ETHTOOL_MSG_LINKMODES_NTF, NULL); + } return err; } @@ -638,8 +640,10 @@ static int ethtool_set_settings(struct net_device *dev, void __user *useraddr) link_ksettings.base.link_mode_masks_nwords = __ETHTOOL_LINK_MODE_MASK_NU32; ret = dev->ethtool_ops->set_link_ksettings(dev, &link_ksettings); - if (ret >= 0) + if (ret >= 0) { ethtool_notify(dev, ETHTOOL_MSG_LINKINFO_NTF, NULL); + ethtool_notify(dev, ETHTOOL_MSG_LINKMODES_NTF, NULL); + } return ret; } diff --git a/net/ethtool/linkmodes.c b/net/ethtool/linkmodes.c index 790b60771d0e..0b99f494ad3b 100644 --- a/net/ethtool/linkmodes.c +++ b/net/ethtool/linkmodes.c @@ -364,6 +364,8 @@ int ethnl_set_linkmodes(struct sk_buff *skb, struct genl_info *info) ret = dev->ethtool_ops->set_link_ksettings(dev, &ksettings); if (ret < 0) GENL_SET_ERR_MSG(info, "link settings update failed"); + else + ethtool_notify(dev, ETHTOOL_MSG_LINKMODES_NTF, NULL); } out_ops: diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index 5f28f3cb022d..1b5e1bd26504 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -511,6 +511,7 @@ static int ethnl_default_done(struct netlink_callback *cb) static const struct ethnl_request_ops * ethnl_default_notify_ops[ETHTOOL_MSG_KERNEL_MAX + 1] = { [ETHTOOL_MSG_LINKINFO_NTF] = ðnl_linkinfo_request_ops, + [ETHTOOL_MSG_LINKMODES_NTF] = ðnl_linkmodes_request_ops, }; /* default notification handler */ @@ -592,6 +593,7 @@ typedef void (*ethnl_notify_handler_t)(struct net_device *dev, unsigned int cmd, static const ethnl_notify_handler_t ethnl_notify_handlers[] = { [ETHTOOL_MSG_LINKINFO_NTF] = ethnl_default_notify, + [ETHTOOL_MSG_LINKMODES_NTF] = ethnl_default_notify, }; void ethtool_notify(struct net_device *dev, unsigned int cmd, const void *data) -- cgit v1.2.3-71-gd317 From 
3d2b847fb99cf2b28aa046e486636e555bc6ed1c Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Fri, 27 Dec 2019 15:56:23 +0100 Subject: ethtool: provide link state with LINKSTATE_GET request Implement LINKSTATE_GET netlink request to get link state information. At the moment, only link up flag as provided by ETHTOOL_GLINK ioctl command is returned. LINKSTATE_GET request can be used with NLM_F_DUMP (without device identification) to request the information for all devices in current network namespace providing the data. Signed-off-by: Michal Kubecek Reviewed-by: Florian Fainelli Signed-off-by: David S. Miller --- Documentation/networking/ethtool-netlink.rst | 33 ++++++++++++- include/uapi/linux/ethtool_netlink.h | 14 ++++++ net/ethtool/Makefile | 3 +- net/ethtool/common.c | 8 +++ net/ethtool/common.h | 3 ++ net/ethtool/ioctl.c | 8 +-- net/ethtool/linkstate.c | 74 ++++++++++++++++++++++++++++ net/ethtool/netlink.c | 8 +++ net/ethtool/netlink.h | 1 + 9 files changed, 146 insertions(+), 6 deletions(-) create mode 100644 net/ethtool/linkstate.c (limited to 'include') diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst index 9d96d51e9360..c60afba69e3c 100644 --- a/Documentation/networking/ethtool-netlink.rst +++ b/Documentation/networking/ethtool-netlink.rst @@ -184,6 +184,7 @@ Userspace to kernel: ``ETHTOOL_MSG_LINKINFO_SET`` set link settings ``ETHTOOL_MSG_LINKMODES_GET`` get link modes info ``ETHTOOL_MSG_LINKMODES_SET`` set link modes info + ``ETHTOOL_MSG_LINKSTATE_GET`` get link state ===================================== ================================ Kernel to userspace: @@ -194,6 +195,7 @@ Kernel to userspace: ``ETHTOOL_MSG_LINKINFO_NTF`` link settings notification ``ETHTOOL_MSG_LINKMODES_GET_REPLY`` link modes info ``ETHTOOL_MSG_LINKMODES_NTF`` link modes notification + ``ETHTOOL_MSG_LINKSTATE_GET_REPLY`` link state info ===================================== ================================ ``GET`` requests are sent by userspace applications to retrieve device @@ -392,6 +394,35 @@ is supposed to allow requesting changes without knowing what exactly kernel supports. +LINKSTATE_GET +============= + +Requests link state information. At the moment, only link up/down flag (as +provided by ``ETHTOOL_GLINK`` ioctl command) is provided but some future +extensions are planned (e.g. link down reason). This request does not have any +attributes. + +Request contents: + + ==================================== ====== ========================== + ``ETHTOOL_A_LINKSTATE_HEADER`` nested request header + ==================================== ====== ========================== + +Kernel response contents: + + ==================================== ====== ========================== + ``ETHTOOL_A_LINKSTATE_HEADER`` nested reply header + ``ETHTOOL_A_LINKSTATE_LINK`` bool link state (up/down) + ==================================== ====== ========================== + +For most NIC drivers, the value of ``ETHTOOL_A_LINKSTATE_LINK`` returns +carrier flag provided by ``netif_carrier_ok()`` but there are drivers which +define their own handler. + +``LINKSTATE_GET`` allows dump requests (kernel returns reply messages for all +devices supporting the request). + + Request translation =================== @@ -413,7 +444,7 @@ have their netlink replacement yet. 
``ETHTOOL_GMSGLVL`` n/a ``ETHTOOL_SMSGLVL`` n/a ``ETHTOOL_NWAY_RST`` n/a - ``ETHTOOL_GLINK`` n/a + ``ETHTOOL_GLINK`` ``ETHTOOL_MSG_LINKSTATE_GET`` ``ETHTOOL_GEEPROM`` n/a ``ETHTOOL_SEEPROM`` n/a ``ETHTOOL_GCOALESCE`` n/a diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index 35948df6d6e3..02f82f42a889 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -19,6 +19,7 @@ enum { ETHTOOL_MSG_LINKINFO_SET, ETHTOOL_MSG_LINKMODES_GET, ETHTOOL_MSG_LINKMODES_SET, + ETHTOOL_MSG_LINKSTATE_GET, /* add new constants above here */ __ETHTOOL_MSG_USER_CNT, @@ -33,6 +34,7 @@ enum { ETHTOOL_MSG_LINKINFO_NTF, ETHTOOL_MSG_LINKMODES_GET_REPLY, ETHTOOL_MSG_LINKMODES_NTF, + ETHTOOL_MSG_LINKSTATE_GET_REPLY, /* add new constants above here */ __ETHTOOL_MSG_KERNEL_CNT, @@ -181,6 +183,18 @@ enum { ETHTOOL_A_LINKMODES_MAX = __ETHTOOL_A_LINKMODES_CNT - 1 }; +/* LINKSTATE */ + +enum { + ETHTOOL_A_LINKSTATE_UNSPEC, + ETHTOOL_A_LINKSTATE_HEADER, /* nest - _A_HEADER_* */ + ETHTOOL_A_LINKSTATE_LINK, /* u8 */ + + /* add new constants above here */ + __ETHTOOL_A_LINKSTATE_CNT, + ETHTOOL_A_LINKSTATE_MAX = __ETHTOOL_A_LINKSTATE_CNT - 1 +}; + /* generic netlink info */ #define ETHTOOL_GENL_NAME "ethtool" #define ETHTOOL_GENL_VERSION 1 diff --git a/net/ethtool/Makefile b/net/ethtool/Makefile index 8023da6672ce..9a1332fb0cc6 100644 --- a/net/ethtool/Makefile +++ b/net/ethtool/Makefile @@ -4,4 +4,5 @@ obj-y += ioctl.o common.o obj-$(CONFIG_ETHTOOL_NETLINK) += ethtool_nl.o -ethtool_nl-y := netlink.o bitset.o strset.o linkinfo.o linkmodes.o +ethtool_nl-y := netlink.o bitset.o strset.o linkinfo.o linkmodes.o \ + linkstate.o diff --git a/net/ethtool/common.c b/net/ethtool/common.c index 1d4a0aeff2cb..e621b1694d2f 100644 --- a/net/ethtool/common.c +++ b/net/ethtool/common.c @@ -217,3 +217,11 @@ convert_legacy_settings_to_link_ksettings( = legacy_settings->eth_tp_mdix_ctrl; return retval; } + +int __ethtool_get_link(struct net_device *dev) +{ + if (!dev->ethtool_ops->get_link) + return -EOPNOTSUPP; + + return netif_running(dev) && dev->ethtool_ops->get_link(dev); +} diff --git a/net/ethtool/common.h b/net/ethtool/common.h index c8a237402729..5c5f7dc90cd4 100644 --- a/net/ethtool/common.h +++ b/net/ethtool/common.h @@ -3,6 +3,7 @@ #ifndef _ETHTOOL_COMMON_H #define _ETHTOOL_COMMON_H +#include #include /* compose link mode index from speed, type and duplex */ @@ -19,6 +20,8 @@ extern const char phy_tunable_strings[__ETHTOOL_PHY_TUNABLE_COUNT][ETH_GSTRING_LEN]; extern const char link_mode_names[][ETH_GSTRING_LEN]; +int __ethtool_get_link(struct net_device *dev); + bool convert_legacy_settings_to_link_ksettings( struct ethtool_link_ksettings *link_ksettings, const struct ethtool_cmd *legacy_settings); diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c index 36e2ef2d900d..182bffbffa78 100644 --- a/net/ethtool/ioctl.c +++ b/net/ethtool/ioctl.c @@ -1365,12 +1365,12 @@ static int ethtool_nway_reset(struct net_device *dev) static int ethtool_get_link(struct net_device *dev, char __user *useraddr) { struct ethtool_value edata = { .cmd = ETHTOOL_GLINK }; + int link = __ethtool_get_link(dev); - if (!dev->ethtool_ops->get_link) - return -EOPNOTSUPP; - - edata.data = netif_running(dev) && dev->ethtool_ops->get_link(dev); + if (link < 0) + return link; + edata.data = link; if (copy_to_user(useraddr, &edata, sizeof(edata))) return -EFAULT; return 0; diff --git a/net/ethtool/linkstate.c b/net/ethtool/linkstate.c new file mode 100644 index 000000000000..2740cde0a182 --- 
/dev/null +++ b/net/ethtool/linkstate.c @@ -0,0 +1,74 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include "netlink.h" +#include "common.h" + +struct linkstate_req_info { + struct ethnl_req_info base; +}; + +struct linkstate_reply_data { + struct ethnl_reply_data base; + int link; +}; + +#define LINKSTATE_REPDATA(__reply_base) \ + container_of(__reply_base, struct linkstate_reply_data, base) + +static const struct nla_policy +linkstate_get_policy[ETHTOOL_A_LINKSTATE_MAX + 1] = { + [ETHTOOL_A_LINKSTATE_UNSPEC] = { .type = NLA_REJECT }, + [ETHTOOL_A_LINKSTATE_HEADER] = { .type = NLA_NESTED }, + [ETHTOOL_A_LINKSTATE_LINK] = { .type = NLA_REJECT }, +}; + +static int linkstate_prepare_data(const struct ethnl_req_info *req_base, + struct ethnl_reply_data *reply_base, + struct genl_info *info) +{ + struct linkstate_reply_data *data = LINKSTATE_REPDATA(reply_base); + struct net_device *dev = reply_base->dev; + int ret; + + ret = ethnl_ops_begin(dev); + if (ret < 0) + return ret; + data->link = __ethtool_get_link(dev); + ethnl_ops_complete(dev); + + return 0; +} + +static int linkstate_reply_size(const struct ethnl_req_info *req_base, + const struct ethnl_reply_data *reply_base) +{ + return nla_total_size(sizeof(u8)) /* LINKSTATE_LINK */ + + 0; +} + +static int linkstate_fill_reply(struct sk_buff *skb, + const struct ethnl_req_info *req_base, + const struct ethnl_reply_data *reply_base) +{ + struct linkstate_reply_data *data = LINKSTATE_REPDATA(reply_base); + + if (data->link >= 0 && + nla_put_u8(skb, ETHTOOL_A_LINKSTATE_LINK, !!data->link)) + return -EMSGSIZE; + + return 0; +} + +const struct ethnl_request_ops ethnl_linkstate_request_ops = { + .request_cmd = ETHTOOL_MSG_LINKSTATE_GET, + .reply_cmd = ETHTOOL_MSG_LINKSTATE_GET_REPLY, + .hdr_attr = ETHTOOL_A_LINKSTATE_HEADER, + .max_attr = ETHTOOL_A_LINKSTATE_MAX, + .req_info_size = sizeof(struct linkstate_req_info), + .reply_data_size = sizeof(struct linkstate_reply_data), + .request_policy = linkstate_get_policy, + + .prepare_data = linkstate_prepare_data, + .reply_size = linkstate_reply_size, + .fill_reply = linkstate_fill_reply, +}; diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index 1b5e1bd26504..4ca96c7b86b3 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -210,6 +210,7 @@ ethnl_default_requests[__ETHTOOL_MSG_USER_CNT] = { [ETHTOOL_MSG_STRSET_GET] = ðnl_strset_request_ops, [ETHTOOL_MSG_LINKINFO_GET] = ðnl_linkinfo_request_ops, [ETHTOOL_MSG_LINKMODES_GET] = ðnl_linkmodes_request_ops, + [ETHTOOL_MSG_LINKSTATE_GET] = ðnl_linkstate_request_ops, }; static struct ethnl_dump_ctx *ethnl_dump_context(struct netlink_callback *cb) @@ -645,6 +646,13 @@ static const struct genl_ops ethtool_genl_ops[] = { .flags = GENL_UNS_ADMIN_PERM, .doit = ethnl_set_linkmodes, }, + { + .cmd = ETHTOOL_MSG_LINKSTATE_GET, + .doit = ethnl_default_doit, + .start = ethnl_default_start, + .dumpit = ethnl_default_dumpit, + .done = ethnl_default_done, + }, }; static const struct genl_multicast_group ethtool_nl_mcgrps[] = { diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h index 1269cca8a002..da9d6521a4eb 100644 --- a/net/ethtool/netlink.h +++ b/net/ethtool/netlink.h @@ -333,6 +333,7 @@ struct ethnl_request_ops { extern const struct ethnl_request_ops ethnl_strset_request_ops; extern const struct ethnl_request_ops ethnl_linkinfo_request_ops; extern const struct ethnl_request_ops ethnl_linkmodes_request_ops; +extern const struct ethnl_request_ops ethnl_linkstate_request_ops; int ethnl_set_linkinfo(struct sk_buff *skb, struct genl_info 
*info); int ethnl_set_linkmodes(struct sk_buff *skb, struct genl_info *info); -- cgit v1.2.3-71-gd317 From 544fed47af4d2174ac0b550e9c8da15c2dfdb117 Mon Sep 17 00:00:00 2001 From: Vladimir Oltean Date: Fri, 27 Dec 2019 15:02:28 +0200 Subject: ptp: introduce ptp_cancel_worker_sync In order to effectively use the PTP kernel thread for tasks such as timestamping packets, allow the user control over stopping it, which is needed e.g. when the timestamping queues must be drained. Signed-off-by: Vladimir Oltean Signed-off-by: David S. Miller --- drivers/ptp/ptp_clock.c | 6 ++++++ include/linux/ptp_clock_kernel.h | 9 +++++++++ 2 files changed, 15 insertions(+) (limited to 'include') diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c index e60eab7f8a61..4f0d91a76dcb 100644 --- a/drivers/ptp/ptp_clock.c +++ b/drivers/ptp/ptp_clock.c @@ -371,6 +371,12 @@ int ptp_schedule_worker(struct ptp_clock *ptp, unsigned long delay) } EXPORT_SYMBOL(ptp_schedule_worker); +void ptp_cancel_worker_sync(struct ptp_clock *ptp) +{ + kthread_cancel_delayed_work_sync(&ptp->aux_work); +} +EXPORT_SYMBOL(ptp_cancel_worker_sync); + /* module operations */ static void __exit ptp_exit(void) diff --git a/include/linux/ptp_clock_kernel.h b/include/linux/ptp_clock_kernel.h index 93cc4f1d444a..c64a1ef87240 100644 --- a/include/linux/ptp_clock_kernel.h +++ b/include/linux/ptp_clock_kernel.h @@ -243,6 +243,13 @@ int ptp_find_pin(struct ptp_clock *ptp, int ptp_schedule_worker(struct ptp_clock *ptp, unsigned long delay); +/** + * ptp_cancel_worker_sync() - cancel the ptp auxiliary clock worker + * + * @ptp: The clock obtained from ptp_clock_register(). + */ +void ptp_cancel_worker_sync(struct ptp_clock *ptp); + #else static inline struct ptp_clock *ptp_clock_register(struct ptp_clock_info *info, struct device *parent) @@ -260,6 +267,8 @@ static inline int ptp_find_pin(struct ptp_clock *ptp, static inline int ptp_schedule_worker(struct ptp_clock *ptp, unsigned long delay) { return -EOPNOTSUPP; } +static inline void ptp_cancel_worker_sync(struct ptp_clock *ptp) +{ } #endif -- cgit v1.2.3-71-gd317 From 1e762bd278d2a70bc74b9cbee7f1e93bd4704fe2 Mon Sep 17 00:00:00 2001 From: Vladimir Oltean Date: Fri, 27 Dec 2019 15:02:29 +0200 Subject: net: dsa: sja1105: Use PTP core's dedicated kernel thread for RX timestamping Also move the queue of skbs waiting for RX timestamps into the ptp_data structure, since it is not needed when PTP support is not compiled in. Signed-off-by: Vladimir Oltean Signed-off-by: David S. Miller
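Taken together, these two patches yield the driver-side pattern sketched below. This is a minimal sketch, not code from either patch: the "foo" driver, its private structure and foo_attach_hw_timestamp() are illustrative stand-ins; only ptp_schedule_worker(), the new ptp_cancel_worker_sync() and the do_aux_work callback contract are the real APIs involved.

    #include <linux/ptp_clock_kernel.h>
    #include <linux/skbuff.h>

    struct foo_priv {
            struct ptp_clock_info caps;     /* caps.do_aux_work = foo_do_aux_work */
            struct ptp_clock *clock;        /* from ptp_clock_register(&caps, dev) */
            struct sk_buff_head rxtstamp_queue;
    };

    /* Illustrative helper, not part of either patch */
    static void foo_attach_hw_timestamp(struct foo_priv *priv, struct sk_buff *skb);

    /* Runs on the PTP core's kthread: a sleepable context, so the full PTP
     * clock can be read over a slow bus to reconstruct RX timestamps.
     */
    static long foo_do_aux_work(struct ptp_clock_info *info)
    {
            struct foo_priv *priv = container_of(info, struct foo_priv, caps);
            struct sk_buff *skb;

            while ((skb = skb_dequeue(&priv->rxtstamp_queue)) != NULL)
                    foo_attach_hw_timestamp(priv, skb);

            return -1;      /* negative return value: do not reschedule */
    }

    /* RX hot path: queue the skb and kick the worker to run immediately */
    static void foo_defer_rxtstamp(struct foo_priv *priv, struct sk_buff *skb)
    {
            skb_queue_tail(&priv->rxtstamp_queue, skb);
            ptp_schedule_worker(priv->clock, 0);
    }

    /* Teardown: stop the worker before draining its queue, which is exactly
     * the case ptp_cancel_worker_sync() was introduced for.
     */
    static void foo_teardown(struct foo_priv *priv)
    {
            ptp_cancel_worker_sync(priv->clock);
            skb_queue_purge(&priv->rxtstamp_queue);
            ptp_clock_unregister(priv->clock);
    }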
--- drivers/net/dsa/sja1105/sja1105_ptp.c | 33 +++++++++++++++------------------ drivers/net/dsa/sja1105/sja1105_ptp.h | 1 + include/linux/dsa/sja1105.h | 2 -- 3 files changed, 16 insertions(+), 20 deletions(-) (limited to 'include') diff --git a/drivers/net/dsa/sja1105/sja1105_ptp.c b/drivers/net/dsa/sja1105/sja1105_ptp.c index 54258a25031d..387b22a86a53 100644 --- a/drivers/net/dsa/sja1105/sja1105_ptp.c +++ b/drivers/net/dsa/sja1105/sja1105_ptp.c @@ -367,22 +367,16 @@ static int sja1105_ptpclkval_write(struct sja1105_private *priv, u64 ticks, ptp_sts); } -#define rxtstamp_to_tagger(d) \ container_of((d), struct sja1105_tagger_data, rxtstamp_work) -#define tagger_to_sja1105(d) \ container_of((d), struct sja1105_private, tagger_data) - -static void sja1105_rxtstamp_work(struct work_struct *work) +static long sja1105_rxtstamp_work(struct ptp_clock_info *ptp) { - struct sja1105_tagger_data *tagger_data = rxtstamp_to_tagger(work); - struct sja1105_private *priv = tagger_to_sja1105(tagger_data); - struct sja1105_ptp_data *ptp_data = &priv->ptp_data; + struct sja1105_ptp_data *ptp_data = ptp_caps_to_data(ptp); + struct sja1105_private *priv = ptp_data_to_sja1105(ptp_data); struct dsa_switch *ds = priv->ds; struct sk_buff *skb; mutex_lock(&ptp_data->lock); - while ((skb = skb_dequeue(&tagger_data->skb_rxtstamp_queue)) != NULL) { + while ((skb = skb_dequeue(&ptp_data->skb_rxtstamp_queue)) != NULL) { struct skb_shared_hwtstamps *shwt = skb_hwtstamps(skb); u64 ticks, ts; int rc; @@ -404,6 +398,9 @@ static void sja1105_rxtstamp_work(struct work_struct *work) } mutex_unlock(&ptp_data->lock); + + /* Don't restart */ + return -1; } /* Called from dsa_skb_defer_rx_timestamp */ @@ -411,16 +408,16 @@ bool sja1105_port_rxtstamp(struct dsa_switch *ds, int port, struct sk_buff *skb, unsigned int type) { struct sja1105_private *priv = ds->priv; - struct sja1105_tagger_data *tagger_data = &priv->tagger_data; + struct sja1105_ptp_data *ptp_data = &priv->ptp_data; - if (!test_bit(SJA1105_HWTS_RX_EN, &tagger_data->state)) + if (!test_bit(SJA1105_HWTS_RX_EN, &priv->tagger_data.state)) return false; /* We need to read the full PTP clock to reconstruct the Rx * timestamp. For that we need a sleepable context. 
*/ - skb_queue_tail(&tagger_data->skb_rxtstamp_queue, skb); - schedule_work(&tagger_data->rxtstamp_work); + skb_queue_tail(&ptp_data->skb_rxtstamp_queue, skb); + ptp_schedule_worker(ptp_data->clock, 0); return true; } @@ -628,11 +625,11 @@ int sja1105_ptp_clock_register(struct dsa_switch *ds) .adjtime = sja1105_ptp_adjtime, .gettimex64 = sja1105_ptp_gettimex, .settime64 = sja1105_ptp_settime, + .do_aux_work = sja1105_rxtstamp_work, .max_adj = SJA1105_MAX_ADJ_PPB, }; - skb_queue_head_init(&tagger_data->skb_rxtstamp_queue); - INIT_WORK(&tagger_data->rxtstamp_work, sja1105_rxtstamp_work); + skb_queue_head_init(&ptp_data->skb_rxtstamp_queue); spin_lock_init(&tagger_data->meta_lock); ptp_data->clock = ptp_clock_register(&ptp_data->caps, ds->dev); @@ -653,8 +650,8 @@ void sja1105_ptp_clock_unregister(struct dsa_switch *ds) if (IS_ERR_OR_NULL(ptp_data->clock)) return; - cancel_work_sync(&priv->tagger_data.rxtstamp_work); - skb_queue_purge(&priv->tagger_data.skb_rxtstamp_queue); + ptp_cancel_worker_sync(ptp_data->clock); + skb_queue_purge(&ptp_data->skb_rxtstamp_queue); ptp_clock_unregister(ptp_data->clock); ptp_data->clock = NULL; } diff --git a/drivers/net/dsa/sja1105/sja1105_ptp.h b/drivers/net/dsa/sja1105/sja1105_ptp.h index 470f44b76318..6f4a19eec709 100644 --- a/drivers/net/dsa/sja1105/sja1105_ptp.h +++ b/drivers/net/dsa/sja1105/sja1105_ptp.h @@ -30,6 +30,7 @@ struct sja1105_ptp_cmd { }; struct sja1105_ptp_data { + struct sk_buff_head skb_rxtstamp_queue; struct ptp_clock_info caps; struct ptp_clock *clock; struct sja1105_ptp_cmd cmd; diff --git a/include/linux/dsa/sja1105.h b/include/linux/dsa/sja1105.h index 897e799dbcb9..c0b6a603ea8c 100644 --- a/include/linux/dsa/sja1105.h +++ b/include/linux/dsa/sja1105.h @@ -37,8 +37,6 @@ * the structure defined in struct sja1105_private. 
*/ struct sja1105_tagger_data { - struct sk_buff_head skb_rxtstamp_queue; - struct work_struct rxtstamp_work; struct sk_buff *stampable_skb; /* Protects concurrent access to the meta state machine * from taggers running on multiple ports on SMP systems -- cgit v1.2.3-71-gd317 From 68e039f966cb577c91649a02591646ac3919f8c9 Mon Sep 17 00:00:00 2001 From: Sven Eckelmann Date: Wed, 1 Jan 2020 00:00:01 +0100 Subject: batman-adv: Update copyright years for 2020 Signed-off-by: Sven Eckelmann Signed-off-by: Simon Wunderlich --- include/uapi/linux/batadv_packet.h | 2 +- include/uapi/linux/batman_adv.h | 2 +- net/batman-adv/Kconfig | 2 +- net/batman-adv/Makefile | 2 +- net/batman-adv/bat_algo.c | 2 +- net/batman-adv/bat_algo.h | 2 +- net/batman-adv/bat_iv_ogm.c | 2 +- net/batman-adv/bat_iv_ogm.h | 2 +- net/batman-adv/bat_v.c | 2 +- net/batman-adv/bat_v.h | 2 +- net/batman-adv/bat_v_elp.c | 2 +- net/batman-adv/bat_v_elp.h | 2 +- net/batman-adv/bat_v_ogm.c | 2 +- net/batman-adv/bat_v_ogm.h | 2 +- net/batman-adv/bitarray.c | 2 +- net/batman-adv/bitarray.h | 2 +- net/batman-adv/bridge_loop_avoidance.c | 2 +- net/batman-adv/bridge_loop_avoidance.h | 2 +- net/batman-adv/debugfs.c | 2 +- net/batman-adv/debugfs.h | 2 +- net/batman-adv/distributed-arp-table.c | 2 +- net/batman-adv/distributed-arp-table.h | 2 +- net/batman-adv/fragmentation.c | 2 +- net/batman-adv/fragmentation.h | 2 +- net/batman-adv/gateway_client.c | 2 +- net/batman-adv/gateway_client.h | 2 +- net/batman-adv/gateway_common.c | 2 +- net/batman-adv/gateway_common.h | 2 +- net/batman-adv/hard-interface.c | 2 +- net/batman-adv/hard-interface.h | 2 +- net/batman-adv/hash.c | 2 +- net/batman-adv/hash.h | 2 +- net/batman-adv/icmp_socket.c | 2 +- net/batman-adv/icmp_socket.h | 2 +- net/batman-adv/log.c | 2 +- net/batman-adv/log.h | 2 +- net/batman-adv/main.c | 2 +- net/batman-adv/main.h | 2 +- net/batman-adv/multicast.c | 2 +- net/batman-adv/multicast.h | 2 +- net/batman-adv/netlink.c | 2 +- net/batman-adv/netlink.h | 2 +- net/batman-adv/network-coding.c | 2 +- net/batman-adv/network-coding.h | 2 +- net/batman-adv/originator.c | 2 +- net/batman-adv/originator.h | 2 +- net/batman-adv/routing.c | 2 +- net/batman-adv/routing.h | 2 +- net/batman-adv/send.c | 2 +- net/batman-adv/send.h | 2 +- net/batman-adv/soft-interface.c | 2 +- net/batman-adv/soft-interface.h | 2 +- net/batman-adv/sysfs.c | 2 +- net/batman-adv/sysfs.h | 2 +- net/batman-adv/tp_meter.c | 2 +- net/batman-adv/tp_meter.h | 2 +- net/batman-adv/trace.c | 2 +- net/batman-adv/trace.h | 2 +- net/batman-adv/translation-table.c | 2 +- net/batman-adv/translation-table.h | 2 +- net/batman-adv/tvlv.c | 2 +- net/batman-adv/tvlv.h | 2 +- net/batman-adv/types.h | 2 +- 63 files changed, 63 insertions(+), 63 deletions(-) (limited to 'include') diff --git a/include/uapi/linux/batadv_packet.h b/include/uapi/linux/batadv_packet.h index 2a15f01c2243..0ae34c85ef9e 100644 --- a/include/uapi/linux/batadv_packet.h +++ b/include/uapi/linux/batadv_packet.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) */ -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/include/uapi/linux/batman_adv.h b/include/uapi/linux/batman_adv.h index 67f4636758af..617c180ff0c8 100644 --- a/include/uapi/linux/batman_adv.h +++ b/include/uapi/linux/batman_adv.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: MIT */ -/* Copyright (C) 2016-2019 B.A.T.M.A.N. 
contributors: +/* Copyright (C) 2016-2020 B.A.T.M.A.N. contributors: * * Matthias Schiffer */ diff --git a/net/batman-adv/Kconfig b/net/batman-adv/Kconfig index d5028af750d5..045b2b183213 100644 --- a/net/batman-adv/Kconfig +++ b/net/batman-adv/Kconfig @@ -1,5 +1,5 @@ # SPDX-License-Identifier: GPL-2.0 -# Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +# Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: # # Marek Lindner, Simon Wunderlich diff --git a/net/batman-adv/Makefile b/net/batman-adv/Makefile index fd63e116d9ff..daa49af7ff40 100644 --- a/net/batman-adv/Makefile +++ b/net/batman-adv/Makefile @@ -1,5 +1,5 @@ # SPDX-License-Identifier: GPL-2.0 -# Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +# Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: # # Marek Lindner, Simon Wunderlich diff --git a/net/batman-adv/bat_algo.c b/net/batman-adv/bat_algo.c index fa39eaaab9d7..382fbe51fd34 100644 --- a/net/batman-adv/bat_algo.c +++ b/net/batman-adv/bat_algo.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/bat_algo.h b/net/batman-adv/bat_algo.h index 37898da8ad48..686a60bc9492 100644 --- a/net/batman-adv/bat_algo.h +++ b/net/batman-adv/bat_algo.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2011-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2011-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Linus Lüssing */ diff --git a/net/batman-adv/bat_iv_ogm.c b/net/batman-adv/bat_iv_ogm.c index 5b0b20e6da95..f0209505e41a 100644 --- a/net/batman-adv/bat_iv_ogm.c +++ b/net/batman-adv/bat_iv_ogm.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/bat_iv_ogm.h b/net/batman-adv/bat_iv_ogm.h index c7a9ba305bfc..0c57c1000c64 100644 --- a/net/batman-adv/bat_iv_ogm.h +++ b/net/batman-adv/bat_iv_ogm.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/bat_v.c b/net/batman-adv/bat_v.c index 4ff6cf1ecae7..0ecaf1bb0068 100644 --- a/net/batman-adv/bat_v.c +++ b/net/batman-adv/bat_v.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2013-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2013-2020 B.A.T.M.A.N. contributors: * * Linus Lüssing, Marek Lindner */ diff --git a/net/batman-adv/bat_v.h b/net/batman-adv/bat_v.h index 37833db098e6..5e0be10bc84e 100644 --- a/net/batman-adv/bat_v.h +++ b/net/batman-adv/bat_v.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2011-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2011-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Linus Lüssing */ diff --git a/net/batman-adv/bat_v_elp.c b/net/batman-adv/bat_v_elp.c index 6536e8cc53a0..1e3172db7492 100644 --- a/net/batman-adv/bat_v_elp.c +++ b/net/batman-adv/bat_v_elp.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2011-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2011-2020 B.A.T.M.A.N. 
contributors: * * Linus Lüssing, Marek Lindner */ diff --git a/net/batman-adv/bat_v_elp.h b/net/batman-adv/bat_v_elp.h index 1a29505f4f66..4358d436be2a 100644 --- a/net/batman-adv/bat_v_elp.h +++ b/net/batman-adv/bat_v_elp.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2013-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2013-2020 B.A.T.M.A.N. contributors: * * Linus Lüssing, Marek Lindner */ diff --git a/net/batman-adv/bat_v_ogm.c b/net/batman-adv/bat_v_ogm.c index 714ce56cfcc8..969466218999 100644 --- a/net/batman-adv/bat_v_ogm.c +++ b/net/batman-adv/bat_v_ogm.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2013-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2013-2020 B.A.T.M.A.N. contributors: * * Antonio Quartulli */ diff --git a/net/batman-adv/bat_v_ogm.h b/net/batman-adv/bat_v_ogm.h index bf16d040461d..0ae2575f70bb 100644 --- a/net/batman-adv/bat_v_ogm.h +++ b/net/batman-adv/bat_v_ogm.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2013-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2013-2020 B.A.T.M.A.N. contributors: * * Antonio Quartulli */ diff --git a/net/batman-adv/bitarray.c b/net/batman-adv/bitarray.c index 7f04a6acf14e..4bc695cda397 100644 --- a/net/batman-adv/bitarray.c +++ b/net/batman-adv/bitarray.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2006-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2006-2020 B.A.T.M.A.N. contributors: * * Simon Wunderlich, Marek Lindner */ diff --git a/net/batman-adv/bitarray.h b/net/batman-adv/bitarray.h index 84ad2d2b6ac9..533c6d44cb58 100644 --- a/net/batman-adv/bitarray.h +++ b/net/batman-adv/bitarray.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2006-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2006-2020 B.A.T.M.A.N. contributors: * * Simon Wunderlich, Marek Lindner */ diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c index 0eff33580f95..41cc87f06b14 100644 --- a/net/batman-adv/bridge_loop_avoidance.c +++ b/net/batman-adv/bridge_loop_avoidance.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2011-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2011-2020 B.A.T.M.A.N. contributors: * * Simon Wunderlich */ diff --git a/net/batman-adv/bridge_loop_avoidance.h b/net/batman-adv/bridge_loop_avoidance.h index 02b24a861a85..41edb2c4a327 100644 --- a/net/batman-adv/bridge_loop_avoidance.h +++ b/net/batman-adv/bridge_loop_avoidance.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2011-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2011-2020 B.A.T.M.A.N. contributors: * * Simon Wunderlich */ diff --git a/net/batman-adv/debugfs.c b/net/batman-adv/debugfs.c index 38c4d8e51155..452856c27d20 100644 --- a/net/batman-adv/debugfs.c +++ b/net/batman-adv/debugfs.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2010-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2010-2020 B.A.T.M.A.N. contributors: * * Marek Lindner */ diff --git a/net/batman-adv/debugfs.h b/net/batman-adv/debugfs.h index 1c5afd301ce9..7e2e8f586f42 100644 --- a/net/batman-adv/debugfs.h +++ b/net/batman-adv/debugfs.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2010-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2010-2020 B.A.T.M.A.N. 
contributors: * * Marek Lindner */ diff --git a/net/batman-adv/distributed-arp-table.c b/net/batman-adv/distributed-arp-table.c index 5004e38fe792..906011b60d66 100644 --- a/net/batman-adv/distributed-arp-table.c +++ b/net/batman-adv/distributed-arp-table.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2011-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2011-2020 B.A.T.M.A.N. contributors: * * Antonio Quartulli */ diff --git a/net/batman-adv/distributed-arp-table.h b/net/batman-adv/distributed-arp-table.h index 67c7729add55..2bff2f4a325c 100644 --- a/net/batman-adv/distributed-arp-table.h +++ b/net/batman-adv/distributed-arp-table.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2011-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2011-2020 B.A.T.M.A.N. contributors: * * Antonio Quartulli */ diff --git a/net/batman-adv/fragmentation.c b/net/batman-adv/fragmentation.c index 385fccdcf69d..7cad97644d05 100644 --- a/net/batman-adv/fragmentation.c +++ b/net/batman-adv/fragmentation.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2013-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2013-2020 B.A.T.M.A.N. contributors: * * Martin Hundebøll */ diff --git a/net/batman-adv/fragmentation.h b/net/batman-adv/fragmentation.h index abfe8c6556de..881ef328b6cd 100644 --- a/net/batman-adv/fragmentation.h +++ b/net/batman-adv/fragmentation.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2013-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2013-2020 B.A.T.M.A.N. contributors: * * Martin Hundebøll */ diff --git a/net/batman-adv/gateway_client.c b/net/batman-adv/gateway_client.c index 47df4c678988..e22e49289677 100644 --- a/net/batman-adv/gateway_client.c +++ b/net/batman-adv/gateway_client.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2009-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2009-2020 B.A.T.M.A.N. contributors: * * Marek Lindner */ diff --git a/net/batman-adv/gateway_client.h b/net/batman-adv/gateway_client.h index 0be8e7178ec7..88b5dba84354 100644 --- a/net/batman-adv/gateway_client.h +++ b/net/batman-adv/gateway_client.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2009-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2009-2020 B.A.T.M.A.N. contributors: * * Marek Lindner */ diff --git a/net/batman-adv/gateway_common.c b/net/batman-adv/gateway_common.c index fc55750542e4..16cd9450ceb1 100644 --- a/net/batman-adv/gateway_common.c +++ b/net/batman-adv/gateway_common.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2009-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2009-2020 B.A.T.M.A.N. contributors: * * Marek Lindner */ diff --git a/net/batman-adv/gateway_common.h b/net/batman-adv/gateway_common.h index 211b14b37db8..c3a0c5a7f7e9 100644 --- a/net/batman-adv/gateway_common.h +++ b/net/batman-adv/gateway_common.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2009-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2009-2020 B.A.T.M.A.N. contributors: * * Marek Lindner */ diff --git a/net/batman-adv/hard-interface.c b/net/batman-adv/hard-interface.c index afb52282d5bd..c7e98a40dd33 100644 --- a/net/batman-adv/hard-interface.c +++ b/net/batman-adv/hard-interface.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. 
contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/hard-interface.h b/net/batman-adv/hard-interface.h index bbb8a6f18d6b..bad2e50135e8 100644 --- a/net/batman-adv/hard-interface.h +++ b/net/batman-adv/hard-interface.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/hash.c b/net/batman-adv/hash.c index a9d4e176f4de..68638e0450a6 100644 --- a/net/batman-adv/hash.c +++ b/net/batman-adv/hash.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2006-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2006-2020 B.A.T.M.A.N. contributors: * * Simon Wunderlich, Marek Lindner */ diff --git a/net/batman-adv/hash.h b/net/batman-adv/hash.h index 57877f0b78e0..91ae9f32b580 100644 --- a/net/batman-adv/hash.h +++ b/net/batman-adv/hash.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2006-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2006-2020 B.A.T.M.A.N. contributors: * * Simon Wunderlich, Marek Lindner */ diff --git a/net/batman-adv/icmp_socket.c b/net/batman-adv/icmp_socket.c index 0a70b66e8770..ccb535c77e5d 100644 --- a/net/batman-adv/icmp_socket.c +++ b/net/batman-adv/icmp_socket.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner */ diff --git a/net/batman-adv/icmp_socket.h b/net/batman-adv/icmp_socket.h index 27fafff586df..6abd0f4742ef 100644 --- a/net/batman-adv/icmp_socket.h +++ b/net/batman-adv/icmp_socket.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner */ diff --git a/net/batman-adv/log.c b/net/batman-adv/log.c index 11941cf1adcc..a67b2b091447 100644 --- a/net/batman-adv/log.c +++ b/net/batman-adv/log.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2010-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2010-2020 B.A.T.M.A.N. contributors: * * Marek Lindner */ diff --git a/net/batman-adv/log.h b/net/batman-adv/log.h index 41fd900121c5..f9884dc56cf3 100644 --- a/net/batman-adv/log.h +++ b/net/batman-adv/log.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/main.c b/net/batman-adv/main.c index 4a89177def64..0b840e1e3dfa 100644 --- a/net/batman-adv/main.c +++ b/net/batman-adv/main.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/main.h b/net/batman-adv/main.h index fd8c0728ddc7..692306df7b6f 100644 --- a/net/batman-adv/main.h +++ b/net/batman-adv/main.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. 
contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/multicast.c b/net/batman-adv/multicast.c index f9ec8e7507b6..9ebdc1e864b9 100644 --- a/net/batman-adv/multicast.c +++ b/net/batman-adv/multicast.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2014-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2014-2020 B.A.T.M.A.N. contributors: * * Linus Lüssing */ diff --git a/net/batman-adv/multicast.h b/net/batman-adv/multicast.h index 5d9e2bb29c97..ebf825991ecd 100644 --- a/net/batman-adv/multicast.h +++ b/net/batman-adv/multicast.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2014-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2014-2020 B.A.T.M.A.N. contributors: * * Linus Lüssing */ diff --git a/net/batman-adv/netlink.c b/net/batman-adv/netlink.c index 7e052d6f759b..02ed073f95a9 100644 --- a/net/batman-adv/netlink.c +++ b/net/batman-adv/netlink.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2016-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2016-2020 B.A.T.M.A.N. contributors: * * Matthias Schiffer */ diff --git a/net/batman-adv/netlink.h b/net/batman-adv/netlink.h index ddc674e47dbb..7ee48f916997 100644 --- a/net/batman-adv/netlink.h +++ b/net/batman-adv/netlink.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2016-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2016-2020 B.A.T.M.A.N. contributors: * * Matthias Schiffer */ diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c index 580609389f0f..8f0717c3f7b5 100644 --- a/net/batman-adv/network-coding.c +++ b/net/batman-adv/network-coding.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2012-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2012-2020 B.A.T.M.A.N. contributors: * * Martin Hundebøll, Jeppe Ledet-Pedersen */ diff --git a/net/batman-adv/network-coding.h b/net/batman-adv/network-coding.h index 753fa49723cf..334289084127 100644 --- a/net/batman-adv/network-coding.h +++ b/net/batman-adv/network-coding.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2012-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2012-2020 B.A.T.M.A.N. contributors: * * Martin Hundebøll, Jeppe Ledet-Pedersen */ diff --git a/net/batman-adv/originator.c b/net/batman-adv/originator.c index 38613487fb1b..5b0c2fffc214 100644 --- a/net/batman-adv/originator.c +++ b/net/batman-adv/originator.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2009-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2009-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/originator.h b/net/batman-adv/originator.h index 512a1f99dd75..7bc01c138b3a 100644 --- a/net/batman-adv/originator.h +++ b/net/batman-adv/originator.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/routing.c b/net/batman-adv/routing.c index f0f864820dea..3632bd976c56 100644 --- a/net/batman-adv/routing.c +++ b/net/batman-adv/routing.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. 
contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/routing.h b/net/batman-adv/routing.h index c20feac95107..2ed49db6eff5 100644 --- a/net/batman-adv/routing.h +++ b/net/batman-adv/routing.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/send.c b/net/batman-adv/send.c index 3ce5f7bad369..7f8ade04e08e 100644 --- a/net/batman-adv/send.c +++ b/net/batman-adv/send.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/send.h b/net/batman-adv/send.h index 5fc0fd1e5d08..0d36e15589f6 100644 --- a/net/batman-adv/send.h +++ b/net/batman-adv/send.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c index 832e156c519e..5f05a728f347 100644 --- a/net/batman-adv/soft-interface.c +++ b/net/batman-adv/soft-interface.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/soft-interface.h b/net/batman-adv/soft-interface.h index 29139ad769fe..534e08d6ad91 100644 --- a/net/batman-adv/soft-interface.h +++ b/net/batman-adv/soft-interface.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner */ diff --git a/net/batman-adv/sysfs.c b/net/batman-adv/sysfs.c index e5bbc28ed12c..c45962d8527b 100644 --- a/net/batman-adv/sysfs.c +++ b/net/batman-adv/sysfs.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2010-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2010-2020 B.A.T.M.A.N. contributors: * * Marek Lindner */ diff --git a/net/batman-adv/sysfs.h b/net/batman-adv/sysfs.h index 5e466093dfa5..d987f8b30a98 100644 --- a/net/batman-adv/sysfs.h +++ b/net/batman-adv/sysfs.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2010-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2010-2020 B.A.T.M.A.N. contributors: * * Marek Lindner */ diff --git a/net/batman-adv/tp_meter.c b/net/batman-adv/tp_meter.c index dd6a9a40dbb9..bd2ac570c42c 100644 --- a/net/batman-adv/tp_meter.c +++ b/net/batman-adv/tp_meter.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2012-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2012-2020 B.A.T.M.A.N. contributors: * * Edo Monticelli, Antonio Quartulli */ diff --git a/net/batman-adv/tp_meter.h b/net/batman-adv/tp_meter.h index 78d310da0ad3..140105215aa2 100644 --- a/net/batman-adv/tp_meter.h +++ b/net/batman-adv/tp_meter.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2012-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2012-2020 B.A.T.M.A.N. 
contributors: * * Edo Monticelli, Antonio Quartulli */ diff --git a/net/batman-adv/trace.c b/net/batman-adv/trace.c index 3cedd2c36528..3444d9e4e90d 100644 --- a/net/batman-adv/trace.c +++ b/net/batman-adv/trace.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2010-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2010-2020 B.A.T.M.A.N. contributors: * * Sven Eckelmann */ diff --git a/net/batman-adv/trace.h b/net/batman-adv/trace.h index d8f764521c0b..f631b1e01b89 100644 --- a/net/batman-adv/trace.h +++ b/net/batman-adv/trace.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2010-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2010-2020 B.A.T.M.A.N. contributors: * * Sven Eckelmann */ diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c index 8a482c5ec67b..852932838ddc 100644 --- a/net/batman-adv/translation-table.c +++ b/net/batman-adv/translation-table.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich, Antonio Quartulli */ diff --git a/net/batman-adv/translation-table.h b/net/batman-adv/translation-table.h index 4a98860d7f0e..b24d35b9226a 100644 --- a/net/batman-adv/translation-table.h +++ b/net/batman-adv/translation-table.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich, Antonio Quartulli */ diff --git a/net/batman-adv/tvlv.c b/net/batman-adv/tvlv.c index aae63f0d21eb..0963a43ad996 100644 --- a/net/batman-adv/tvlv.c +++ b/net/batman-adv/tvlv.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/tvlv.h b/net/batman-adv/tvlv.h index 36985000a0a8..d509d00c7a23 100644 --- a/net/batman-adv/tvlv.h +++ b/net/batman-adv/tvlv.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich */ diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h index bdf9827b5f63..4a17a66cc572 100644 --- a/net/batman-adv/types.h +++ b/net/batman-adv/types.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2007-2019 B.A.T.M.A.N. contributors: +/* Copyright (C) 2007-2020 B.A.T.M.A.N. contributors: * * Marek Lindner, Simon Wunderlich */ -- cgit v1.2.3-71-gd317 From dea53bb80e07b9e1641b865493908c20cb8df2ac Mon Sep 17 00:00:00 2001 From: David Ahern Date: Mon, 30 Dec 2019 14:14:28 -0800 Subject: tcp: Add l3index to tcp_md5sig_key and md5 functions Add l3index to tcp_md5sig_key to represent the L3 domain of a key, and add l3index to tcp_md5_do_add and tcp_md5_do_del to fill in the key. With the key now based on an l3index, add the new parameter to the lookup functions and consider the l3index when looking for a match. The l3index comes from the skb when processing ingress packets leveraging the helpers created for socket lookups, tcp_v4_sdif and inet_iif (and the v6 variants). When the sdif index is set it means the packet ingressed a device that is part of an L3 domain and inet_iif points to the VRF device. For egress, the L3 domain is determined from the socket binding and sk_bound_dev_if. 
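Condensed into code, the derivation just described looks as follows. This is a sketch of the hunks below, not new kernel API: the patch open-codes this logic at each call site, so the two helper names here are illustrative only.

    #include <net/tcp.h>
    #include <net/route.h>
    #include <net/l3mdev.h>

    /* Ingress: a non-zero sdif means the packet came in through a device
     * enslaved to an L3 domain, and inet_iif() then points at the VRF
     * (l3mdev) device; otherwise the key lookup is unscoped.
     */
    static int tcp_v4_ingress_l3index(const struct sk_buff *skb)
    {
            return tcp_v4_sdif(skb) ? inet_iif(skb) : 0;
    }

    /* Egress: derive the L3 domain from the socket's device binding;
     * l3mdev_master_ifindex_by_index() returns 0 when sk_bound_dev_if
     * is absent or not enslaved to an L3 master device.
     */
    static int tcp_v4_egress_l3index(const struct sock *sk)
    {
            return l3mdev_master_ifindex_by_index(sock_net(sk),
                                                  sk->sk_bound_dev_if);
    }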
Signed-off-by: David Ahern Signed-off-by: David S. Miller --- include/net/tcp.h | 24 +++++++++--------- net/ipv4/tcp_ipv4.c | 68 +++++++++++++++++++++++++++++++++++--------------- net/ipv6/tcp_ipv6.c | 72 +++++++++++++++++++++++++++++++++++++++-------------- 3 files changed, 113 insertions(+), 51 deletions(-) (limited to 'include') diff --git a/include/net/tcp.h b/include/net/tcp.h index e460ea7f767b..7df37e2fddca 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1532,8 +1532,9 @@ struct tcp_md5sig_key { struct hlist_node node; u8 keylen; u8 family; /* AF_INET or AF_INET6 */ - union tcp_md5_addr addr; u8 prefixlen; + union tcp_md5_addr addr; + int l3index; /* set if key added with L3 scope */ u8 key[TCP_MD5SIG_MAXKEYLEN]; struct rcu_head rcu; }; @@ -1577,34 +1578,33 @@ struct tcp_md5sig_pool { int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key, const struct sock *sk, const struct sk_buff *skb); int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, - int family, u8 prefixlen, const u8 *newkey, u8 newkeylen, - gfp_t gfp); + int family, u8 prefixlen, int l3index, + const u8 *newkey, u8 newkeylen, gfp_t gfp); int tcp_md5_do_del(struct sock *sk, const union tcp_md5_addr *addr, - int family, u8 prefixlen); + int family, u8 prefixlen, int l3index); struct tcp_md5sig_key *tcp_v4_md5_lookup(const struct sock *sk, const struct sock *addr_sk); #ifdef CONFIG_TCP_MD5SIG #include <linux/jump_label.h> extern struct static_key_false tcp_md5_needed; -struct tcp_md5sig_key *__tcp_md5_do_lookup(const struct sock *sk, +struct tcp_md5sig_key *__tcp_md5_do_lookup(const struct sock *sk, int l3index, const union tcp_md5_addr *addr, int family); static inline struct tcp_md5sig_key * -tcp_md5_do_lookup(const struct sock *sk, - const union tcp_md5_addr *addr, - int family) +tcp_md5_do_lookup(const struct sock *sk, int l3index, + const union tcp_md5_addr *addr, int family) { if (!static_branch_unlikely(&tcp_md5_needed)) return NULL; - return __tcp_md5_do_lookup(sk, addr, family); + return __tcp_md5_do_lookup(sk, l3index, addr, family); } #define tcp_twsk_md5_key(twsk) ((twsk)->tw_md5_key) #else -static inline struct tcp_md5sig_key *tcp_md5_do_lookup(const struct sock *sk, - const union tcp_md5_addr *addr, - int family) +static inline struct tcp_md5sig_key * +tcp_md5_do_lookup(const struct sock *sk, int l3index, + const union tcp_md5_addr *addr, int family) { return NULL; } diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 93f220793add..30b3f19d6301 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -702,13 +702,19 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb) hash_location = tcp_parse_md5sig_option(th); if (sk && sk_fullsock(sk)) { const union tcp_md5_addr *addr; + int l3index; + /* sdif set, means packet ingressed via a device + * in an L3 domain and inet_iif is set to it. + */ + l3index = tcp_v4_sdif(skb) ? inet_iif(skb) : 0; addr = (union tcp_md5_addr *)&ip_hdr(skb)->saddr; - key = tcp_md5_do_lookup(sk, addr, AF_INET); + key = tcp_md5_do_lookup(sk, l3index, addr, AF_INET); } else if (hash_location) { const union tcp_md5_addr *addr; int sdif = tcp_v4_sdif(skb); int dif = inet_iif(skb); + int l3index; /* * active side is lost. Try to find listening socket through @@ -725,8 +731,12 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb) if (!sk1) goto out; + /* sdif set, means packet ingressed via a device + * in an L3 domain and dif is set to it. + */ + l3index = sdif ? 
dif : 0; addr = (union tcp_md5_addr *)&ip_hdr(skb)->saddr; - key = tcp_md5_do_lookup(sk1, addr, AF_INET); + key = tcp_md5_do_lookup(sk1, l3index, addr, AF_INET); if (!key) goto out; @@ -911,6 +921,7 @@ static void tcp_v4_reqsk_send_ack(const struct sock *sk, struct sk_buff *skb, struct request_sock *req) { const union tcp_md5_addr *addr; + int l3index; /* sk->sk_state == TCP_LISTEN -> for regular TCP_SYN_RECV * sk->sk_state == TCP_SYN_RECV -> for Fast Open. @@ -924,13 +935,14 @@ static void tcp_v4_reqsk_send_ack(const struct sock *sk, struct sk_buff *skb, * Rcv.Wind.Shift bits: */ addr = (union tcp_md5_addr *)&ip_hdr(skb)->saddr; + l3index = tcp_v4_sdif(skb) ? inet_iif(skb) : 0; tcp_v4_send_ack(sk, skb, seq, tcp_rsk(req)->rcv_nxt, req->rsk_rcv_wnd >> inet_rsk(req)->rcv_wscale, tcp_time_stamp_raw() + tcp_rsk(req)->ts_off, req->ts_recent, 0, - tcp_md5_do_lookup(sk, addr, AF_INET), + tcp_md5_do_lookup(sk, l3index, addr, AF_INET), inet_rsk(req)->no_srccheck ? IP_REPLY_ARG_NOSRCCHECK : 0, ip_hdr(skb)->tos); } @@ -990,7 +1002,7 @@ DEFINE_STATIC_KEY_FALSE(tcp_md5_needed); EXPORT_SYMBOL(tcp_md5_needed); /* Find the Key structure for an address. */ -struct tcp_md5sig_key *__tcp_md5_do_lookup(const struct sock *sk, +struct tcp_md5sig_key *__tcp_md5_do_lookup(const struct sock *sk, int l3index, const union tcp_md5_addr *addr, int family) { @@ -1010,7 +1022,8 @@ struct tcp_md5sig_key *__tcp_md5_do_lookup(const struct sock *sk, hlist_for_each_entry_rcu(key, &md5sig->head, node) { if (key->family != family) continue; - + if (key->l3index && key->l3index != l3index) + continue; if (family == AF_INET) { mask = inet_make_mask(key->prefixlen); match = (key->addr.a4.s_addr & mask) == @@ -1034,7 +1047,8 @@ EXPORT_SYMBOL(__tcp_md5_do_lookup); static struct tcp_md5sig_key *tcp_md5_do_lookup_exact(const struct sock *sk, const union tcp_md5_addr *addr, - int family, u8 prefixlen) + int family, u8 prefixlen, + int l3index) { const struct tcp_sock *tp = tcp_sk(sk); struct tcp_md5sig_key *key; @@ -1053,6 +1067,8 @@ static struct tcp_md5sig_key *tcp_md5_do_lookup_exact(const struct sock *sk, hlist_for_each_entry_rcu(key, &md5sig->head, node) { if (key->family != family) continue; + if (key->l3index && key->l3index != l3index) + continue; if (!memcmp(&key->addr, addr, size) && key->prefixlen == prefixlen) return key; @@ -1064,23 +1080,26 @@ struct tcp_md5sig_key *tcp_v4_md5_lookup(const struct sock *sk, const struct sock *addr_sk) { const union tcp_md5_addr *addr; + int l3index; + l3index = l3mdev_master_ifindex_by_index(sock_net(sk), + addr_sk->sk_bound_dev_if); addr = (const union tcp_md5_addr *)&addr_sk->sk_daddr; - return tcp_md5_do_lookup(sk, addr, AF_INET); + return tcp_md5_do_lookup(sk, l3index, addr, AF_INET); } EXPORT_SYMBOL(tcp_v4_md5_lookup); /* This can be called on a newly created socket, from other files */ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, - int family, u8 prefixlen, const u8 *newkey, u8 newkeylen, - gfp_t gfp) + int family, u8 prefixlen, int l3index, + const u8 *newkey, u8 newkeylen, gfp_t gfp) { /* Add Key to the list */ struct tcp_md5sig_key *key; struct tcp_sock *tp = tcp_sk(sk); struct tcp_md5sig_info *md5sig; - key = tcp_md5_do_lookup_exact(sk, addr, family, prefixlen); + key = tcp_md5_do_lookup_exact(sk, addr, family, prefixlen, l3index); if (key) { /* Pre-existing entry - just update that one. 
*/ memcpy(key->key, newkey, newkeylen); @@ -1112,6 +1131,7 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, key->keylen = newkeylen; key->family = family; key->prefixlen = prefixlen; + key->l3index = l3index; memcpy(&key->addr, addr, (family == AF_INET6) ? sizeof(struct in6_addr) : sizeof(struct in_addr)); @@ -1121,11 +1141,11 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, EXPORT_SYMBOL(tcp_md5_do_add); int tcp_md5_do_del(struct sock *sk, const union tcp_md5_addr *addr, int family, - u8 prefixlen) + u8 prefixlen, int l3index) { struct tcp_md5sig_key *key; - key = tcp_md5_do_lookup_exact(sk, addr, family, prefixlen); + key = tcp_md5_do_lookup_exact(sk, addr, family, prefixlen, l3index); if (!key) return -ENOENT; hlist_del_rcu(&key->node); @@ -1158,6 +1178,7 @@ static int tcp_v4_parse_md5_keys(struct sock *sk, int optname, struct sockaddr_in *sin = (struct sockaddr_in *)&cmd.tcpm_addr; const union tcp_md5_addr *addr; u8 prefixlen = 32; + int l3index = 0; if (optlen < sizeof(cmd)) return -EINVAL; @@ -1178,12 +1199,12 @@ static int tcp_v4_parse_md5_keys(struct sock *sk, int optname, addr = (union tcp_md5_addr *)&sin->sin_addr.s_addr; if (!cmd.tcpm_keylen) - return tcp_md5_do_del(sk, addr, AF_INET, prefixlen); + return tcp_md5_do_del(sk, addr, AF_INET, prefixlen, l3index); if (cmd.tcpm_keylen > TCP_MD5SIG_MAXKEYLEN) return -EINVAL; - return tcp_md5_do_add(sk, addr, AF_INET, prefixlen, + return tcp_md5_do_add(sk, addr, AF_INET, prefixlen, l3index, cmd.tcpm_key, cmd.tcpm_keylen, GFP_KERNEL); } @@ -1311,11 +1332,16 @@ static bool tcp_v4_inbound_md5_hash(const struct sock *sk, const struct iphdr *iph = ip_hdr(skb); const struct tcphdr *th = tcp_hdr(skb); const union tcp_md5_addr *addr; - int genhash; unsigned char newhash[16]; + int genhash, l3index; + + /* sdif set, means packet ingressed via a device + * in an L3 domain and dif is set to the l3mdev + */ + l3index = sdif ? dif : 0; addr = (union tcp_md5_addr *)&iph->saddr; - hash_expected = tcp_md5_do_lookup(sk, addr, AF_INET); + hash_expected = tcp_md5_do_lookup(sk, l3index, addr, AF_INET); hash_location = tcp_parse_md5sig_option(th); /* We've parsed the options - do we have a hash? */ @@ -1341,11 +1367,11 @@ static bool tcp_v4_inbound_md5_hash(const struct sock *sk, if (genhash || memcmp(hash_location, newhash, 16) != 0) { NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMD5FAILURE); - net_info_ratelimited("MD5 Hash failed for (%pI4, %d)->(%pI4, %d)%s\n", + net_info_ratelimited("MD5 Hash failed for (%pI4, %d)->(%pI4, %d)%s L3 index %d\n", &iph->saddr, ntohs(th->source), &iph->daddr, ntohs(th->dest), genhash ? 
" tcp_v4_calc_md5_hash failed" - : ""); + : "", l3index); return true; } return false; @@ -1431,6 +1457,7 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb, #ifdef CONFIG_TCP_MD5SIG const union tcp_md5_addr *addr; struct tcp_md5sig_key *key; + int l3index; #endif struct ip_options_rcu *inet_opt; @@ -1478,9 +1505,10 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb, tcp_initialize_rcv_mss(newsk); #ifdef CONFIG_TCP_MD5SIG + l3index = l3mdev_master_ifindex_by_index(sock_net(sk), ireq->ir_iif); /* Copy over the MD5 key from the original socket */ addr = (union tcp_md5_addr *)&newinet->inet_daddr; - key = tcp_md5_do_lookup(sk, addr, AF_INET); + key = tcp_md5_do_lookup(sk, l3index, addr, AF_INET); if (key) { /* * We're using one, so create a matching key @@ -1488,7 +1516,7 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb, * memory, then we end up not copying the key * across. Shucks. */ - tcp_md5_do_add(newsk, addr, AF_INET, 32, + tcp_md5_do_add(newsk, addr, AF_INET, 32, l3index, key->key, key->keylen, GFP_ATOMIC); sk_nocaps_add(newsk, NETIF_F_GSO_MASK); } diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index e94f23b61b62..71ad7d89be0f 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -81,7 +81,8 @@ static const struct tcp_sock_af_ops tcp_sock_ipv6_specific; static const struct tcp_sock_af_ops tcp_sock_ipv6_mapped_specific; #else static struct tcp_md5sig_key *tcp_v6_md5_do_lookup(const struct sock *sk, - const struct in6_addr *addr) + const struct in6_addr *addr, + int l3index) { return NULL; } @@ -532,15 +533,22 @@ static void tcp_v6_reqsk_destructor(struct request_sock *req) #ifdef CONFIG_TCP_MD5SIG static struct tcp_md5sig_key *tcp_v6_md5_do_lookup(const struct sock *sk, - const struct in6_addr *addr) + const struct in6_addr *addr, + int l3index) { - return tcp_md5_do_lookup(sk, (union tcp_md5_addr *)addr, AF_INET6); + return tcp_md5_do_lookup(sk, l3index, + (union tcp_md5_addr *)addr, AF_INET6); } static struct tcp_md5sig_key *tcp_v6_md5_lookup(const struct sock *sk, const struct sock *addr_sk) { - return tcp_v6_md5_do_lookup(sk, &addr_sk->sk_v6_daddr); + int l3index; + + l3index = l3mdev_master_ifindex_by_index(sock_net(sk), + addr_sk->sk_bound_dev_if); + return tcp_v6_md5_do_lookup(sk, &addr_sk->sk_v6_daddr, + l3index); } static int tcp_v6_parse_md5_keys(struct sock *sk, int optname, @@ -548,6 +556,7 @@ static int tcp_v6_parse_md5_keys(struct sock *sk, int optname, { struct tcp_md5sig cmd; struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)&cmd.tcpm_addr; + int l3index = 0; u8 prefixlen; if (optlen < sizeof(cmd)) @@ -572,9 +581,9 @@ static int tcp_v6_parse_md5_keys(struct sock *sk, int optname, if (!cmd.tcpm_keylen) { if (ipv6_addr_v4mapped(&sin6->sin6_addr)) return tcp_md5_do_del(sk, (union tcp_md5_addr *)&sin6->sin6_addr.s6_addr32[3], - AF_INET, prefixlen); + AF_INET, prefixlen, l3index); return tcp_md5_do_del(sk, (union tcp_md5_addr *)&sin6->sin6_addr, - AF_INET6, prefixlen); + AF_INET6, prefixlen, l3index); } if (cmd.tcpm_keylen > TCP_MD5SIG_MAXKEYLEN) @@ -582,12 +591,13 @@ static int tcp_v6_parse_md5_keys(struct sock *sk, int optname, if (ipv6_addr_v4mapped(&sin6->sin6_addr)) return tcp_md5_do_add(sk, (union tcp_md5_addr *)&sin6->sin6_addr.s6_addr32[3], - AF_INET, prefixlen, cmd.tcpm_key, - cmd.tcpm_keylen, GFP_KERNEL); + AF_INET, prefixlen, l3index, + cmd.tcpm_key, cmd.tcpm_keylen, + GFP_KERNEL); return tcp_md5_do_add(sk, (union tcp_md5_addr *)&sin6->sin6_addr, - AF_INET6, 
prefixlen, cmd.tcpm_key, - cmd.tcpm_keylen, GFP_KERNEL); + AF_INET6, prefixlen, l3index, + cmd.tcpm_key, cmd.tcpm_keylen, GFP_KERNEL); } static int tcp_v6_md5_hash_headers(struct tcp_md5sig_pool *hp, @@ -706,10 +716,15 @@ static bool tcp_v6_inbound_md5_hash(const struct sock *sk, struct tcp_md5sig_key *hash_expected; const struct ipv6hdr *ip6h = ipv6_hdr(skb); const struct tcphdr *th = tcp_hdr(skb); - int genhash; + int genhash, l3index; u8 newhash[16]; - hash_expected = tcp_v6_md5_do_lookup(sk, &ip6h->saddr); + /* sdif set, means packet ingressed via a device + * in an L3 domain and dif is set to the l3mdev + */ + l3index = sdif ? dif : 0; + + hash_expected = tcp_v6_md5_do_lookup(sk, &ip6h->saddr, l3index); hash_location = tcp_parse_md5sig_option(th); /* We've parsed the options - do we have a hash? */ @@ -733,10 +748,10 @@ static bool tcp_v6_inbound_md5_hash(const struct sock *sk, if (genhash || memcmp(hash_location, newhash, 16) != 0) { NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMD5FAILURE); - net_info_ratelimited("MD5 Hash %s for [%pI6c]:%u->[%pI6c]:%u\n", + net_info_ratelimited("MD5 Hash %s for [%pI6c]:%u->[%pI6c]:%u L3 index %d\n", genhash ? "failed" : "mismatch", &ip6h->saddr, ntohs(th->source), - &ip6h->daddr, ntohs(th->dest)); + &ip6h->daddr, ntohs(th->dest), l3index); return true; } #endif @@ -952,10 +967,17 @@ static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb) rcu_read_lock(); hash_location = tcp_parse_md5sig_option(th); if (sk && sk_fullsock(sk)) { - key = tcp_v6_md5_do_lookup(sk, &ipv6h->saddr); + int l3index; + + /* sdif set, means packet ingressed via a device + * in an L3 domain and inet_iif is set to it. + */ + l3index = tcp_v6_sdif(skb) ? tcp_v6_iif_l3_slave(skb) : 0; + key = tcp_v6_md5_do_lookup(sk, &ipv6h->saddr, l3index); } else if (hash_location) { int dif = tcp_v6_iif_l3_slave(skb); int sdif = tcp_v6_sdif(skb); + int l3index; /* * active side is lost. Try to find listening socket through @@ -972,7 +994,12 @@ static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb) if (!sk1) goto out; - key = tcp_v6_md5_do_lookup(sk1, &ipv6h->saddr); + /* sdif set, means packet ingressed via a device + * in an L3 domain and dif is set to it. + */ + l3index = tcp_v6_sdif(skb) ? dif : 0; + + key = tcp_v6_md5_do_lookup(sk1, &ipv6h->saddr, l3index); if (!key) goto out; @@ -1042,6 +1069,10 @@ static void tcp_v6_timewait_ack(struct sock *sk, struct sk_buff *skb) static void tcp_v6_reqsk_send_ack(const struct sock *sk, struct sk_buff *skb, struct request_sock *req) { + int l3index; + + l3index = tcp_v6_sdif(skb) ? tcp_v6_iif_l3_slave(skb) : 0; + /* sk->sk_state == TCP_LISTEN -> for regular TCP_SYN_RECV * sk->sk_state == TCP_SYN_RECV -> for Fast Open. 
*/ @@ -1056,7 +1087,7 @@ static void tcp_v6_reqsk_send_ack(const struct sock *sk, struct sk_buff *skb, req->rsk_rcv_wnd >> inet_rsk(req)->rcv_wscale, tcp_time_stamp_raw() + tcp_rsk(req)->ts_off, req->ts_recent, sk->sk_bound_dev_if, - tcp_v6_md5_do_lookup(sk, &ipv6_hdr(skb)->saddr), + tcp_v6_md5_do_lookup(sk, &ipv6_hdr(skb)->saddr, l3index), 0, 0, sk->sk_priority); } @@ -1128,6 +1159,7 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff * struct sock *newsk; #ifdef CONFIG_TCP_MD5SIG struct tcp_md5sig_key *key; + int l3index; #endif struct flowi6 fl6; @@ -1271,8 +1303,10 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff * newinet->inet_rcv_saddr = LOOPBACK4_IPV6; #ifdef CONFIG_TCP_MD5SIG + l3index = l3mdev_master_ifindex_by_index(sock_net(sk), ireq->ir_iif); + /* Copy over the MD5 key from the original socket */ - key = tcp_v6_md5_do_lookup(sk, &newsk->sk_v6_daddr); + key = tcp_v6_md5_do_lookup(sk, &newsk->sk_v6_daddr, l3index); if (key) { /* We're using one, so create a matching key * on the newsk structure. If we fail to get @@ -1280,7 +1314,7 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff * * across. Shucks. */ tcp_md5_do_add(newsk, (union tcp_md5_addr *)&newsk->sk_v6_daddr, - AF_INET6, 128, key->key, key->keylen, + AF_INET6, 128, l3index, key->key, key->keylen, sk_gfp_mask(sk, GFP_ATOMIC)); } #endif -- cgit v1.2.3-71-gd317 From 6b102db50cdde3ba2f78631ed21222edf3a5fb51 Mon Sep 17 00:00:00 2001 From: David Ahern Date: Mon, 30 Dec 2019 14:14:29 -0800 Subject: net: Add device index to tcp_md5sig Add support for userspace to specify a device index to limit the scope of an entry via the TCP_MD5SIG_EXT setsockopt. The existing __tcpm_pad is renamed to tcpm_ifindex and the new field is only checked if the new TCP_MD5SIG_FLAG_IFINDEX is set in tcpm_flags. For now, the device index must point to an L3 master device (e.g., VRF). The API and error handling are set up to allow the constraint to be relaxed in the future to any device index. Signed-off-by: David Ahern Signed-off-by: David S. Miller
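From userspace, the new scoping is exercised roughly as follows. This is a minimal sketch: the VRF name "vrf-blue" and the key bytes are placeholders, and a toolchain whose <linux/tcp.h> already carries the updated struct tcp_md5sig layout is assumed.

    #include <linux/tcp.h>
    #include <net/if.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Bind a TCP-MD5 key to peers reached through one VRF only */
    static int md5_key_on_vrf(int fd, const struct sockaddr_in *peer)
    {
            struct tcp_md5sig md5;

            memset(&md5, 0, sizeof(md5));
            memcpy(&md5.tcpm_addr, peer, sizeof(*peer));
            md5.tcpm_flags = TCP_MD5SIG_FLAG_IFINDEX;
            md5.tcpm_ifindex = if_nametoindex("vrf-blue");
            md5.tcpm_keylen = 6;
            memcpy(md5.tcpm_key, "secret", 6);

            /* The kernel rejects this with EINVAL unless tcpm_ifindex
             * names an L3 master device, per the error handling below.
             */
            return setsockopt(fd, IPPROTO_TCP, TCP_MD5SIG_EXT,
                              &md5, sizeof(md5));
    }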
--- include/uapi/linux/tcp.h | 5 +++-- net/ipv4/tcp_ipv4.c | 18 ++++++++++++++++++ net/ipv6/tcp_ipv6.c | 20 +++++++++++++++++++- 3 files changed, 40 insertions(+), 3 deletions(-) (limited to 'include') diff --git a/include/uapi/linux/tcp.h b/include/uapi/linux/tcp.h index 74af1f759cee..d87184e673ca 100644 --- a/include/uapi/linux/tcp.h +++ b/include/uapi/linux/tcp.h @@ -317,14 +317,15 @@ enum { #define TCP_MD5SIG_MAXKEYLEN 80 /* tcp_md5sig extension flags for TCP_MD5SIG_EXT */ -#define TCP_MD5SIG_FLAG_PREFIX 1 /* address prefix length */ +#define TCP_MD5SIG_FLAG_PREFIX 0x1 /* address prefix length */ +#define TCP_MD5SIG_FLAG_IFINDEX 0x2 /* ifindex set */ struct tcp_md5sig { struct __kernel_sockaddr_storage tcpm_addr; /* address associated */ __u8 tcpm_flags; /* extension flags */ __u8 tcpm_prefixlen; /* address prefix */ __u16 tcpm_keylen; /* key length */ - __u32 __tcpm_pad; /* zero */ + int tcpm_ifindex; /* device index for scope */ __u8 tcpm_key[TCP_MD5SIG_MAXKEYLEN]; /* key (binary) */ }; diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 30b3f19d6301..4adac9c75343 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -1196,6 +1196,24 @@ static int tcp_v4_parse_md5_keys(struct sock *sk, int optname, return -EINVAL; } + if (optname == TCP_MD5SIG_EXT && + cmd.tcpm_flags & TCP_MD5SIG_FLAG_IFINDEX) { + struct net_device *dev; + + rcu_read_lock(); + dev = dev_get_by_index_rcu(sock_net(sk), cmd.tcpm_ifindex); + if (dev && netif_is_l3_master(dev)) + l3index = dev->ifindex; + + rcu_read_unlock(); + + /* ok to reference set/not set outside of rcu; + * right now device MUST be an L3 master + */ + if (!dev || !l3index) + return -EINVAL; + } + addr = (union tcp_md5_addr *)&sin->sin_addr.s_addr; if (!cmd.tcpm_keylen) diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index 71ad7d89be0f..95e4e1e95db2 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -578,10 +578,28 @@ static int tcp_v6_parse_md5_keys(struct sock *sk, int optname, prefixlen = ipv6_addr_v4mapped(&sin6->sin6_addr) ? 32 : 128; } + if (optname == TCP_MD5SIG_EXT && + cmd.tcpm_flags & TCP_MD5SIG_FLAG_IFINDEX) { + struct net_device *dev; + + rcu_read_lock(); + dev = dev_get_by_index_rcu(sock_net(sk), cmd.tcpm_ifindex); + if (dev && netif_is_l3_master(dev)) + l3index = dev->ifindex; + rcu_read_unlock(); + + /* ok to reference set/not set outside of rcu; + * right now device MUST be an L3 master + */ + if (!dev || !l3index) + return -EINVAL; + } + if (!cmd.tcpm_keylen) { if (ipv6_addr_v4mapped(&sin6->sin6_addr)) return tcp_md5_do_del(sk, (union tcp_md5_addr *)&sin6->sin6_addr.s6_addr32[3], - AF_INET, prefixlen, l3index); + AF_INET, prefixlen, + l3index); return tcp_md5_do_del(sk, (union tcp_md5_addr *)&sin6->sin6_addr, AF_INET6, prefixlen, l3index); } -- cgit v1.2.3-71-gd317 From b39c78b2aa09cae05f3a48c11f67b3add0d604de Mon Sep 17 00:00:00 2001 From: Li RongQing Date: Fri, 3 Jan 2020 11:51:00 +0800 Subject: net: remove the check argument from __skb_gro_checksum_convert The argument is always ignored, so remove it. Signed-off-by: Li RongQing Reviewed-by: Simon Horman Signed-off-by: David S. 
Miller --- include/linux/netdevice.h | 6 +++--- net/ipv4/gre_offload.c | 2 +- net/ipv4/udp_offload.c | 2 +- net/ipv6/udp_offload.c | 2 +- 4 files changed, 6 insertions(+), 6 deletions(-) (limited to 'include') diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 2fd19fb8826d..2741aa35bec6 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -2826,16 +2826,16 @@ static inline bool __skb_gro_checksum_convert_check(struct sk_buff *skb) } static inline void __skb_gro_checksum_convert(struct sk_buff *skb, - __sum16 check, __wsum pseudo) + __wsum pseudo) { NAPI_GRO_CB(skb)->csum = ~pseudo; NAPI_GRO_CB(skb)->csum_valid = 1; } -#define skb_gro_checksum_try_convert(skb, proto, check, compute_pseudo) \ +#define skb_gro_checksum_try_convert(skb, proto, compute_pseudo) \ do { \ if (__skb_gro_checksum_convert_check(skb)) \ - __skb_gro_checksum_convert(skb, check, \ + __skb_gro_checksum_convert(skb, \ compute_pseudo(skb, proto)); \ } while (0) diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c index 4de7e962d3da..2e6d1b7a7bc9 100644 --- a/net/ipv4/gre_offload.c +++ b/net/ipv4/gre_offload.c @@ -174,7 +174,7 @@ static struct sk_buff *gre_gro_receive(struct list_head *head, if (skb_gro_checksum_simple_validate(skb)) goto out_unlock; - skb_gro_checksum_try_convert(skb, IPPROTO_GRE, 0, + skb_gro_checksum_try_convert(skb, IPPROTO_GRE, null_compute_pseudo); } diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c index a3908e55ed89..b25e42100ceb 100644 --- a/net/ipv4/udp_offload.c +++ b/net/ipv4/udp_offload.c @@ -480,7 +480,7 @@ struct sk_buff *udp4_gro_receive(struct list_head *head, struct sk_buff *skb) inet_gro_compute_pseudo)) goto flush; else if (uh->check) - skb_gro_checksum_try_convert(skb, IPPROTO_UDP, uh->check, + skb_gro_checksum_try_convert(skb, IPPROTO_UDP, inet_gro_compute_pseudo); skip: NAPI_GRO_CB(skb)->is_ipv6 = 0; diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c index 64b8f05d6735..f0d5fc27d0b5 100644 --- a/net/ipv6/udp_offload.c +++ b/net/ipv6/udp_offload.c @@ -127,7 +127,7 @@ struct sk_buff *udp6_gro_receive(struct list_head *head, struct sk_buff *skb) ip6_gro_compute_pseudo)) goto flush; else if (uh->check) - skb_gro_checksum_try_convert(skb, IPPROTO_UDP, uh->check, + skb_gro_checksum_try_convert(skb, IPPROTO_UDP, ip6_gro_compute_pseudo); skip: -- cgit v1.2.3-71-gd317 From 36278a5d4d354e5d5610aa728831db9e03cc3d8d Mon Sep 17 00:00:00 2001 From: Alain Michaud Date: Wed, 11 Dec 2019 01:54:43 +0000 Subject: Bluetooth: Adding a bt_dev_warn_ratelimited macro. The macro will be used to display rate limited warning messages in the log. Signed-off-by: Alain Michaud Signed-off-by: Marcel Holtmann --- include/net/bluetooth/bluetooth.h | 4 ++++ net/bluetooth/lib.c | 16 ++++++++++++++++ 2 files changed, 20 insertions(+) (limited to 'include') diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h index fabee6db0abb..bd2675266859 100644 --- a/include/net/bluetooth/bluetooth.h +++ b/include/net/bluetooth/bluetooth.h @@ -129,6 +129,8 @@ void bt_warn(const char *fmt, ...); __printf(1, 2) void bt_err(const char *fmt, ...); __printf(1, 2) +void bt_warn_ratelimited(const char *fmt, ...); +__printf(1, 2) void bt_err_ratelimited(const char *fmt, ...); #define BT_INFO(fmt, ...) bt_info(fmt "\n", ##__VA_ARGS__) @@ -147,6 +149,8 @@ void bt_err_ratelimited(const char *fmt, ...); #define bt_dev_dbg(hdev, fmt, ...) \ BT_DBG("%s: " fmt, (hdev)->name, ##__VA_ARGS__) +#define bt_dev_warn_ratelimited(hdev, fmt, ...) 
\ + bt_warn_ratelimited("%s: " fmt, (hdev)->name, ##__VA_ARGS__) #define bt_dev_err_ratelimited(hdev, fmt, ...) \ BT_ERR_RATELIMITED("%s: " fmt, (hdev)->name, ##__VA_ARGS__) diff --git a/net/bluetooth/lib.c b/net/bluetooth/lib.c index 63e65d9b4b24..c09e0a3a0ed9 100644 --- a/net/bluetooth/lib.c +++ b/net/bluetooth/lib.c @@ -183,6 +183,22 @@ void bt_err(const char *format, ...) } EXPORT_SYMBOL(bt_err); +void bt_warn_ratelimited(const char *format, ...) +{ + struct va_format vaf; + va_list args; + + va_start(args, format); + + vaf.fmt = format; + vaf.va = &args; + + pr_warn_ratelimited("%pV", &vaf); + + va_end(args); +} +EXPORT_SYMBOL(bt_warn_ratelimited); + void bt_err_ratelimited(const char *format, ...) { struct va_format vaf; -- cgit v1.2.3-71-gd317 From 657cc646475b721f5c5bab82e7fd43302c7c8358 Mon Sep 17 00:00:00 2001 From: Marcel Holtmann Date: Wed, 11 Dec 2019 11:34:36 +0100 Subject: Bluetooth: Remove usage of BT_ERR_RATELIMITED macro The macro is really not needed and can be replaced with either usage of bt_err_ratelimited or bt_dev_err_ratelimited. Signed-off-by: Marcel Holtmann Signed-off-by: Johan Hedberg --- include/net/bluetooth/bluetooth.h | 4 +--- net/bluetooth/hci_event.c | 14 ++++++-------- 2 files changed, 7 insertions(+), 11 deletions(-) (limited to 'include') diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h index bd2675266859..e42bb8e03c09 100644 --- a/include/net/bluetooth/bluetooth.h +++ b/include/net/bluetooth/bluetooth.h @@ -138,8 +138,6 @@ void bt_err_ratelimited(const char *fmt, ...); #define BT_ERR(fmt, ...) bt_err(fmt "\n", ##__VA_ARGS__) #define BT_DBG(fmt, ...) pr_debug(fmt "\n", ##__VA_ARGS__) -#define BT_ERR_RATELIMITED(fmt, ...) bt_err_ratelimited(fmt "\n", ##__VA_ARGS__) - #define bt_dev_info(hdev, fmt, ...) \ BT_INFO("%s: " fmt, (hdev)->name, ##__VA_ARGS__) #define bt_dev_warn(hdev, fmt, ...) \ @@ -152,7 +150,7 @@ void bt_err_ratelimited(const char *fmt, ...); #define bt_dev_warn_ratelimited(hdev, fmt, ...) \ bt_warn_ratelimited("%s: " fmt, (hdev)->name, ##__VA_ARGS__) #define bt_dev_err_ratelimited(hdev, fmt, ...) 
\ - BT_ERR_RATELIMITED("%s: " fmt, (hdev)->name, ##__VA_ARGS__) + bt_err_ratelimited("%s: " fmt, (hdev)->name, ##__VA_ARGS__) /* Connection and socket states */ enum { diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c index c1d3a303d97f..1941f120a376 100644 --- a/net/bluetooth/hci_event.c +++ b/net/bluetooth/hci_event.c @@ -5451,7 +5451,7 @@ static void hci_le_adv_report_evt(struct hci_dev *hdev, struct sk_buff *skb) hci_dev_unlock(hdev); } -static u8 ext_evt_type_to_legacy(u16 evt_type) +static u8 ext_evt_type_to_legacy(struct hci_dev *hdev, u16 evt_type) { if (evt_type & LE_EXT_ADV_LEGACY_PDU) { switch (evt_type) { @@ -5468,10 +5468,7 @@ static u8 ext_evt_type_to_legacy(u16 evt_type) return LE_ADV_SCAN_RSP; } - BT_ERR_RATELIMITED("Unknown advertising packet type: 0x%02x", - evt_type); - - return LE_ADV_INVALID; + goto invalid; } if (evt_type & LE_EXT_ADV_CONN_IND) { @@ -5491,8 +5488,9 @@ static u8 ext_evt_type_to_legacy(u16 evt_type) evt_type & LE_EXT_ADV_DIRECT_IND) return LE_ADV_NONCONN_IND; - BT_ERR_RATELIMITED("Unknown advertising packet type: 0x%02x", - evt_type); +invalid: + bt_dev_err_ratelimited(hdev, "Unknown advertising packet type: 0x%02x", + evt_type); return LE_ADV_INVALID; } @@ -5510,7 +5508,7 @@ static void hci_le_ext_adv_report_evt(struct hci_dev *hdev, struct sk_buff *skb) u16 evt_type; evt_type = __le16_to_cpu(ev->evt_type); - legacy_evt_type = ext_evt_type_to_legacy(evt_type); + legacy_evt_type = ext_evt_type_to_legacy(hdev, evt_type); if (legacy_evt_type != LE_ADV_INVALID) { process_adv_report(hdev, legacy_evt_type, &ev->bdaddr, ev->bdaddr_type, NULL, 0, ev->rssi, -- cgit v1.2.3-71-gd317 From 1efd927d660e6ab02a9cd32fbbe3c7dc47980132 Mon Sep 17 00:00:00 2001 From: Luiz Augusto von Dentz Date: Thu, 2 Jan 2020 15:00:55 -0800 Subject: Bluetooth: Add support for LE PHY Update Complete event This handles the LE PHY Update Complete event and stores both tx_phy and rx_phy into hci_conn.
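For illustration, here is a minimal standalone sketch of how this event's payload is laid out on the wire, mirroring the struct this patch adds to include/net/bluetooth/hci.h; the sample bytes, the struct name and the decoding shortcut are assumptions made for the example, not kernel code:

/* Build: cc -o phy_update phy_update.c */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Mirrors struct hci_ev_le_phy_update_complete added by this patch */
struct le_phy_update_complete {
	uint8_t  status;	/* 0x00 means success */
	uint16_t handle;	/* connection handle, little-endian on the wire */
	uint8_t  tx_phy;	/* 0x01 LE 1M, 0x02 LE 2M, 0x03 LE Coded */
	uint8_t  rx_phy;
} __attribute__((packed));

int main(void)
{
	/* Made-up payload: success, handle 0x0040, both PHYs LE 2M */
	const uint8_t payload[] = { 0x00, 0x40, 0x00, 0x02, 0x02 };
	struct le_phy_update_complete ev;
	uint16_t handle;

	memcpy(&ev, payload, sizeof(ev));
	if (ev.status) {
		fprintf(stderr, "PHY update failed: 0x%02x\n", ev.status);
		return 1;
	}

	/* Decode the handle explicitly rather than trusting host byte
	 * order; the kernel handler uses __le16_to_cpu() for the same
	 * reason.
	 */
	handle = (uint16_t)(payload[1] | (payload[2] << 8));
	printf("handle 0x%04x: tx_phy %u, rx_phy %u\n",
	       handle, (unsigned int)ev.tx_phy, (unsigned int)ev.rx_phy);
	return 0;
}

The in-kernel handler below does the equivalent and stores the two PHYs in the matching hci_conn on success.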
Signed-off-by: Luiz Augusto von Dentz Signed-off-by: Marcel Holtmann --- include/net/bluetooth/hci.h | 8 ++++++++ include/net/bluetooth/hci_core.h | 2 ++ net/bluetooth/hci_event.c | 27 +++++++++++++++++++++++++++ 3 files changed, 37 insertions(+) (limited to 'include') diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h index 5bc1e30dedde..07b6ecedc6ce 100644 --- a/include/net/bluetooth/hci.h +++ b/include/net/bluetooth/hci.h @@ -2186,6 +2186,14 @@ struct hci_ev_le_direct_adv_info { __s8 rssi; } __packed; +#define HCI_EV_LE_PHY_UPDATE_COMPLETE 0x0c +struct hci_ev_le_phy_update_complete { + __u8 status; + __le16 handle; + __u8 tx_phy; + __u8 rx_phy; +} __packed; + #define HCI_EV_LE_EXT_ADV_REPORT 0x0d struct hci_ev_le_ext_adv_report { __le16 evt_type; diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h index b689aceb636b..faebe3859931 100644 --- a/include/net/bluetooth/hci_core.h +++ b/include/net/bluetooth/hci_core.h @@ -493,6 +493,8 @@ struct hci_conn { __u16 le_supv_timeout; __u8 le_adv_data[HCI_MAX_AD_LENGTH]; __u8 le_adv_data_len; + __u8 le_tx_phy; + __u8 le_rx_phy; __s8 rssi; __s8 tx_power; __s8 max_tx_power; diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c index 1941f120a376..6ddc4a74a5e4 100644 --- a/net/bluetooth/hci_event.c +++ b/net/bluetooth/hci_event.c @@ -5718,6 +5718,29 @@ static void hci_le_direct_adv_report_evt(struct hci_dev *hdev, hci_dev_unlock(hdev); } +static void hci_le_phy_update_evt(struct hci_dev *hdev, struct sk_buff *skb) +{ + struct hci_ev_le_phy_update_complete *ev = (void *) skb->data; + struct hci_conn *conn; + + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status); + + if (ev->status) + return; + + hci_dev_lock(hdev); + + conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle)); + if (!conn) + goto unlock; + + conn->le_tx_phy = ev->tx_phy; + conn->le_rx_phy = ev->rx_phy; + +unlock: + hci_dev_unlock(hdev); +} + static void hci_le_meta_evt(struct hci_dev *hdev, struct sk_buff *skb) { struct hci_ev_le_meta *le_ev = (void *) skb->data; @@ -5753,6 +5776,10 @@ static void hci_le_meta_evt(struct hci_dev *hdev, struct sk_buff *skb) hci_le_direct_adv_report_evt(hdev, skb); break; + case HCI_EV_LE_PHY_UPDATE_COMPLETE: + hci_le_phy_update_evt(hdev, skb); + break; + case HCI_EV_LE_EXT_ADV_REPORT: hci_le_ext_adv_report_evt(hdev, skb); break; -- cgit v1.2.3-71-gd317 From 14a65084f9310ba6a4017c365f9c9820b099dde5 Mon Sep 17 00:00:00 2001 From: Krzysztof Kozlowski Date: Fri, 3 Jan 2020 18:11:30 +0100 Subject: net: ethernet: sxgbe: Rename Samsung to lowercase Fix up inconsistent usage of upper and lowercase letters in the "Samsung" name. "SAMSUNG" is not an abbreviation but a regular trademarked name. Therefore it should be written with lowercase letters starting with a capital letter. Although advertisement materials usually use uppercase "SAMSUNG", the lowercase version is used in all legal aspects (e.g. on Wikipedia and in privacy/legal statements on https://www.samsung.com/semiconductor/privacy-global/). Signed-off-by: Krzysztof Kozlowski Signed-off-by: David S.
Miller --- drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c | 2 +- include/linux/sxgbe_platform.h | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c b/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c index cd6e0de48248..7d3a1c0df09c 100644 --- a/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c +++ b/drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c @@ -2296,7 +2296,7 @@ __setup("sxgbeeth=", sxgbe_cmdline_opt); -MODULE_DESCRIPTION("SAMSUNG 10G/2.5G/1G Ethernet PLATFORM driver"); +MODULE_DESCRIPTION("Samsung 10G/2.5G/1G Ethernet PLATFORM driver"); MODULE_PARM_DESC(debug, "Message Level (-1: default, 0: no output, 16: all)"); MODULE_PARM_DESC(eee_timer, "EEE-LPI Default LS timer value"); diff --git a/include/linux/sxgbe_platform.h b/include/linux/sxgbe_platform.h index 85ec745767bd..966146f7267a 100644 --- a/include/linux/sxgbe_platform.h +++ b/include/linux/sxgbe_platform.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* - * 10G controller driver for Samsung EXYNOS SoCs + * 10G controller driver for Samsung Exynos SoCs * * Copyright (C) 2013 Samsung Electronics Co., Ltd. * http://www.samsung.com -- cgit v1.2.3-71-gd317 From c114574ebfdf42f826776f717c8056a00fa94881 Mon Sep 17 00:00:00 2001 From: Russell King Date: Fri, 3 Jan 2020 20:43:17 +0000 Subject: net: phy: add PHY_INTERFACE_MODE_10GBASER Recent discussion has revealed that the use of PHY_INTERFACE_MODE_10GKR is incorrect. Add a 10GBASE-R definition, document both the -R and -KR versions, and the fact that 10GKR was used incorrectly. Reviewed-by: Andrew Lunn Signed-off-by: Russell King Signed-off-by: David S. Miller --- Documentation/networking/phy.rst | 18 ++++++++++++++++++ include/linux/phy.h | 12 ++++++++---- 2 files changed, 26 insertions(+), 4 deletions(-) (limited to 'include') diff --git a/Documentation/networking/phy.rst b/Documentation/networking/phy.rst index e0a7c7af6525..1e4735cc0553 100644 --- a/Documentation/networking/phy.rst +++ b/Documentation/networking/phy.rst @@ -267,6 +267,24 @@ Some of the interface modes are described below: duplex, pause or other settings. This is dependent on the MAC and/or PHY behaviour. +``PHY_INTERFACE_MODE_10GBASER`` + This is the IEEE 802.3 Clause 49 defined 10GBASE-R protocol used with + various different mediums. Please refer to the IEEE standard for a + definition of this. + + Note: 10GBASE-R is just one protocol that can be used with XFI and SFI. + XFI and SFI permit multiple protocols over a single SERDES lane, and + also defines the electrical characteristics of the signals with a host + compliance board plugged into the host XFP/SFP connector. Therefore, + XFI and SFI are not PHY interface types in their own right. + +``PHY_INTERFACE_MODE_10GKR`` + This is the IEEE 802.3 Clause 49 defined 10GBASE-R with Clause 73 + autonegotiation. Please refer to the IEEE standard for further + information. + + Note: due to legacy usage, some 10GBASE-R usage incorrectly makes + use of this definition. 
Pause frames / flow control =========================== diff --git a/include/linux/phy.h b/include/linux/phy.h index 30e599c454db..5932bb8e9c35 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -100,9 +100,11 @@ typedef enum { PHY_INTERFACE_MODE_2500BASEX, PHY_INTERFACE_MODE_RXAUI, PHY_INTERFACE_MODE_XAUI, - /* 10GBASE-KR, XFI, SFI - single lane 10G Serdes */ - PHY_INTERFACE_MODE_10GKR, + /* 10GBASE-R, XFI, SFI - single lane 10G Serdes */ + PHY_INTERFACE_MODE_10GBASER, PHY_INTERFACE_MODE_USXGMII, + /* 10GBASE-KR - with Clause 73 AN */ + PHY_INTERFACE_MODE_10GKR, PHY_INTERFACE_MODE_MAX, } phy_interface_t; @@ -176,10 +178,12 @@ static inline const char *phy_modes(phy_interface_t interface) return "rxaui"; case PHY_INTERFACE_MODE_XAUI: return "xaui"; - case PHY_INTERFACE_MODE_10GKR: - return "10gbase-kr"; + case PHY_INTERFACE_MODE_10GBASER: + return "10gbase-r"; case PHY_INTERFACE_MODE_USXGMII: return "usxgmii"; + case PHY_INTERFACE_MODE_10GKR: + return "10gbase-kr"; default: return "unknown"; } -- cgit v1.2.3-71-gd317 From 0a51826c6e05c5b6cc423b376b81c311e9e485b0 Mon Sep 17 00:00:00 2001 From: Vladimir Oltean Date: Sat, 4 Jan 2020 02:37:09 +0200 Subject: net: dsa: sja1105: Always send through management routes in slot 0 I finally found out how the 4 management route slots are supposed to be used, but.. it's not worth it. The description from the comment I've just deleted in this commit is still true: when more than 1 management slot is active at the same time, the switch will match frames incoming [from the CPU port] on the lowest numbered management slot that matches the frame's DMAC. My issue was that one was not supposed to statically assign each port a slot. Yes, there are 4 slots and also 4 non-CPU ports, but that is a mere coincidence. Instead, the switch can be used like this: every management frame gets a slot at the right of the most recently assigned slot: Send mgmt frame 1 through S0: S0 x x x Send mgmt frame 2 through S1: S0 S1 x x Send mgmt frame 3 through S2: S0 S1 S2 x Send mgmt frame 4 through S3: S0 S1 S2 S3 The difference compared to the old usage is that the transmission of frames 1-4 doesn't need to wait until the completion of the management route. It is safe to use a slot to the right of the most recently used one, because by protocol nobody will program a slot to your left and "steal" your route towards the correct egress port. So there is a potential throughput benefit here. But mgmt frame 5 has no more free slot to use, so it has to wait until _all_ of S0, S1, S2, S3 are full, in order to use S0 again. And that's actually exactly the problem: I was looking for something that would bring more predictable transmission latency, but this is exactly the opposite: 3 out of 4 frames would be transmitted quicker, but the 4th would draw the short straw and have a worse worst-case latency than before. Useless. Things are made even worse by PTP TX timestamping, which is something I won't go deeply into here. Suffice to say that the fact there is a driver-level lock on the SPI bus offsets any potential throughput gains that parallelism might bring. So there's no going back to the multi-slot scheme, remove the "mgmt_slot" variable from sja1105_port and the dummy static assignment made at probe time. While passing by, also remove the assignment to casc_port altogether. Don't pretend that we support cascaded setups. Signed-off-by: Vladimir Oltean Reviewed-by: Florian Fainelli Signed-off-by: David S. 
Miller --- drivers/net/dsa/sja1105/sja1105_main.c | 26 +------------------------- include/linux/dsa/sja1105.h | 1 - 2 files changed, 1 insertion(+), 26 deletions(-) (limited to 'include') diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c index 1da5ac111499..79dd965227bc 100644 --- a/drivers/net/dsa/sja1105/sja1105_main.c +++ b/drivers/net/dsa/sja1105/sja1105_main.c @@ -426,14 +426,6 @@ static int sja1105_init_general_params(struct sja1105_private *priv) .tpid2 = ETH_P_SJA1105, }; struct sja1105_table *table; - int i, k = 0; - - for (i = 0; i < SJA1105_NUM_PORTS; i++) { - if (dsa_is_dsa_port(priv->ds, i)) - default_general_params.casc_port = i; - else if (dsa_is_user_port(priv->ds, i)) - priv->ports[i].mgmt_slot = k++; - } table = &priv->static_config.tables[BLK_IDX_GENERAL_PARAMS]; @@ -1827,30 +1819,14 @@ static netdev_tx_t sja1105_port_deferred_xmit(struct dsa_switch *ds, int port, struct sk_buff *skb) { struct sja1105_private *priv = ds->priv; - struct sja1105_port *sp = &priv->ports[port]; - int slot = sp->mgmt_slot; struct sk_buff *clone; - /* The tragic fact about the switch having 4x2 slots for installing - * management routes is that all of them except one are actually - * useless. - * If 2 slots are simultaneously configured for two BPDUs sent to the - * same (multicast) DMAC but on different egress ports, the switch - * would confuse them and redirect first frame it receives on the CPU - * port towards the port configured on the numerically first slot - * (therefore wrong port), then second received frame on second slot - * (also wrong port). - * So for all practical purposes, there needs to be a lock that - * prevents that from happening. The slot used here is utterly useless - * (could have simply been 0 just as fine), but we are doing it - * nonetheless, in case a smarter idea ever comes up in the future. - */ mutex_lock(&priv->mgmt_lock); /* The clone, if there, was made by dsa_skb_tx_timestamp */ clone = DSA_SKB_CB(skb)->clone; - sja1105_mgmt_xmit(ds, port, slot, skb, !!clone); + sja1105_mgmt_xmit(ds, port, 0, skb, !!clone); if (!clone) goto out; diff --git a/include/linux/dsa/sja1105.h b/include/linux/dsa/sja1105.h index c0b6a603ea8c..317e05b2584b 100644 --- a/include/linux/dsa/sja1105.h +++ b/include/linux/dsa/sja1105.h @@ -56,7 +56,6 @@ struct sja1105_port { struct sja1105_tagger_data *data; struct dsa_port *dp; bool hwts_tx_en; - int mgmt_slot; }; #endif /* _NET_DSA_SJA1105_H */ -- cgit v1.2.3-71-gd317 From a68578c20a9667463ee3000402b21644ea62d753 Mon Sep 17 00:00:00 2001 From: Vladimir Oltean Date: Sat, 4 Jan 2020 02:37:10 +0200 Subject: net: dsa: Make deferred_xmit private to sja1105 There are 3 things that are wrong with the DSA deferred xmit mechanism: 1. Its introduction has made the DSA hotpath ever so slightly more inefficient for everybody, since DSA_SKB_CB(skb)->deferred_xmit needs to be initialized to false for every transmitted frame, in order to figure out whether the driver requested deferral or not (a very rare occasion, rare even for the only driver that does use this mechanism: sja1105). That was necessary to avoid kfree_skb from freeing the skb. 2. Because L2 PTP is a link-local protocol like STP, it requires management routes and deferred xmit with this switch. But as opposed to STP, the deferred work mechanism needs to schedule the packet rather quickly for the TX timestamp to be collected in time and sent to user space.
But there is no provision for controlling the scheduling priority of this deferred xmit workqueue. Too bad this is a rather specific requirement for a feature that nobody else uses (more below). 3. Perhaps most importantly, it makes the DSA core adhere a bit too much to the NXP company-wide policy "Innovate Where It Doesn't Matter". The sja1105 is probably the only DSA switch that requires some frames sent from the CPU to be routed to the slave port via an out-of-band configuration (register write) rather than in-band (DSA tag). And there are indeed very good reasons to not want to do that: if that out-of-band register is at the other end of a slow bus such as SPI, then you limit that Ethernet flow's throughput to effectively the throughput of the SPI bus. So hardware vendors should definitely not be encouraged to design this way. We do _not_ want more widespread use of this mechanism. Luckily we have a solution for each of the 3 issues: For 1, we can just remove that variable in the skb->cb and counteract the effect of kfree_skb with skb_get, much to the same effect. The advantage, of course, being that anybody who doesn't use deferred xmit doesn't need to do any extra operation in the hotpath. For 2, we can create a kernel thread for each port's deferred xmit work. If the user switch ports are named swp0, swp1, swp2, the kernel threads will be named swp0_xmit, swp1_xmit, swp2_xmit (there appears to be a 15 character length limit on kernel thread names). With this, the user can change the scheduling priority with chrt $(pidof swp2_xmit). For 3, we can actually move the entire implementation to the sja1105 driver. So this patch deletes the generic implementation from the DSA core and adds a new one, more adequate to the requirements of PTP TX timestamping, in sja1105_main.c. Suggested-by: Florian Fainelli Signed-off-by: Vladimir Oltean Reviewed-by: Florian Fainelli Signed-off-by: David S. 
Miller --- drivers/net/dsa/sja1105/sja1105_main.c | 96 ++++++++++++++++++++++++++-------- include/linux/dsa/sja1105.h | 3 ++ include/net/dsa.h | 9 ---- net/dsa/dsa_priv.h | 2 - net/dsa/slave.c | 37 +------------ net/dsa/tag_sja1105.c | 15 +++++- 6 files changed, 93 insertions(+), 69 deletions(-) (limited to 'include') diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c index 79dd965227bc..61795833c8f5 100644 --- a/drivers/net/dsa/sja1105/sja1105_main.c +++ b/drivers/net/dsa/sja1105/sja1105_main.c @@ -1732,6 +1732,16 @@ static int sja1105_setup(struct dsa_switch *ds) static void sja1105_teardown(struct dsa_switch *ds) { struct sja1105_private *priv = ds->priv; + int port; + + for (port = 0; port < SJA1105_NUM_PORTS; port++) { + struct sja1105_port *sp = &priv->ports[port]; + + if (!dsa_is_user_port(ds, port)) + continue; + + kthread_destroy_worker(sp->xmit_worker); + } sja1105_tas_teardown(ds); sja1105_ptp_clock_unregister(ds); @@ -1753,6 +1763,18 @@ static int sja1105_port_enable(struct dsa_switch *ds, int port, return 0; } +static void sja1105_port_disable(struct dsa_switch *ds, int port) +{ + struct sja1105_private *priv = ds->priv; + struct sja1105_port *sp = &priv->ports[port]; + + if (!dsa_is_user_port(ds, port)) + return; + + kthread_cancel_work_sync(&sp->xmit_work); + skb_queue_purge(&sp->xmit_queue); +} + static int sja1105_mgmt_xmit(struct dsa_switch *ds, int port, int slot, struct sk_buff *skb, bool takets) { @@ -1811,31 +1833,36 @@ static int sja1105_mgmt_xmit(struct dsa_switch *ds, int port, int slot, return NETDEV_TX_OK; } +#define work_to_port(work) \ + container_of((work), struct sja1105_port, xmit_work) +#define tagger_to_sja1105(t) \ + container_of((t), struct sja1105_private, tagger_data) + /* Deferred work is unfortunately necessary because setting up the management * route cannot be done from atomic context (SPI transfer takes a sleepable * lock on the bus) */ -static netdev_tx_t sja1105_port_deferred_xmit(struct dsa_switch *ds, int port, - struct sk_buff *skb) +static void sja1105_port_deferred_xmit(struct kthread_work *work) { - struct sja1105_private *priv = ds->priv; - struct sk_buff *clone; - - mutex_lock(&priv->mgmt_lock); + struct sja1105_port *sp = work_to_port(work); + struct sja1105_tagger_data *tagger_data = sp->data; + struct sja1105_private *priv = tagger_to_sja1105(tagger_data); + int port = sp - priv->ports; + struct sk_buff *skb; - /* The clone, if there, was made by dsa_skb_tx_timestamp */ - clone = DSA_SKB_CB(skb)->clone; + while ((skb = skb_dequeue(&sp->xmit_queue)) != NULL) { + struct sk_buff *clone = DSA_SKB_CB(skb)->clone; - sja1105_mgmt_xmit(ds, port, 0, skb, !!clone); + mutex_lock(&priv->mgmt_lock); - if (!clone) - goto out; + sja1105_mgmt_xmit(priv->ds, port, 0, skb, !!clone); - sja1105_ptp_txtstamp_skb(ds, port, clone); + /* The clone, if there, was made by dsa_skb_tx_timestamp */ + if (clone) + sja1105_ptp_txtstamp_skb(priv->ds, port, clone); -out: - mutex_unlock(&priv->mgmt_lock); - return NETDEV_TX_OK; + mutex_unlock(&priv->mgmt_lock); + } } /* The MAXAGE setting belongs to the L2 Forwarding Parameters table, @@ -1966,6 +1993,7 @@ static const struct dsa_switch_ops sja1105_switch_ops = { .get_sset_count = sja1105_get_sset_count, .get_ts_info = sja1105_get_ts_info, .port_enable = sja1105_port_enable, + .port_disable = sja1105_port_disable, .port_fdb_dump = sja1105_fdb_dump, .port_fdb_add = sja1105_fdb_add, .port_fdb_del = sja1105_fdb_del, @@ -1979,7 +2007,6 @@ static const struct dsa_switch_ops
sja1105_switch_ops = { .port_mdb_prepare = sja1105_mdb_prepare, .port_mdb_add = sja1105_mdb_add, .port_mdb_del = sja1105_mdb_del, - .port_deferred_xmit = sja1105_port_deferred_xmit, .port_hwtstamp_get = sja1105_hwtstamp_get, .port_hwtstamp_set = sja1105_hwtstamp_set, .port_rxtstamp = sja1105_port_rxtstamp, @@ -2031,7 +2058,7 @@ static int sja1105_probe(struct spi_device *spi) struct device *dev = &spi->dev; struct sja1105_private *priv; struct dsa_switch *ds; - int rc, i; + int rc, port; if (!dev->of_node) { dev_err(dev, "No DTS bindings for SJA1105 driver\n"); @@ -2096,15 +2123,42 @@ static int sja1105_probe(struct spi_device *spi) return rc; /* Connections between dsa_port and sja1105_port */ - for (i = 0; i < SJA1105_NUM_PORTS; i++) { - struct sja1105_port *sp = &priv->ports[i]; + for (port = 0; port < SJA1105_NUM_PORTS; port++) { + struct sja1105_port *sp = &priv->ports[port]; + struct dsa_port *dp = dsa_to_port(ds, port); + struct net_device *slave; + + if (!dsa_is_user_port(ds, port)) + continue; - dsa_to_port(ds, i)->priv = sp; - sp->dp = dsa_to_port(ds, i); + dp->priv = sp; + sp->dp = dp; sp->data = tagger_data; + slave = dp->slave; + kthread_init_work(&sp->xmit_work, sja1105_port_deferred_xmit); + sp->xmit_worker = kthread_create_worker(0, "%s_xmit", + slave->name); + if (IS_ERR(sp->xmit_worker)) { + rc = PTR_ERR(sp->xmit_worker); + dev_err(ds->dev, + "failed to create deferred xmit thread: %d\n", + rc); + goto out; + } + skb_queue_head_init(&sp->xmit_queue); } return 0; +out: + while (port-- > 0) { + struct sja1105_port *sp = &priv->ports[port]; + + if (!dsa_is_user_port(ds, port)) + continue; + + kthread_destroy_worker(sp->xmit_worker); + } + return rc; } static int sja1105_remove(struct spi_device *spi) diff --git a/include/linux/dsa/sja1105.h b/include/linux/dsa/sja1105.h index 317e05b2584b..fa5735c353cd 100644 --- a/include/linux/dsa/sja1105.h +++ b/include/linux/dsa/sja1105.h @@ -53,6 +53,9 @@ struct sja1105_skb_cb { ((struct sja1105_skb_cb *)DSA_SKB_CB_PRIV(skb)) struct sja1105_port { + struct kthread_worker *xmit_worker; + struct kthread_work xmit_work; + struct sk_buff_head xmit_queue; struct sja1105_tagger_data *data; struct dsa_port *dp; bool hwts_tx_en; diff --git a/include/net/dsa.h b/include/net/dsa.h index da5578db228e..23b1c58656d4 100644 --- a/include/net/dsa.h +++ b/include/net/dsa.h @@ -90,7 +90,6 @@ struct dsa_device_ops { struct dsa_skb_cb { struct sk_buff *clone; - bool deferred_xmit; }; struct __dsa_skb_cb { @@ -192,9 +191,6 @@ struct dsa_port { struct phylink *pl; struct phylink_config pl_config; - struct work_struct xmit_work; - struct sk_buff_head xmit_queue; - struct list_head list; /* @@ -564,11 +560,6 @@ struct dsa_switch_ops { bool (*port_rxtstamp)(struct dsa_switch *ds, int port, struct sk_buff *skb, unsigned int type); - /* - * Deferred frame Tx - */ - netdev_tx_t (*port_deferred_xmit)(struct dsa_switch *ds, int port, - struct sk_buff *skb); /* Devlink parameters */ int (*devlink_param_get)(struct dsa_switch *ds, u32 id, struct devlink_param_gset_ctx *ctx); diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h index 09ea2fd78c74..8a162605b861 100644 --- a/net/dsa/dsa_priv.h +++ b/net/dsa/dsa_priv.h @@ -162,8 +162,6 @@ int dsa_slave_resume(struct net_device *slave_dev); int dsa_slave_register_notifier(void); void dsa_slave_unregister_notifier(void); -void *dsa_defer_xmit(struct sk_buff *skb, struct net_device *dev); - static inline struct dsa_port *dsa_slave_to_port(const struct net_device *dev) { struct dsa_slave_priv *p = netdev_priv(dev); diff 
--git a/net/dsa/slave.c b/net/dsa/slave.c index 78ffc87dc25e..c1828bdc79dc 100644 --- a/net/dsa/slave.c +++ b/net/dsa/slave.c @@ -116,9 +116,6 @@ static int dsa_slave_close(struct net_device *dev) struct net_device *master = dsa_slave_to_master(dev); struct dsa_port *dp = dsa_slave_to_port(dev); - cancel_work_sync(&dp->xmit_work); - skb_queue_purge(&dp->xmit_queue); - phylink_stop(dp->pl); dsa_port_disable(dp); @@ -518,7 +515,6 @@ static netdev_tx_t dsa_slave_xmit(struct sk_buff *skb, struct net_device *dev) s->tx_bytes += skb->len; u64_stats_update_end(&s->syncp); - DSA_SKB_CB(skb)->deferred_xmit = false; DSA_SKB_CB(skb)->clone = NULL; /* Identify PTP protocol packets, clone them, and pass them to the @@ -531,39 +527,13 @@ static netdev_tx_t dsa_slave_xmit(struct sk_buff *skb, struct net_device *dev) */ nskb = p->xmit(skb, dev); if (!nskb) { - if (!DSA_SKB_CB(skb)->deferred_xmit) - kfree_skb(skb); + kfree_skb(skb); return NETDEV_TX_OK; } return dsa_enqueue_skb(nskb, dev); } -void *dsa_defer_xmit(struct sk_buff *skb, struct net_device *dev) -{ - struct dsa_port *dp = dsa_slave_to_port(dev); - - DSA_SKB_CB(skb)->deferred_xmit = true; - - skb_queue_tail(&dp->xmit_queue, skb); - schedule_work(&dp->xmit_work); - return NULL; -} -EXPORT_SYMBOL_GPL(dsa_defer_xmit); - -static void dsa_port_xmit_work(struct work_struct *work) -{ - struct dsa_port *dp = container_of(work, struct dsa_port, xmit_work); - struct dsa_switch *ds = dp->ds; - struct sk_buff *skb; - - if (unlikely(!ds->ops->port_deferred_xmit)) - return; - - while ((skb = skb_dequeue(&dp->xmit_queue)) != NULL) - ds->ops->port_deferred_xmit(ds, dp->index, skb); -} - /* ethtool operations *******************************************************/ static void dsa_slave_get_drvinfo(struct net_device *dev, @@ -1367,9 +1337,6 @@ int dsa_slave_suspend(struct net_device *slave_dev) if (!netif_running(slave_dev)) return 0; - cancel_work_sync(&dp->xmit_work); - skb_queue_purge(&dp->xmit_queue); - netif_device_detach(slave_dev); rtnl_lock(); @@ -1455,8 +1422,6 @@ int dsa_slave_create(struct dsa_port *port) } p->dp = port; INIT_LIST_HEAD(&p->mall_tc_list); - INIT_WORK(&port->xmit_work, dsa_port_xmit_work); - skb_queue_head_init(&port->xmit_queue); p->xmit = cpu_dp->tag_ops->xmit; port->slave = slave_dev; diff --git a/net/dsa/tag_sja1105.c b/net/dsa/tag_sja1105.c index 63ef2a14c934..7c2b84393cc6 100644 --- a/net/dsa/tag_sja1105.c +++ b/net/dsa/tag_sja1105.c @@ -83,6 +83,19 @@ static bool sja1105_filter(const struct sk_buff *skb, struct net_device *dev) return false; } +/* Calls sja1105_port_deferred_xmit in sja1105_main.c */ +static struct sk_buff *sja1105_defer_xmit(struct sja1105_port *sp, + struct sk_buff *skb) +{ + /* Increase refcount so the kfree_skb in dsa_slave_xmit + * won't really free the packet. + */ + skb_queue_tail(&sp->xmit_queue, skb_get(skb)); + kthread_queue_work(sp->xmit_worker, &sp->xmit_work); + + return NULL; +} + static struct sk_buff *sja1105_xmit(struct sk_buff *skb, struct net_device *netdev) { @@ -97,7 +110,7 @@ static struct sk_buff *sja1105_xmit(struct sk_buff *skb, * is the .port_deferred_xmit driver callback. 
*/ if (unlikely(sja1105_is_link_local(skb))) - return dsa_defer_xmit(skb, netdev); + return sja1105_defer_xmit(dp->priv, skb); /* If we are under a vlan_filtering bridge, IP termination on * switch ports based on 802.1Q tags is simply too brittle to -- cgit v1.2.3-71-gd317 From 6c930994503d9b5bd34b1329427dd7d3d6d37cd4 Mon Sep 17 00:00:00 2001 From: Vladimir Oltean Date: Mon, 6 Jan 2020 03:34:09 +0200 Subject: mii: Add helpers for parsing SGMII auto-negotiation Typically a MAC PCS auto-configures itself after it receives the negotiated copper-side link settings from the PHY, but some MAC devices are more special and need manual interpretation of the SGMII AN result. In other cases, the PCS exposes the entire tx_config_reg base page as it is transmitted on the wire during auto-negotiation, so it makes sense to be able to decode the equivalent lp_advertised bit mask from the raw u16 (of course, "lp" considering the PCS to be the local PHY). Therefore, add the bit definitions for the SGMII registers 4 and 5 (local device ability, link partner ability), as well as a link_mode conversion helper that can be used to feed the AN results into phy_resolve_aneg_linkmode. Signed-off-by: Vladimir Oltean Signed-off-by: David S. Miller --- include/linux/mii.h | 50 ++++++++++++++++++++++++++++++++++++++++++++++++ include/uapi/linux/mii.h | 12 ++++++++++++ 2 files changed, 62 insertions(+) (limited to 'include') diff --git a/include/linux/mii.h b/include/linux/mii.h index 4ce8901a1af6..18c6208f56fc 100644 --- a/include/linux/mii.h +++ b/include/linux/mii.h @@ -372,6 +372,56 @@ static inline u32 mii_lpa_to_ethtool_lpa_x(u32 lpa) return result | mii_adv_to_ethtool_adv_x(lpa); } +/** + * mii_lpa_mod_linkmode_adv_sgmii + * @lp_advertising: pointer to destination link mode. + * @lpa: value of the MII_LPA register + * + * A small helper function that translates MII_LPA bits to + * linkmode advertisement settings for SGMII. + * Leaves other bits unchanged. + */ +static inline void +mii_lpa_mod_linkmode_lpa_sgmii(unsigned long *lp_advertising, u32 lpa) +{ + u32 speed_duplex = lpa & LPA_SGMII_DPX_SPD_MASK; + + linkmode_mod_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT, lp_advertising, + speed_duplex == LPA_SGMII_1000HALF); + + linkmode_mod_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT, lp_advertising, + speed_duplex == LPA_SGMII_1000FULL); + + linkmode_mod_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, lp_advertising, + speed_duplex == LPA_SGMII_100HALF); + + linkmode_mod_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT, lp_advertising, + speed_duplex == LPA_SGMII_100FULL); + + linkmode_mod_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT, lp_advertising, + speed_duplex == LPA_SGMII_10HALF); + + linkmode_mod_bit(ETHTOOL_LINK_MODE_10baseT_Full_BIT, lp_advertising, + speed_duplex == LPA_SGMII_10FULL); +} + +/** + * mii_lpa_to_linkmode_adv_sgmii + * @advertising: pointer to destination link mode. + * @lpa: value of the MII_LPA register + * + * A small helper function that translates MII_ADVERTISE bits + * to linkmode advertisement settings when in SGMII mode. + * Clears the old value of advertising. + */ +static inline void mii_lpa_to_linkmode_lpa_sgmii(unsigned long *lp_advertising, + u32 lpa) +{ + linkmode_zero(lp_advertising); + + mii_lpa_mod_linkmode_lpa_sgmii(lp_advertising, lpa); +} + /** * mii_adv_mod_linkmode_adv_t * @advertising:pointer to destination link mode. 
diff --git a/include/uapi/linux/mii.h b/include/uapi/linux/mii.h index 51b48e4be1f2..0b9c3beda345 100644 --- a/include/uapi/linux/mii.h +++ b/include/uapi/linux/mii.h @@ -131,6 +131,18 @@ #define NWAYTEST_LOOPBACK 0x0100 /* Enable loopback for N-way */ #define NWAYTEST_RESV2 0xfe00 /* Unused... */ +/* MAC and PHY tx_config_Reg[15:0] for SGMII in-band auto-negotiation.*/ +#define ADVERTISE_SGMII 0x0001 /* MAC can do SGMII */ +#define LPA_SGMII 0x0001 /* PHY can do SGMII */ +#define LPA_SGMII_DPX_SPD_MASK 0x1C00 /* SGMII duplex and speed bits */ +#define LPA_SGMII_10HALF 0x0000 /* Can do 10mbps half-duplex */ +#define LPA_SGMII_10FULL 0x1000 /* Can do 10mbps full-duplex */ +#define LPA_SGMII_100HALF 0x0400 /* Can do 100mbps half-duplex */ +#define LPA_SGMII_100FULL 0x1400 /* Can do 100mbps full-duplex */ +#define LPA_SGMII_1000HALF 0x0800 /* Can do 1000mbps half-duplex */ +#define LPA_SGMII_1000FULL 0x1800 /* Can do 1000mbps full-duplex */ +#define LPA_SGMII_LINK 0x8000 /* PHY link with copper-side partner */ + /* 1000BASE-T Control register */ #define ADVERTISE_1000FULL 0x0200 /* Advertise 1000BASE-T full duplex */ #define ADVERTISE_1000HALF 0x0100 /* Advertise 1000BASE-T half duplex */ -- cgit v1.2.3-71-gd317 From 1511ed0a0167f523a84b4e727372a5d2ce1b6c2f Mon Sep 17 00:00:00 2001 From: Vladimir Oltean Date: Mon, 6 Jan 2020 03:34:11 +0200 Subject: net: phylink: add support for polling MAC PCS Some MAC PCS blocks are unable to provide interrupts when their status changes. As we already have support in phylink for polling status, use this to provide a hook for MACs to enable polling mode. The patch idea was picked up from Russell King's suggestion on the macb phylink patch thread here [0] but the implementation was changed. Instead of introducing a new phylink_start_poll() function, which would make the implementation cumbersome for common PHYLINK implementations for multiple types of devices, like DSA, just add a boolean property to the phylink_config structure, which is just as backwards-compatible. https://lkml.org/lkml/2019/12/16/603 Suggested-by: Russell King Signed-off-by: Vladimir Oltean Signed-off-by: David S. Miller --- Documentation/networking/sfp-phylink.rst | 3 ++- drivers/net/phy/phylink.c | 3 ++- include/linux/phylink.h | 2 ++ 3 files changed, 6 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/Documentation/networking/sfp-phylink.rst b/Documentation/networking/sfp-phylink.rst index a5e00a159d21..d753a309f9d1 100644 --- a/Documentation/networking/sfp-phylink.rst +++ b/Documentation/networking/sfp-phylink.rst @@ -251,7 +251,8 @@ this documentation. phylink_mac_change(priv->phylink, link_is_up); where ``link_is_up`` is true if the link is currently up or false - otherwise. + otherwise. If a MAC is unable to provide these interrupts, then + it should set ``priv->phylink_config.pcs_poll = true;`` in step 9. 11. 
Verify that the driver does not call:: diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c index 88686e0f9ae1..af914a8842bd 100644 --- a/drivers/net/phy/phylink.c +++ b/drivers/net/phy/phylink.c @@ -1022,7 +1022,8 @@ void phylink_start(struct phylink *pl) if (irq <= 0) mod_timer(&pl->link_poll, jiffies + HZ); } - if (pl->cfg_link_an_mode == MLO_AN_FIXED && pl->get_fixed_state) + if ((pl->cfg_link_an_mode == MLO_AN_FIXED && pl->get_fixed_state) || + pl->config->pcs_poll) mod_timer(&pl->link_poll, jiffies + HZ); if (pl->phydev) phy_start(pl->phydev); diff --git a/include/linux/phylink.h b/include/linux/phylink.h index fed5488e3c75..523209e70947 100644 --- a/include/linux/phylink.h +++ b/include/linux/phylink.h @@ -63,10 +63,12 @@ enum phylink_op_type { * struct phylink_config - PHYLINK configuration structure * @dev: a pointer to a struct device associated with the MAC * @type: operation type of PHYLINK instance + * @pcs_poll: MAC PCS cannot provide link change interrupt */ struct phylink_config { struct device *dev; enum phylink_op_type type; + bool pcs_poll; }; /** -- cgit v1.2.3-71-gd317 From 787cac3f5a650fd3184a41c5a27a2fe9ded833aa Mon Sep 17 00:00:00 2001 From: Vladimir Oltean Date: Mon, 6 Jan 2020 03:34:12 +0200 Subject: net: dsa: Pass pcs_poll flag from driver to PHYLINK The DSA drivers that implement .phylink_mac_link_state should normally register an interrupt for the PCS, from which they should call phylink_mac_change(). However not all switches implement this, and those who don't should set this flag in dsa_switch in the .setup callback, so that PHYLINK will poll for a few ms until the in-band AN link timer expires and the PCS state settles. Signed-off-by: Vladimir Oltean Signed-off-by: David S. Miller --- include/net/dsa.h | 5 +++++ net/dsa/port.c | 1 + 2 files changed, 6 insertions(+) (limited to 'include') diff --git a/include/net/dsa.h b/include/net/dsa.h index 23b1c58656d4..0c39fed8cd99 100644 --- a/include/net/dsa.h +++ b/include/net/dsa.h @@ -279,6 +279,11 @@ struct dsa_switch { */ bool vlan_filtering; + /* MAC PCS does not provide link state change interrupt, and requires + * polling. Flag passed on to PHYLINK. + */ + bool pcs_poll; + size_t num_ports; }; diff --git a/net/dsa/port.c b/net/dsa/port.c index ffb5601f7ed6..774facb8d547 100644 --- a/net/dsa/port.c +++ b/net/dsa/port.c @@ -599,6 +599,7 @@ static int dsa_port_phylink_register(struct dsa_port *dp) dp->pl_config.dev = ds->dev; dp->pl_config.type = PHYLINK_DEV; + dp->pl_config.pcs_poll = ds->pcs_poll; dp->pl = phylink_create(&dp->pl_config, of_fwnode_handle(port_dn), mode, &dsa_port_phylink_mac_ops); -- cgit v1.2.3-71-gd317 From 6517798dd3432a0002109809bf74e4fcf9bb0c7d Mon Sep 17 00:00:00 2001 From: Claudiu Manoil Date: Mon, 6 Jan 2020 03:34:13 +0200 Subject: enetc: Make MDIO accessors more generic and export to include/linux/fsl Within the LS1028A SoC, the register map for the ENETC MDIO controller is instantiated a few times: for the central (external) MDIO controller, for the internal bus of each standalone ENETC port, and for the internal bus of the Felix switch. Refactoring is needed to support multiple MDIO buses from multiple drivers. The enetc_hw structure is made an opaque type and a smaller enetc_mdio_priv is created. 'mdio_base' - MDIO registers base address - is being parameterized, to be able to work with different MDIO register bases. The ENETC MDIO bus operations are exported from the fsl-enetc-mdio kernel object, the same that registers the central MDIO controller (the dedicated PF). 
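The shape of that refactor can be sketched in isolation as below; the array-backed register file and the ENETC_PM_IMDIO_BASE value are stand-ins invented for this example, and only the base-parameterization pattern mirrors the real accessors in enetc_mdio.c:

/* Two MDIO bus instances sharing one set of accessors, differing only
 * in mdio_base. Real hardware I/O is stubbed with a plain array.
 */
#include <stdint.h>
#include <stdio.h>

#define ENETC_EMDIO_BASE	0x1c00	/* central (external) controller */
#define ENETC_PM_IMDIO_BASE	0x8030	/* assumed base of a port-internal bus */
#define ENETC_MDIO_DATA		0x8	/* MDIO data register offset */

static uint32_t regs[0x10000];		/* fake port register space */

struct enetc_mdio_priv {
	uint32_t *hw;			/* stands in for struct enetc_hw */
	int mdio_base;			/* per-instance register base */
};

static uint32_t enetc_mdio_rd(struct enetc_mdio_priv *p, int off)
{
	return p->hw[(p->mdio_base + off) / 4];	/* byte offset -> word index */
}

static void enetc_mdio_wr(struct enetc_mdio_priv *p, int off, uint32_t val)
{
	p->hw[(p->mdio_base + off) / 4] = val;
}

int main(void)
{
	struct enetc_mdio_priv central = { regs, ENETC_EMDIO_BASE };
	struct enetc_mdio_priv port = { regs, ENETC_PM_IMDIO_BASE };

	/* Same helpers, two register windows */
	enetc_mdio_wr(&central, ENETC_MDIO_DATA, 0x1234);
	enetc_mdio_wr(&port, ENETC_MDIO_DATA, 0x5678);
	printf("central 0x%04x, port 0x%04x\n",
	       (unsigned int)enetc_mdio_rd(&central, ENETC_MDIO_DATA),
	       (unsigned int)enetc_mdio_rd(&port, ENETC_MDIO_DATA));
	return 0;
}

With the base no longer hardcoded, the PF driver, the standalone PCIe MDIO function and, later, the Felix switch can each register an MDIO bus on top of the same exported read/write helpers.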
The ENETC main driver has been changed to select it, and use its exported helpers to further register its private MDIO bus. The DSA Felix driver will do the same. Signed-off-by: Claudiu Manoil Signed-off-by: Vladimir Oltean Signed-off-by: David S. Miller --- drivers/net/ethernet/freescale/enetc/Kconfig | 1 + drivers/net/ethernet/freescale/enetc/Makefile | 2 +- drivers/net/ethernet/freescale/enetc/enetc_hw.h | 1 + drivers/net/ethernet/freescale/enetc/enetc_mdio.c | 110 +++++++++------------ drivers/net/ethernet/freescale/enetc/enetc_mdio.h | 12 --- .../net/ethernet/freescale/enetc/enetc_pci_mdio.c | 43 ++++---- drivers/net/ethernet/freescale/enetc/enetc_pf.c | 47 +++++++++ drivers/net/ethernet/freescale/enetc/enetc_pf.h | 4 - include/linux/fsl/enetc_mdio.h | 55 +++++++++++ 9 files changed, 176 insertions(+), 99 deletions(-) delete mode 100644 drivers/net/ethernet/freescale/enetc/enetc_mdio.h create mode 100644 include/linux/fsl/enetc_mdio.h (limited to 'include') diff --git a/drivers/net/ethernet/freescale/enetc/Kconfig b/drivers/net/ethernet/freescale/enetc/Kconfig index edad4ca46327..fe942de19597 100644 --- a/drivers/net/ethernet/freescale/enetc/Kconfig +++ b/drivers/net/ethernet/freescale/enetc/Kconfig @@ -2,6 +2,7 @@ config FSL_ENETC tristate "ENETC PF driver" depends on PCI && PCI_MSI && (ARCH_LAYERSCAPE || COMPILE_TEST) + select FSL_ENETC_MDIO select PHYLIB help This driver supports NXP ENETC gigabit ethernet controller PCIe diff --git a/drivers/net/ethernet/freescale/enetc/Makefile b/drivers/net/ethernet/freescale/enetc/Makefile index d0db33e5b6b7..74f7ac253b8b 100644 --- a/drivers/net/ethernet/freescale/enetc/Makefile +++ b/drivers/net/ethernet/freescale/enetc/Makefile @@ -3,7 +3,7 @@ common-objs := enetc.o enetc_cbdr.o enetc_ethtool.o obj-$(CONFIG_FSL_ENETC) += fsl-enetc.o -fsl-enetc-y := enetc_pf.o enetc_mdio.o $(common-objs) +fsl-enetc-y := enetc_pf.o $(common-objs) fsl-enetc-$(CONFIG_PCI_IOV) += enetc_msg.o fsl-enetc-$(CONFIG_FSL_ENETC_QOS) += enetc_qos.o diff --git a/drivers/net/ethernet/freescale/enetc/enetc_hw.h b/drivers/net/ethernet/freescale/enetc/enetc_hw.h index 8375cd886dba..62554f28ce07 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_hw.h +++ b/drivers/net/ethernet/freescale/enetc/enetc_hw.h @@ -200,6 +200,7 @@ enum enetc_bdr_type {TX, RX}; #define ENETC_PFPMR 0x1900 #define ENETC_PFPMR_PMACE BIT(1) #define ENETC_PFPMR_MWLM BIT(0) +#define ENETC_EMDIO_BASE 0x1c00 #define ENETC_PSIUMHFR0(n, err) (((err) ? 0x1d08 : 0x1d00) + (n) * 0x10) #define ENETC_PSIUMHFR1(n) (0x1d04 + (n) * 0x10) #define ENETC_PSIMMHFR0(n, err) (((err) ? 
0x1d00 : 0x1d08) + (n) * 0x10) diff --git a/drivers/net/ethernet/freescale/enetc/enetc_mdio.c b/drivers/net/ethernet/freescale/enetc/enetc_mdio.c index 149883c8f0b8..18c68e048d43 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_mdio.c +++ b/drivers/net/ethernet/freescale/enetc/enetc_mdio.c @@ -1,24 +1,35 @@ // SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) /* Copyright 2019 NXP */ +#include #include #include #include #include -#include "enetc_mdio.h" +#include "enetc_pf.h" -#define ENETC_MDIO_REG_OFFSET 0x1c00 #define ENETC_MDIO_CFG 0x0 /* MDIO configuration and status */ #define ENETC_MDIO_CTL 0x4 /* MDIO control */ #define ENETC_MDIO_DATA 0x8 /* MDIO data */ #define ENETC_MDIO_ADDR 0xc /* MDIO address */ -#define enetc_mdio_rd(hw, off) \ - enetc_port_rd(hw, ENETC_##off + ENETC_MDIO_REG_OFFSET) -#define enetc_mdio_wr(hw, off, val) \ - enetc_port_wr(hw, ENETC_##off + ENETC_MDIO_REG_OFFSET, val) -#define enetc_mdio_rd_reg(off) enetc_mdio_rd(hw, off) +static inline u32 _enetc_mdio_rd(struct enetc_mdio_priv *mdio_priv, int off) +{ + return enetc_port_rd(mdio_priv->hw, mdio_priv->mdio_base + off); +} + +static inline void _enetc_mdio_wr(struct enetc_mdio_priv *mdio_priv, int off, + u32 val) +{ + enetc_port_wr(mdio_priv->hw, mdio_priv->mdio_base + off, val); +} + +#define enetc_mdio_rd(mdio_priv, off) \ + _enetc_mdio_rd(mdio_priv, ENETC_##off) +#define enetc_mdio_wr(mdio_priv, off, val) \ + _enetc_mdio_wr(mdio_priv, ENETC_##off, val) +#define enetc_mdio_rd_reg(off) enetc_mdio_rd(mdio_priv, off) #define ENETC_MDC_DIV 258 @@ -35,7 +46,7 @@ #define MDIO_DATA(x) ((x) & 0xffff) #define TIMEOUT 1000 -static int enetc_mdio_wait_complete(struct enetc_hw *hw) +static int enetc_mdio_wait_complete(struct enetc_mdio_priv *mdio_priv) { u32 val; @@ -46,7 +57,6 @@ static int enetc_mdio_wait_complete(struct enetc_hw *hw) int enetc_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 value) { struct enetc_mdio_priv *mdio_priv = bus->priv; - struct enetc_hw *hw = mdio_priv->hw; u32 mdio_ctl, mdio_cfg; u16 dev_addr; int ret; @@ -61,39 +71,39 @@ int enetc_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 value) mdio_cfg &= ~MDIO_CFG_ENC45; } - enetc_mdio_wr(hw, MDIO_CFG, mdio_cfg); + enetc_mdio_wr(mdio_priv, MDIO_CFG, mdio_cfg); - ret = enetc_mdio_wait_complete(hw); + ret = enetc_mdio_wait_complete(mdio_priv); if (ret) return ret; /* set port and dev addr */ mdio_ctl = MDIO_CTL_PORT_ADDR(phy_id) | MDIO_CTL_DEV_ADDR(dev_addr); - enetc_mdio_wr(hw, MDIO_CTL, mdio_ctl); + enetc_mdio_wr(mdio_priv, MDIO_CTL, mdio_ctl); /* set the register address */ if (regnum & MII_ADDR_C45) { - enetc_mdio_wr(hw, MDIO_ADDR, regnum & 0xffff); + enetc_mdio_wr(mdio_priv, MDIO_ADDR, regnum & 0xffff); - ret = enetc_mdio_wait_complete(hw); + ret = enetc_mdio_wait_complete(mdio_priv); if (ret) return ret; } /* write the value */ - enetc_mdio_wr(hw, MDIO_DATA, MDIO_DATA(value)); + enetc_mdio_wr(mdio_priv, MDIO_DATA, MDIO_DATA(value)); - ret = enetc_mdio_wait_complete(hw); + ret = enetc_mdio_wait_complete(mdio_priv); if (ret) return ret; return 0; } +EXPORT_SYMBOL_GPL(enetc_mdio_write); int enetc_mdio_read(struct mii_bus *bus, int phy_id, int regnum) { struct enetc_mdio_priv *mdio_priv = bus->priv; - struct enetc_hw *hw = mdio_priv->hw; u32 mdio_ctl, mdio_cfg; u16 dev_addr, value; int ret; @@ -107,86 +117,56 @@ int enetc_mdio_read(struct mii_bus *bus, int phy_id, int regnum) mdio_cfg &= ~MDIO_CFG_ENC45; } - enetc_mdio_wr(hw, MDIO_CFG, mdio_cfg); + enetc_mdio_wr(mdio_priv, MDIO_CFG, mdio_cfg); - ret = 
enetc_mdio_wait_complete(hw); + ret = enetc_mdio_wait_complete(mdio_priv); if (ret) return ret; /* set port and device addr */ mdio_ctl = MDIO_CTL_PORT_ADDR(phy_id) | MDIO_CTL_DEV_ADDR(dev_addr); - enetc_mdio_wr(hw, MDIO_CTL, mdio_ctl); + enetc_mdio_wr(mdio_priv, MDIO_CTL, mdio_ctl); /* set the register address */ if (regnum & MII_ADDR_C45) { - enetc_mdio_wr(hw, MDIO_ADDR, regnum & 0xffff); + enetc_mdio_wr(mdio_priv, MDIO_ADDR, regnum & 0xffff); - ret = enetc_mdio_wait_complete(hw); + ret = enetc_mdio_wait_complete(mdio_priv); if (ret) return ret; } /* initiate the read */ - enetc_mdio_wr(hw, MDIO_CTL, mdio_ctl | MDIO_CTL_READ); + enetc_mdio_wr(mdio_priv, MDIO_CTL, mdio_ctl | MDIO_CTL_READ); - ret = enetc_mdio_wait_complete(hw); + ret = enetc_mdio_wait_complete(mdio_priv); if (ret) return ret; /* return all Fs if nothing was there */ - if (enetc_mdio_rd(hw, MDIO_CFG) & MDIO_CFG_RD_ER) { + if (enetc_mdio_rd(mdio_priv, MDIO_CFG) & MDIO_CFG_RD_ER) { dev_dbg(&bus->dev, "Error while reading PHY%d reg at %d.%hhu\n", phy_id, dev_addr, regnum); return 0xffff; } - value = enetc_mdio_rd(hw, MDIO_DATA) & 0xffff; + value = enetc_mdio_rd(mdio_priv, MDIO_DATA) & 0xffff; return value; } +EXPORT_SYMBOL_GPL(enetc_mdio_read); -int enetc_mdio_probe(struct enetc_pf *pf) +struct enetc_hw *enetc_hw_alloc(struct device *dev, void __iomem *port_regs) { - struct device *dev = &pf->si->pdev->dev; - struct enetc_mdio_priv *mdio_priv; - struct device_node *np; - struct mii_bus *bus; - int err; - - bus = devm_mdiobus_alloc_size(dev, sizeof(*mdio_priv)); - if (!bus) - return -ENOMEM; - - bus->name = "Freescale ENETC MDIO Bus"; - bus->read = enetc_mdio_read; - bus->write = enetc_mdio_write; - bus->parent = dev; - mdio_priv = bus->priv; - mdio_priv->hw = &pf->si->hw; - snprintf(bus->id, MII_BUS_ID_SIZE, "%s", dev_name(dev)); - - np = of_get_child_by_name(dev->of_node, "mdio"); - if (!np) { - dev_err(dev, "MDIO node missing\n"); - return -EINVAL; - } - - err = of_mdiobus_register(bus, np); - if (err) { - of_node_put(np); - dev_err(dev, "cannot register MDIO bus\n"); - return err; - } + struct enetc_hw *hw; - of_node_put(np); - pf->mdio = bus; + hw = devm_kzalloc(dev, sizeof(*hw), GFP_KERNEL); + if (!hw) + return ERR_PTR(-ENOMEM); - return 0; -} + hw->port = port_regs; -void enetc_mdio_remove(struct enetc_pf *pf) -{ - if (pf->mdio) - mdiobus_unregister(pf->mdio); + return hw; } +EXPORT_SYMBOL_GPL(enetc_hw_alloc); diff --git a/drivers/net/ethernet/freescale/enetc/enetc_mdio.h b/drivers/net/ethernet/freescale/enetc/enetc_mdio.h deleted file mode 100644 index 60c9a3889824..000000000000 --- a/drivers/net/ethernet/freescale/enetc/enetc_mdio.h +++ /dev/null @@ -1,12 +0,0 @@ -/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */ -/* Copyright 2019 NXP */ - -#include -#include "enetc_pf.h" - -struct enetc_mdio_priv { - struct enetc_hw *hw; -}; - -int enetc_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 value); -int enetc_mdio_read(struct mii_bus *bus, int phy_id, int regnum); diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c b/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c index fbd41ce01f06..87c0e969da40 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c +++ b/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c @@ -1,7 +1,8 @@ // SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) /* Copyright 2019 NXP */ +#include #include -#include "enetc_mdio.h" +#include "enetc_pf.h" #define ENETC_MDIO_DEV_ID 0xee01 #define ENETC_MDIO_DEV_NAME "FSL PCIe IE Central MDIO" @@ -13,17 
+14,29 @@ static int enetc_pci_mdio_probe(struct pci_dev *pdev, { struct enetc_mdio_priv *mdio_priv; struct device *dev = &pdev->dev; + void __iomem *port_regs; struct enetc_hw *hw; struct mii_bus *bus; int err; - hw = devm_kzalloc(dev, sizeof(*hw), GFP_KERNEL); - if (!hw) - return -ENOMEM; + port_regs = pci_iomap(pdev, 0, 0); + if (!port_regs) { + dev_err(dev, "iomap failed\n"); + err = -ENXIO; + goto err_ioremap; + } + + hw = enetc_hw_alloc(dev, port_regs); + if (IS_ERR(hw)) { + err = PTR_ERR(hw); + goto err_hw_alloc; + } bus = devm_mdiobus_alloc_size(dev, sizeof(*mdio_priv)); - if (!bus) - return -ENOMEM; + if (!bus) { + err = -ENOMEM; + goto err_mdiobus_alloc; + } bus->name = ENETC_MDIO_BUS_NAME; bus->read = enetc_mdio_read; @@ -31,13 +44,14 @@ static int enetc_pci_mdio_probe(struct pci_dev *pdev, bus->parent = dev; mdio_priv = bus->priv; mdio_priv->hw = hw; + mdio_priv->mdio_base = ENETC_EMDIO_BASE; snprintf(bus->id, MII_BUS_ID_SIZE, "%s", dev_name(dev)); pcie_flr(pdev); err = pci_enable_device_mem(pdev); if (err) { dev_err(dev, "device enable failed\n"); - return err; + goto err_pci_enable; } err = pci_request_region(pdev, 0, KBUILD_MODNAME); @@ -46,13 +60,6 @@ static int enetc_pci_mdio_probe(struct pci_dev *pdev, goto err_pci_mem_reg; } - hw->port = pci_iomap(pdev, 0, 0); - if (!hw->port) { - err = -ENXIO; - dev_err(dev, "iomap failed\n"); - goto err_ioremap; - } - err = of_mdiobus_register(bus, dev->of_node); if (err) goto err_mdiobus_reg; @@ -62,12 +69,14 @@ static int enetc_pci_mdio_probe(struct pci_dev *pdev, return 0; err_mdiobus_reg: - iounmap(mdio_priv->hw->port); -err_ioremap: pci_release_mem_regions(pdev); err_pci_mem_reg: pci_disable_device(pdev); - +err_pci_enable: +err_mdiobus_alloc: + iounmap(port_regs); +err_hw_alloc: +err_ioremap: return err; } diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c index e7482d483b28..fc0d7d99e9a1 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c +++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c @@ -2,6 +2,7 @@ /* Copyright 2017-2019 NXP */ #include +#include #include #include #include "enetc_pf.h" @@ -749,6 +750,52 @@ static void enetc_pf_netdev_setup(struct enetc_si *si, struct net_device *ndev, enetc_get_primary_mac_addr(&si->hw, ndev->dev_addr); } +static int enetc_mdio_probe(struct enetc_pf *pf) +{ + struct device *dev = &pf->si->pdev->dev; + struct enetc_mdio_priv *mdio_priv; + struct device_node *np; + struct mii_bus *bus; + int err; + + bus = devm_mdiobus_alloc_size(dev, sizeof(*mdio_priv)); + if (!bus) + return -ENOMEM; + + bus->name = "Freescale ENETC MDIO Bus"; + bus->read = enetc_mdio_read; + bus->write = enetc_mdio_write; + bus->parent = dev; + mdio_priv = bus->priv; + mdio_priv->hw = &pf->si->hw; + mdio_priv->mdio_base = ENETC_EMDIO_BASE; + snprintf(bus->id, MII_BUS_ID_SIZE, "%s", dev_name(dev)); + + np = of_get_child_by_name(dev->of_node, "mdio"); + if (!np) { + dev_err(dev, "MDIO node missing\n"); + return -EINVAL; + } + + err = of_mdiobus_register(bus, np); + if (err) { + of_node_put(np); + dev_err(dev, "cannot register MDIO bus\n"); + return err; + } + + of_node_put(np); + pf->mdio = bus; + + return 0; +} + +static void enetc_mdio_remove(struct enetc_pf *pf) +{ + if (pf->mdio) + mdiobus_unregister(pf->mdio); +} + static int enetc_of_get_phy(struct enetc_ndev_priv *priv) { struct enetc_pf *pf = enetc_si_priv(priv->si); diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.h b/drivers/net/ethernet/freescale/enetc/enetc_pf.h index
10dd1b53bb08..59e65a6f6c3e 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_pf.h +++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.h @@ -49,7 +49,3 @@ struct enetc_pf { int enetc_msg_psi_init(struct enetc_pf *pf); void enetc_msg_psi_free(struct enetc_pf *pf); void enetc_msg_handle_rxmsg(struct enetc_pf *pf, int mbox_id, u16 *status); - -/* MDIO */ -int enetc_mdio_probe(struct enetc_pf *pf); -void enetc_mdio_remove(struct enetc_pf *pf); diff --git a/include/linux/fsl/enetc_mdio.h b/include/linux/fsl/enetc_mdio.h new file mode 100644 index 000000000000..4875dd38af7e --- /dev/null +++ b/include/linux/fsl/enetc_mdio.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */ +/* Copyright 2019 NXP */ + +#ifndef _FSL_ENETC_MDIO_H_ +#define _FSL_ENETC_MDIO_H_ + +#include + +/* PCS registers */ +#define ENETC_PCS_LINK_TIMER1 0x12 +#define ENETC_PCS_LINK_TIMER1_VAL 0x06a0 +#define ENETC_PCS_LINK_TIMER2 0x13 +#define ENETC_PCS_LINK_TIMER2_VAL 0x0003 +#define ENETC_PCS_IF_MODE 0x14 +#define ENETC_PCS_IF_MODE_SGMII_EN BIT(0) +#define ENETC_PCS_IF_MODE_USE_SGMII_AN BIT(1) +#define ENETC_PCS_IF_MODE_SGMII_SPEED(x) (((x) << 2) & GENMASK(3, 2)) + +/* Not a mistake, the SerDes PLL needs to be set at 3.125 GHz by Reset + * Configuration Word (RCW, outside Linux control) for 2.5G SGMII mode. The PCS + * still thinks it's at gigabit. + */ +enum enetc_pcs_speed { + ENETC_PCS_SPEED_10 = 0, + ENETC_PCS_SPEED_100 = 1, + ENETC_PCS_SPEED_1000 = 2, + ENETC_PCS_SPEED_2500 = 2, +}; + +struct enetc_hw; + +struct enetc_mdio_priv { + struct enetc_hw *hw; + int mdio_base; +}; + +#if IS_REACHABLE(CONFIG_FSL_ENETC_MDIO) + +int enetc_mdio_read(struct mii_bus *bus, int phy_id, int regnum); +int enetc_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 value); +struct enetc_hw *enetc_hw_alloc(struct device *dev, void __iomem *port_regs); + +#else + +static inline int enetc_mdio_read(struct mii_bus *bus, int phy_id, int regnum) +{ return -EINVAL; } +static inline int enetc_mdio_write(struct mii_bus *bus, int phy_id, int regnum, + u16 value) +{ return -EINVAL; } +static inline struct enetc_hw *enetc_hw_alloc(struct device *dev, void __iomem *port_regs) +{ return ERR_PTR(-EINVAL); } + +#endif + +#endif -- cgit v1.2.3-71-gd317 From ee50d07c9fc8155b5a3c6c29eae1459a12cf2fb4 Mon Sep 17 00:00:00 2001 From: Vladimir Oltean Date: Mon, 6 Jan 2020 03:34:15 +0200 Subject: net: mscc: ocelot: make phy_mode a member of the common struct ocelot_port The Ocelot switchdev driver and the Felix DSA one need it for different reasons. Felix (or at least the VSC9959 instantiation in NXP LS1028A) is integrated with the traditional NXP Layerscape PCS design which does not support runtime configuration of SerDes protocol. So it needs to pre-validate the phy-mode from the device tree and prevent PHYLINK from attempting to change it. For this, it needs to cache it in a private variable. Signed-off-by: Vladimir Oltean Signed-off-by: David S.
Miller --- drivers/net/ethernet/mscc/ocelot.c | 7 ++++--- drivers/net/ethernet/mscc/ocelot.h | 1 - drivers/net/ethernet/mscc/ocelot_board.c | 4 ++-- include/soc/mscc/ocelot.h | 2 ++ 4 files changed, 8 insertions(+), 6 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c index 985b46d7e3d1..86d543ab1ab9 100644 --- a/drivers/net/ethernet/mscc/ocelot.c +++ b/drivers/net/ethernet/mscc/ocelot.c @@ -500,13 +500,14 @@ EXPORT_SYMBOL(ocelot_port_enable); static int ocelot_port_open(struct net_device *dev) { struct ocelot_port_private *priv = netdev_priv(dev); - struct ocelot *ocelot = priv->port.ocelot; + struct ocelot_port *ocelot_port = &priv->port; + struct ocelot *ocelot = ocelot_port->ocelot; int port = priv->chip_port; int err; if (priv->serdes) { err = phy_set_mode_ext(priv->serdes, PHY_MODE_ETHERNET, - priv->phy_mode); + ocelot_port->phy_mode); if (err) { netdev_err(dev, "Could not set mode of SerDes\n"); return err; @@ -514,7 +515,7 @@ static int ocelot_port_open(struct net_device *dev) } err = phy_connect_direct(dev, priv->phy, &ocelot_port_adjust_link, - priv->phy_mode); + ocelot_port->phy_mode); if (err) { netdev_err(dev, "Could not attach to PHY\n"); return err; diff --git a/drivers/net/ethernet/mscc/ocelot.h b/drivers/net/ethernet/mscc/ocelot.h index c259114c48fd..7b77d44ed7cf 100644 --- a/drivers/net/ethernet/mscc/ocelot.h +++ b/drivers/net/ethernet/mscc/ocelot.h @@ -68,7 +68,6 @@ struct ocelot_port_private { u8 vlan_aware; - phy_interface_t phy_mode; struct phy *serdes; struct ocelot_port_tc tc; diff --git a/drivers/net/ethernet/mscc/ocelot_board.c b/drivers/net/ethernet/mscc/ocelot_board.c index 2da8eee27e98..b38820849faa 100644 --- a/drivers/net/ethernet/mscc/ocelot_board.c +++ b/drivers/net/ethernet/mscc/ocelot_board.c @@ -402,9 +402,9 @@ static int mscc_ocelot_probe(struct platform_device *pdev) of_get_phy_mode(portnp, &phy_mode); - priv->phy_mode = phy_mode; + ocelot_port->phy_mode = phy_mode; - switch (priv->phy_mode) { + switch (ocelot_port->phy_mode) { case PHY_INTERFACE_MODE_NA: continue; case PHY_INTERFACE_MODE_SGMII: diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h index 64cbbbe74a36..068f96b1a83e 100644 --- a/include/soc/mscc/ocelot.h +++ b/include/soc/mscc/ocelot.h @@ -420,6 +420,8 @@ struct ocelot_port { u8 ptp_cmd; struct sk_buff_head tx_skbs; u8 ts_id; + + phy_interface_t phy_mode; }; struct ocelot { -- cgit v1.2.3-71-gd317 From 964ee5c82b770c2d8a5ccefeee3384c1061ce3ae Mon Sep 17 00:00:00 2001 From: Vladimir Oltean Date: Mon, 6 Jan 2020 03:34:16 +0200 Subject: net: mscc: ocelot: export ANA, DEV and QSYS registers to include/soc/mscc Since the Felix DSA driver is implementing its own PHYLINK instance due to SoC differences, it needs access to the few registers that are common, mainly for flow control. Signed-off-by: Vladimir Oltean Signed-off-by: David S. 
--- drivers/net/ethernet/mscc/ocelot.h | 6 +- drivers/net/ethernet/mscc/ocelot_ana.h | 625 -------------------------------- drivers/net/ethernet/mscc/ocelot_dev.h | 275 -------------- drivers/net/ethernet/mscc/ocelot_qsys.h | 270 -------------- include/soc/mscc/ocelot_ana.h | 625 ++++++++++++++++++++++++++++++++ include/soc/mscc/ocelot_dev.h | 275 ++++++++++++++ include/soc/mscc/ocelot_qsys.h | 270 ++++++++++++++ 7 files changed, 1173 insertions(+), 1173 deletions(-) delete mode 100644 drivers/net/ethernet/mscc/ocelot_ana.h delete mode 100644 drivers/net/ethernet/mscc/ocelot_dev.h delete mode 100644 drivers/net/ethernet/mscc/ocelot_qsys.h create mode 100644 include/soc/mscc/ocelot_ana.h create mode 100644 include/soc/mscc/ocelot_dev.h create mode 100644 include/soc/mscc/ocelot_qsys.h diff --git a/drivers/net/ethernet/mscc/ocelot.h b/drivers/net/ethernet/mscc/ocelot.h index 7b77d44ed7cf..04372ba72fec 100644 --- a/drivers/net/ethernet/mscc/ocelot.h +++ b/drivers/net/ethernet/mscc/ocelot.h @@ -18,11 +18,11 @@ #include <linux/ptp_clock_kernel.h> #include <linux/regmap.h> +#include <soc/mscc/ocelot_ana.h> #include <soc/mscc/ocelot_sys.h> +#include <soc/mscc/ocelot_dev.h> +#include <soc/mscc/ocelot_qsys.h> #include <soc/mscc/ocelot.h> -#include "ocelot_ana.h" -#include "ocelot_dev.h" -#include "ocelot_qsys.h" #include "ocelot_rew.h" #include "ocelot_qs.h" #include "ocelot_tc.h" diff --git a/drivers/net/ethernet/mscc/ocelot_ana.h b/drivers/net/ethernet/mscc/ocelot_ana.h deleted file mode 100644 index 841c6ec22b64..000000000000 --- a/drivers/net/ethernet/mscc/ocelot_ana.h +++ /dev/null @@ -1,625 +0,0 @@ -/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */ -/* - * Microsemi Ocelot Switch driver - * - * Copyright (c) 2017 Microsemi Corporation - */ - -#ifndef _MSCC_OCELOT_ANA_H_ -#define _MSCC_OCELOT_ANA_H_ - -#define ANA_ANAGEFIL_B_DOM_EN BIT(22) -#define ANA_ANAGEFIL_B_DOM_VAL BIT(21) -#define ANA_ANAGEFIL_AGE_LOCKED BIT(20) -#define ANA_ANAGEFIL_PID_EN BIT(19) -#define ANA_ANAGEFIL_PID_VAL(x) (((x) << 14) & GENMASK(18, 14)) -#define ANA_ANAGEFIL_PID_VAL_M GENMASK(18, 14) -#define ANA_ANAGEFIL_PID_VAL_X(x) (((x) & GENMASK(18, 14)) >> 14) -#define ANA_ANAGEFIL_VID_EN BIT(13) -#define ANA_ANAGEFIL_VID_VAL(x) ((x) & GENMASK(12, 0)) -#define ANA_ANAGEFIL_VID_VAL_M GENMASK(12, 0) - -#define ANA_STORMLIMIT_CFG_RSZ 0x4 - -#define ANA_STORMLIMIT_CFG_STORM_RATE(x) (((x) << 3) & GENMASK(6, 3)) -#define ANA_STORMLIMIT_CFG_STORM_RATE_M GENMASK(6, 3) -#define ANA_STORMLIMIT_CFG_STORM_RATE_X(x) (((x) & GENMASK(6, 3)) >> 3) -#define ANA_STORMLIMIT_CFG_STORM_UNIT BIT(2) -#define ANA_STORMLIMIT_CFG_STORM_MODE(x) ((x) & GENMASK(1, 0)) -#define ANA_STORMLIMIT_CFG_STORM_MODE_M GENMASK(1, 0) - -#define ANA_AUTOAGE_AGE_FAST BIT(21) -#define ANA_AUTOAGE_AGE_PERIOD(x) (((x) << 1) & GENMASK(20, 1)) -#define ANA_AUTOAGE_AGE_PERIOD_M GENMASK(20, 1) -#define ANA_AUTOAGE_AGE_PERIOD_X(x) (((x) & GENMASK(20, 1)) >> 1) -#define ANA_AUTOAGE_AUTOAGE_LOCKED BIT(0) - -#define ANA_MACTOPTIONS_REDUCED_TABLE BIT(1) -#define ANA_MACTOPTIONS_SHADOW BIT(0) - -#define ANA_AGENCTRL_FID_MASK(x) (((x) << 12) & GENMASK(23, 12)) -#define ANA_AGENCTRL_FID_MASK_M GENMASK(23, 12) -#define ANA_AGENCTRL_FID_MASK_X(x) (((x) & GENMASK(23, 12)) >> 12) -#define ANA_AGENCTRL_IGNORE_DMAC_FLAGS BIT(11) -#define ANA_AGENCTRL_IGNORE_SMAC_FLAGS BIT(10) -#define ANA_AGENCTRL_FLOOD_SPECIAL BIT(9) -#define ANA_AGENCTRL_FLOOD_IGNORE_VLAN BIT(8) -#define ANA_AGENCTRL_MIRROR_CPU BIT(7) -#define ANA_AGENCTRL_LEARN_CPU_COPY BIT(6) -#define ANA_AGENCTRL_LEARN_FWD_KILL BIT(5) -#define ANA_AGENCTRL_LEARN_IGNORE_VLAN BIT(4) -#define ANA_AGENCTRL_CPU_CPU_KILL_ENA BIT(3) -#define
ANA_AGENCTRL_GREEN_COUNT_MODE BIT(2) -#define ANA_AGENCTRL_YELLOW_COUNT_MODE BIT(1) -#define ANA_AGENCTRL_RED_COUNT_MODE BIT(0) - -#define ANA_FLOODING_RSZ 0x4 - -#define ANA_FLOODING_FLD_UNICAST(x) (((x) << 12) & GENMASK(17, 12)) -#define ANA_FLOODING_FLD_UNICAST_M GENMASK(17, 12) -#define ANA_FLOODING_FLD_UNICAST_X(x) (((x) & GENMASK(17, 12)) >> 12) -#define ANA_FLOODING_FLD_BROADCAST(x) (((x) << 6) & GENMASK(11, 6)) -#define ANA_FLOODING_FLD_BROADCAST_M GENMASK(11, 6) -#define ANA_FLOODING_FLD_BROADCAST_X(x) (((x) & GENMASK(11, 6)) >> 6) -#define ANA_FLOODING_FLD_MULTICAST(x) ((x) & GENMASK(5, 0)) -#define ANA_FLOODING_FLD_MULTICAST_M GENMASK(5, 0) - -#define ANA_FLOODING_IPMC_FLD_MC4_CTRL(x) (((x) << 18) & GENMASK(23, 18)) -#define ANA_FLOODING_IPMC_FLD_MC4_CTRL_M GENMASK(23, 18) -#define ANA_FLOODING_IPMC_FLD_MC4_CTRL_X(x) (((x) & GENMASK(23, 18)) >> 18) -#define ANA_FLOODING_IPMC_FLD_MC4_DATA(x) (((x) << 12) & GENMASK(17, 12)) -#define ANA_FLOODING_IPMC_FLD_MC4_DATA_M GENMASK(17, 12) -#define ANA_FLOODING_IPMC_FLD_MC4_DATA_X(x) (((x) & GENMASK(17, 12)) >> 12) -#define ANA_FLOODING_IPMC_FLD_MC6_CTRL(x) (((x) << 6) & GENMASK(11, 6)) -#define ANA_FLOODING_IPMC_FLD_MC6_CTRL_M GENMASK(11, 6) -#define ANA_FLOODING_IPMC_FLD_MC6_CTRL_X(x) (((x) & GENMASK(11, 6)) >> 6) -#define ANA_FLOODING_IPMC_FLD_MC6_DATA(x) ((x) & GENMASK(5, 0)) -#define ANA_FLOODING_IPMC_FLD_MC6_DATA_M GENMASK(5, 0) - -#define ANA_SFLOW_CFG_RSZ 0x4 - -#define ANA_SFLOW_CFG_SF_RATE(x) (((x) << 2) & GENMASK(13, 2)) -#define ANA_SFLOW_CFG_SF_RATE_M GENMASK(13, 2) -#define ANA_SFLOW_CFG_SF_RATE_X(x) (((x) & GENMASK(13, 2)) >> 2) -#define ANA_SFLOW_CFG_SF_SAMPLE_RX BIT(1) -#define ANA_SFLOW_CFG_SF_SAMPLE_TX BIT(0) - -#define ANA_PORT_MODE_RSZ 0x4 - -#define ANA_PORT_MODE_REDTAG_PARSE_CFG BIT(3) -#define ANA_PORT_MODE_VLAN_PARSE_CFG(x) (((x) << 1) & GENMASK(2, 1)) -#define ANA_PORT_MODE_VLAN_PARSE_CFG_M GENMASK(2, 1) -#define ANA_PORT_MODE_VLAN_PARSE_CFG_X(x) (((x) & GENMASK(2, 1)) >> 1) -#define ANA_PORT_MODE_L3_PARSE_CFG BIT(0) - -#define ANA_CUT_THRU_CFG_RSZ 0x4 - -#define ANA_PGID_PGID_RSZ 0x4 - -#define ANA_PGID_PGID_PGID(x) ((x) & GENMASK(11, 0)) -#define ANA_PGID_PGID_PGID_M GENMASK(11, 0) -#define ANA_PGID_PGID_CPUQ_DST_PGID(x) (((x) << 27) & GENMASK(29, 27)) -#define ANA_PGID_PGID_CPUQ_DST_PGID_M GENMASK(29, 27) -#define ANA_PGID_PGID_CPUQ_DST_PGID_X(x) (((x) & GENMASK(29, 27)) >> 27) - -#define ANA_TABLES_MACHDATA_VID(x) (((x) << 16) & GENMASK(28, 16)) -#define ANA_TABLES_MACHDATA_VID_M GENMASK(28, 16) -#define ANA_TABLES_MACHDATA_VID_X(x) (((x) & GENMASK(28, 16)) >> 16) -#define ANA_TABLES_MACHDATA_MACHDATA(x) ((x) & GENMASK(15, 0)) -#define ANA_TABLES_MACHDATA_MACHDATA_M GENMASK(15, 0) - -#define ANA_TABLES_STREAMDATA_SSID_VALID BIT(16) -#define ANA_TABLES_STREAMDATA_SSID(x) (((x) << 9) & GENMASK(15, 9)) -#define ANA_TABLES_STREAMDATA_SSID_M GENMASK(15, 9) -#define ANA_TABLES_STREAMDATA_SSID_X(x) (((x) & GENMASK(15, 9)) >> 9) -#define ANA_TABLES_STREAMDATA_SFID_VALID BIT(8) -#define ANA_TABLES_STREAMDATA_SFID(x) ((x) & GENMASK(7, 0)) -#define ANA_TABLES_STREAMDATA_SFID_M GENMASK(7, 0) - -#define ANA_TABLES_MACACCESS_MAC_CPU_COPY BIT(15) -#define ANA_TABLES_MACACCESS_SRC_KILL BIT(14) -#define ANA_TABLES_MACACCESS_IGNORE_VLAN BIT(13) -#define ANA_TABLES_MACACCESS_AGED_FLAG BIT(12) -#define ANA_TABLES_MACACCESS_VALID BIT(11) -#define ANA_TABLES_MACACCESS_ENTRYTYPE(x) (((x) << 9) & GENMASK(10, 9)) -#define ANA_TABLES_MACACCESS_ENTRYTYPE_M GENMASK(10, 9) -#define ANA_TABLES_MACACCESS_ENTRYTYPE_X(x) (((x) & GENMASK(10, 
9)) >> 9) -#define ANA_TABLES_MACACCESS_DEST_IDX(x) (((x) << 3) & GENMASK(8, 3)) -#define ANA_TABLES_MACACCESS_DEST_IDX_M GENMASK(8, 3) -#define ANA_TABLES_MACACCESS_DEST_IDX_X(x) (((x) & GENMASK(8, 3)) >> 3) -#define ANA_TABLES_MACACCESS_MAC_TABLE_CMD(x) ((x) & GENMASK(2, 0)) -#define ANA_TABLES_MACACCESS_MAC_TABLE_CMD_M GENMASK(2, 0) -#define MACACCESS_CMD_IDLE 0 -#define MACACCESS_CMD_LEARN 1 -#define MACACCESS_CMD_FORGET 2 -#define MACACCESS_CMD_AGE 3 -#define MACACCESS_CMD_GET_NEXT 4 -#define MACACCESS_CMD_INIT 5 -#define MACACCESS_CMD_READ 6 -#define MACACCESS_CMD_WRITE 7 - -#define ANA_TABLES_VLANACCESS_VLAN_PORT_MASK(x) (((x) << 2) & GENMASK(13, 2)) -#define ANA_TABLES_VLANACCESS_VLAN_PORT_MASK_M GENMASK(13, 2) -#define ANA_TABLES_VLANACCESS_VLAN_PORT_MASK_X(x) (((x) & GENMASK(13, 2)) >> 2) -#define ANA_TABLES_VLANACCESS_VLAN_TBL_CMD(x) ((x) & GENMASK(1, 0)) -#define ANA_TABLES_VLANACCESS_VLAN_TBL_CMD_M GENMASK(1, 0) -#define ANA_TABLES_VLANACCESS_CMD_IDLE 0x0 -#define ANA_TABLES_VLANACCESS_CMD_WRITE 0x2 -#define ANA_TABLES_VLANACCESS_CMD_INIT 0x3 - -#define ANA_TABLES_VLANTIDX_VLAN_SEC_FWD_ENA BIT(17) -#define ANA_TABLES_VLANTIDX_VLAN_FLOOD_DIS BIT(16) -#define ANA_TABLES_VLANTIDX_VLAN_PRIV_VLAN BIT(15) -#define ANA_TABLES_VLANTIDX_VLAN_LEARN_DISABLED BIT(14) -#define ANA_TABLES_VLANTIDX_VLAN_MIRROR BIT(13) -#define ANA_TABLES_VLANTIDX_VLAN_SRC_CHK BIT(12) -#define ANA_TABLES_VLANTIDX_V_INDEX(x) ((x) & GENMASK(11, 0)) -#define ANA_TABLES_VLANTIDX_V_INDEX_M GENMASK(11, 0) - -#define ANA_TABLES_ISDXACCESS_ISDX_PORT_MASK(x) (((x) << 2) & GENMASK(8, 2)) -#define ANA_TABLES_ISDXACCESS_ISDX_PORT_MASK_M GENMASK(8, 2) -#define ANA_TABLES_ISDXACCESS_ISDX_PORT_MASK_X(x) (((x) & GENMASK(8, 2)) >> 2) -#define ANA_TABLES_ISDXACCESS_ISDX_TBL_CMD(x) ((x) & GENMASK(1, 0)) -#define ANA_TABLES_ISDXACCESS_ISDX_TBL_CMD_M GENMASK(1, 0) - -#define ANA_TABLES_ISDXTIDX_ISDX_SDLBI(x) (((x) << 21) & GENMASK(28, 21)) -#define ANA_TABLES_ISDXTIDX_ISDX_SDLBI_M GENMASK(28, 21) -#define ANA_TABLES_ISDXTIDX_ISDX_SDLBI_X(x) (((x) & GENMASK(28, 21)) >> 21) -#define ANA_TABLES_ISDXTIDX_ISDX_MSTI(x) (((x) << 15) & GENMASK(20, 15)) -#define ANA_TABLES_ISDXTIDX_ISDX_MSTI_M GENMASK(20, 15) -#define ANA_TABLES_ISDXTIDX_ISDX_MSTI_X(x) (((x) & GENMASK(20, 15)) >> 15) -#define ANA_TABLES_ISDXTIDX_ISDX_ES0_KEY_ENA BIT(14) -#define ANA_TABLES_ISDXTIDX_ISDX_FORCE_ENA BIT(10) -#define ANA_TABLES_ISDXTIDX_ISDX_INDEX(x) ((x) & GENMASK(7, 0)) -#define ANA_TABLES_ISDXTIDX_ISDX_INDEX_M GENMASK(7, 0) - -#define ANA_TABLES_ENTRYLIM_RSZ 0x4 - -#define ANA_TABLES_ENTRYLIM_ENTRYLIM(x) (((x) << 14) & GENMASK(17, 14)) -#define ANA_TABLES_ENTRYLIM_ENTRYLIM_M GENMASK(17, 14) -#define ANA_TABLES_ENTRYLIM_ENTRYLIM_X(x) (((x) & GENMASK(17, 14)) >> 14) -#define ANA_TABLES_ENTRYLIM_ENTRYSTAT(x) ((x) & GENMASK(13, 0)) -#define ANA_TABLES_ENTRYLIM_ENTRYSTAT_M GENMASK(13, 0) - -#define ANA_TABLES_STREAMACCESS_GEN_REC_SEQ_NUM(x) (((x) << 4) & GENMASK(31, 4)) -#define ANA_TABLES_STREAMACCESS_GEN_REC_SEQ_NUM_M GENMASK(31, 4) -#define ANA_TABLES_STREAMACCESS_GEN_REC_SEQ_NUM_X(x) (((x) & GENMASK(31, 4)) >> 4) -#define ANA_TABLES_STREAMACCESS_SEQ_GEN_REC_ENA BIT(3) -#define ANA_TABLES_STREAMACCESS_GEN_REC_TYPE BIT(2) -#define ANA_TABLES_STREAMACCESS_STREAM_TBL_CMD(x) ((x) & GENMASK(1, 0)) -#define ANA_TABLES_STREAMACCESS_STREAM_TBL_CMD_M GENMASK(1, 0) - -#define ANA_TABLES_STREAMTIDX_SEQ_GEN_ERR_STATUS(x) (((x) << 30) & GENMASK(31, 30)) -#define ANA_TABLES_STREAMTIDX_SEQ_GEN_ERR_STATUS_M GENMASK(31, 30) -#define 
ANA_TABLES_STREAMTIDX_SEQ_GEN_ERR_STATUS_X(x) (((x) & GENMASK(31, 30)) >> 30) -#define ANA_TABLES_STREAMTIDX_S_INDEX(x) (((x) << 16) & GENMASK(22, 16)) -#define ANA_TABLES_STREAMTIDX_S_INDEX_M GENMASK(22, 16) -#define ANA_TABLES_STREAMTIDX_S_INDEX_X(x) (((x) & GENMASK(22, 16)) >> 16) -#define ANA_TABLES_STREAMTIDX_FORCE_SF_BEHAVIOUR BIT(14) -#define ANA_TABLES_STREAMTIDX_SEQ_HISTORY_LEN(x) (((x) << 8) & GENMASK(13, 8)) -#define ANA_TABLES_STREAMTIDX_SEQ_HISTORY_LEN_M GENMASK(13, 8) -#define ANA_TABLES_STREAMTIDX_SEQ_HISTORY_LEN_X(x) (((x) & GENMASK(13, 8)) >> 8) -#define ANA_TABLES_STREAMTIDX_RESET_ON_ROGUE BIT(7) -#define ANA_TABLES_STREAMTIDX_REDTAG_POP BIT(6) -#define ANA_TABLES_STREAMTIDX_STREAM_SPLIT BIT(5) -#define ANA_TABLES_STREAMTIDX_SEQ_SPACE_LOG2(x) ((x) & GENMASK(4, 0)) -#define ANA_TABLES_STREAMTIDX_SEQ_SPACE_LOG2_M GENMASK(4, 0) - -#define ANA_TABLES_SEQ_MASK_SPLIT_MASK(x) (((x) << 16) & GENMASK(22, 16)) -#define ANA_TABLES_SEQ_MASK_SPLIT_MASK_M GENMASK(22, 16) -#define ANA_TABLES_SEQ_MASK_SPLIT_MASK_X(x) (((x) & GENMASK(22, 16)) >> 16) -#define ANA_TABLES_SEQ_MASK_INPUT_PORT_MASK(x) ((x) & GENMASK(6, 0)) -#define ANA_TABLES_SEQ_MASK_INPUT_PORT_MASK_M GENMASK(6, 0) - -#define ANA_TABLES_SFID_MASK_IGR_PORT_MASK(x) (((x) << 1) & GENMASK(7, 1)) -#define ANA_TABLES_SFID_MASK_IGR_PORT_MASK_M GENMASK(7, 1) -#define ANA_TABLES_SFID_MASK_IGR_PORT_MASK_X(x) (((x) & GENMASK(7, 1)) >> 1) -#define ANA_TABLES_SFID_MASK_IGR_SRCPORT_MATCH_ENA BIT(0) - -#define ANA_TABLES_SFIDACCESS_IGR_PRIO_MATCH_ENA BIT(22) -#define ANA_TABLES_SFIDACCESS_IGR_PRIO(x) (((x) << 19) & GENMASK(21, 19)) -#define ANA_TABLES_SFIDACCESS_IGR_PRIO_M GENMASK(21, 19) -#define ANA_TABLES_SFIDACCESS_IGR_PRIO_X(x) (((x) & GENMASK(21, 19)) >> 19) -#define ANA_TABLES_SFIDACCESS_FORCE_BLOCK BIT(18) -#define ANA_TABLES_SFIDACCESS_MAX_SDU_LEN(x) (((x) << 2) & GENMASK(17, 2)) -#define ANA_TABLES_SFIDACCESS_MAX_SDU_LEN_M GENMASK(17, 2) -#define ANA_TABLES_SFIDACCESS_MAX_SDU_LEN_X(x) (((x) & GENMASK(17, 2)) >> 2) -#define ANA_TABLES_SFIDACCESS_SFID_TBL_CMD(x) ((x) & GENMASK(1, 0)) -#define ANA_TABLES_SFIDACCESS_SFID_TBL_CMD_M GENMASK(1, 0) - -#define ANA_TABLES_SFIDTIDX_SGID_VALID BIT(26) -#define ANA_TABLES_SFIDTIDX_SGID(x) (((x) << 18) & GENMASK(25, 18)) -#define ANA_TABLES_SFIDTIDX_SGID_M GENMASK(25, 18) -#define ANA_TABLES_SFIDTIDX_SGID_X(x) (((x) & GENMASK(25, 18)) >> 18) -#define ANA_TABLES_SFIDTIDX_POL_ENA BIT(17) -#define ANA_TABLES_SFIDTIDX_POL_IDX(x) (((x) << 8) & GENMASK(16, 8)) -#define ANA_TABLES_SFIDTIDX_POL_IDX_M GENMASK(16, 8) -#define ANA_TABLES_SFIDTIDX_POL_IDX_X(x) (((x) & GENMASK(16, 8)) >> 8) -#define ANA_TABLES_SFIDTIDX_SFID_INDEX(x) ((x) & GENMASK(7, 0)) -#define ANA_TABLES_SFIDTIDX_SFID_INDEX_M GENMASK(7, 0) - -#define ANA_MSTI_STATE_RSZ 0x4 - -#define ANA_OAM_UPM_LM_CNT_RSZ 0x4 - -#define ANA_SG_ACCESS_CTRL_SGID(x) ((x) & GENMASK(7, 0)) -#define ANA_SG_ACCESS_CTRL_SGID_M GENMASK(7, 0) -#define ANA_SG_ACCESS_CTRL_CONFIG_CHANGE BIT(28) - -#define ANA_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB(x) ((x) & GENMASK(15, 0)) -#define ANA_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB_M GENMASK(15, 0) -#define ANA_SG_CONFIG_REG_3_LIST_LENGTH(x) (((x) << 16) & GENMASK(18, 16)) -#define ANA_SG_CONFIG_REG_3_LIST_LENGTH_M GENMASK(18, 16) -#define ANA_SG_CONFIG_REG_3_LIST_LENGTH_X(x) (((x) & GENMASK(18, 16)) >> 16) -#define ANA_SG_CONFIG_REG_3_GATE_ENABLE BIT(20) -#define ANA_SG_CONFIG_REG_3_INIT_IPS(x) (((x) << 24) & GENMASK(27, 24)) -#define ANA_SG_CONFIG_REG_3_INIT_IPS_M GENMASK(27, 24) -#define ANA_SG_CONFIG_REG_3_INIT_IPS_X(x) (((x) & 
GENMASK(27, 24)) >> 24) -#define ANA_SG_CONFIG_REG_3_INIT_GATE_STATE BIT(28) - -#define ANA_SG_GCL_GS_CONFIG_RSZ 0x4 - -#define ANA_SG_GCL_GS_CONFIG_IPS(x) ((x) & GENMASK(3, 0)) -#define ANA_SG_GCL_GS_CONFIG_IPS_M GENMASK(3, 0) -#define ANA_SG_GCL_GS_CONFIG_GATE_STATE BIT(4) - -#define ANA_SG_GCL_TI_CONFIG_RSZ 0x4 - -#define ANA_SG_STATUS_REG_3_CFG_CHG_TIME_SEC_MSB(x) ((x) & GENMASK(15, 0)) -#define ANA_SG_STATUS_REG_3_CFG_CHG_TIME_SEC_MSB_M GENMASK(15, 0) -#define ANA_SG_STATUS_REG_3_GATE_STATE BIT(16) -#define ANA_SG_STATUS_REG_3_IPS(x) (((x) << 20) & GENMASK(23, 20)) -#define ANA_SG_STATUS_REG_3_IPS_M GENMASK(23, 20) -#define ANA_SG_STATUS_REG_3_IPS_X(x) (((x) & GENMASK(23, 20)) >> 20) -#define ANA_SG_STATUS_REG_3_CONFIG_PENDING BIT(24) - -#define ANA_PORT_VLAN_CFG_GSZ 0x100 - -#define ANA_PORT_VLAN_CFG_VLAN_VID_AS_ISDX BIT(21) -#define ANA_PORT_VLAN_CFG_VLAN_AWARE_ENA BIT(20) -#define ANA_PORT_VLAN_CFG_VLAN_POP_CNT(x) (((x) << 18) & GENMASK(19, 18)) -#define ANA_PORT_VLAN_CFG_VLAN_POP_CNT_M GENMASK(19, 18) -#define ANA_PORT_VLAN_CFG_VLAN_POP_CNT_X(x) (((x) & GENMASK(19, 18)) >> 18) -#define ANA_PORT_VLAN_CFG_VLAN_INNER_TAG_ENA BIT(17) -#define ANA_PORT_VLAN_CFG_VLAN_TAG_TYPE BIT(16) -#define ANA_PORT_VLAN_CFG_VLAN_DEI BIT(15) -#define ANA_PORT_VLAN_CFG_VLAN_PCP(x) (((x) << 12) & GENMASK(14, 12)) -#define ANA_PORT_VLAN_CFG_VLAN_PCP_M GENMASK(14, 12) -#define ANA_PORT_VLAN_CFG_VLAN_PCP_X(x) (((x) & GENMASK(14, 12)) >> 12) -#define ANA_PORT_VLAN_CFG_VLAN_VID(x) ((x) & GENMASK(11, 0)) -#define ANA_PORT_VLAN_CFG_VLAN_VID_M GENMASK(11, 0) - -#define ANA_PORT_DROP_CFG_GSZ 0x100 - -#define ANA_PORT_DROP_CFG_DROP_UNTAGGED_ENA BIT(6) -#define ANA_PORT_DROP_CFG_DROP_S_TAGGED_ENA BIT(5) -#define ANA_PORT_DROP_CFG_DROP_C_TAGGED_ENA BIT(4) -#define ANA_PORT_DROP_CFG_DROP_PRIO_S_TAGGED_ENA BIT(3) -#define ANA_PORT_DROP_CFG_DROP_PRIO_C_TAGGED_ENA BIT(2) -#define ANA_PORT_DROP_CFG_DROP_NULL_MAC_ENA BIT(1) -#define ANA_PORT_DROP_CFG_DROP_MC_SMAC_ENA BIT(0) - -#define ANA_PORT_QOS_CFG_GSZ 0x100 - -#define ANA_PORT_QOS_CFG_DP_DEFAULT_VAL BIT(8) -#define ANA_PORT_QOS_CFG_QOS_DEFAULT_VAL(x) (((x) << 5) & GENMASK(7, 5)) -#define ANA_PORT_QOS_CFG_QOS_DEFAULT_VAL_M GENMASK(7, 5) -#define ANA_PORT_QOS_CFG_QOS_DEFAULT_VAL_X(x) (((x) & GENMASK(7, 5)) >> 5) -#define ANA_PORT_QOS_CFG_QOS_DSCP_ENA BIT(4) -#define ANA_PORT_QOS_CFG_QOS_PCP_ENA BIT(3) -#define ANA_PORT_QOS_CFG_DSCP_TRANSLATE_ENA BIT(2) -#define ANA_PORT_QOS_CFG_DSCP_REWR_CFG(x) ((x) & GENMASK(1, 0)) -#define ANA_PORT_QOS_CFG_DSCP_REWR_CFG_M GENMASK(1, 0) - -#define ANA_PORT_VCAP_CFG_GSZ 0x100 - -#define ANA_PORT_VCAP_CFG_S1_ENA BIT(14) -#define ANA_PORT_VCAP_CFG_S1_DMAC_DIP_ENA(x) (((x) << 11) & GENMASK(13, 11)) -#define ANA_PORT_VCAP_CFG_S1_DMAC_DIP_ENA_M GENMASK(13, 11) -#define ANA_PORT_VCAP_CFG_S1_DMAC_DIP_ENA_X(x) (((x) & GENMASK(13, 11)) >> 11) -#define ANA_PORT_VCAP_CFG_S1_VLAN_INNER_TAG_ENA(x) (((x) << 8) & GENMASK(10, 8)) -#define ANA_PORT_VCAP_CFG_S1_VLAN_INNER_TAG_ENA_M GENMASK(10, 8) -#define ANA_PORT_VCAP_CFG_S1_VLAN_INNER_TAG_ENA_X(x) (((x) & GENMASK(10, 8)) >> 8) -#define ANA_PORT_VCAP_CFG_PAG_VAL(x) ((x) & GENMASK(7, 0)) -#define ANA_PORT_VCAP_CFG_PAG_VAL_M GENMASK(7, 0) - -#define ANA_PORT_VCAP_S1_KEY_CFG_GSZ 0x100 -#define ANA_PORT_VCAP_S1_KEY_CFG_RSZ 0x4 - -#define ANA_PORT_VCAP_S1_KEY_CFG_S1_KEY_IP6_CFG(x) (((x) << 4) & GENMASK(6, 4)) -#define ANA_PORT_VCAP_S1_KEY_CFG_S1_KEY_IP6_CFG_M GENMASK(6, 4) -#define ANA_PORT_VCAP_S1_KEY_CFG_S1_KEY_IP6_CFG_X(x) (((x) & GENMASK(6, 4)) >> 4) -#define ANA_PORT_VCAP_S1_KEY_CFG_S1_KEY_IP4_CFG(x) 
(((x) << 2) & GENMASK(3, 2)) -#define ANA_PORT_VCAP_S1_KEY_CFG_S1_KEY_IP4_CFG_M GENMASK(3, 2) -#define ANA_PORT_VCAP_S1_KEY_CFG_S1_KEY_IP4_CFG_X(x) (((x) & GENMASK(3, 2)) >> 2) -#define ANA_PORT_VCAP_S1_KEY_CFG_S1_KEY_OTHER_CFG(x) ((x) & GENMASK(1, 0)) -#define ANA_PORT_VCAP_S1_KEY_CFG_S1_KEY_OTHER_CFG_M GENMASK(1, 0) - -#define ANA_PORT_VCAP_S2_CFG_GSZ 0x100 - -#define ANA_PORT_VCAP_S2_CFG_S2_UDP_PAYLOAD_ENA(x) (((x) << 17) & GENMASK(18, 17)) -#define ANA_PORT_VCAP_S2_CFG_S2_UDP_PAYLOAD_ENA_M GENMASK(18, 17) -#define ANA_PORT_VCAP_S2_CFG_S2_UDP_PAYLOAD_ENA_X(x) (((x) & GENMASK(18, 17)) >> 17) -#define ANA_PORT_VCAP_S2_CFG_S2_ETYPE_PAYLOAD_ENA(x) (((x) << 15) & GENMASK(16, 15)) -#define ANA_PORT_VCAP_S2_CFG_S2_ETYPE_PAYLOAD_ENA_M GENMASK(16, 15) -#define ANA_PORT_VCAP_S2_CFG_S2_ETYPE_PAYLOAD_ENA_X(x) (((x) & GENMASK(16, 15)) >> 15) -#define ANA_PORT_VCAP_S2_CFG_S2_ENA BIT(14) -#define ANA_PORT_VCAP_S2_CFG_S2_SNAP_DIS(x) (((x) << 12) & GENMASK(13, 12)) -#define ANA_PORT_VCAP_S2_CFG_S2_SNAP_DIS_M GENMASK(13, 12) -#define ANA_PORT_VCAP_S2_CFG_S2_SNAP_DIS_X(x) (((x) & GENMASK(13, 12)) >> 12) -#define ANA_PORT_VCAP_S2_CFG_S2_ARP_DIS(x) (((x) << 10) & GENMASK(11, 10)) -#define ANA_PORT_VCAP_S2_CFG_S2_ARP_DIS_M GENMASK(11, 10) -#define ANA_PORT_VCAP_S2_CFG_S2_ARP_DIS_X(x) (((x) & GENMASK(11, 10)) >> 10) -#define ANA_PORT_VCAP_S2_CFG_S2_IP_TCPUDP_DIS(x) (((x) << 8) & GENMASK(9, 8)) -#define ANA_PORT_VCAP_S2_CFG_S2_IP_TCPUDP_DIS_M GENMASK(9, 8) -#define ANA_PORT_VCAP_S2_CFG_S2_IP_TCPUDP_DIS_X(x) (((x) & GENMASK(9, 8)) >> 8) -#define ANA_PORT_VCAP_S2_CFG_S2_IP_OTHER_DIS(x) (((x) << 6) & GENMASK(7, 6)) -#define ANA_PORT_VCAP_S2_CFG_S2_IP_OTHER_DIS_M GENMASK(7, 6) -#define ANA_PORT_VCAP_S2_CFG_S2_IP_OTHER_DIS_X(x) (((x) & GENMASK(7, 6)) >> 6) -#define ANA_PORT_VCAP_S2_CFG_S2_IP6_CFG(x) (((x) << 2) & GENMASK(5, 2)) -#define ANA_PORT_VCAP_S2_CFG_S2_IP6_CFG_M GENMASK(5, 2) -#define ANA_PORT_VCAP_S2_CFG_S2_IP6_CFG_X(x) (((x) & GENMASK(5, 2)) >> 2) -#define ANA_PORT_VCAP_S2_CFG_S2_OAM_DIS(x) ((x) & GENMASK(1, 0)) -#define ANA_PORT_VCAP_S2_CFG_S2_OAM_DIS_M GENMASK(1, 0) - -#define ANA_PORT_PCP_DEI_MAP_GSZ 0x100 -#define ANA_PORT_PCP_DEI_MAP_RSZ 0x4 - -#define ANA_PORT_PCP_DEI_MAP_DP_PCP_DEI_VAL BIT(3) -#define ANA_PORT_PCP_DEI_MAP_QOS_PCP_DEI_VAL(x) ((x) & GENMASK(2, 0)) -#define ANA_PORT_PCP_DEI_MAP_QOS_PCP_DEI_VAL_M GENMASK(2, 0) - -#define ANA_PORT_CPU_FWD_CFG_GSZ 0x100 - -#define ANA_PORT_CPU_FWD_CFG_CPU_VRAP_REDIR_ENA BIT(7) -#define ANA_PORT_CPU_FWD_CFG_CPU_MLD_REDIR_ENA BIT(6) -#define ANA_PORT_CPU_FWD_CFG_CPU_IGMP_REDIR_ENA BIT(5) -#define ANA_PORT_CPU_FWD_CFG_CPU_IPMC_CTRL_COPY_ENA BIT(4) -#define ANA_PORT_CPU_FWD_CFG_CPU_SRC_COPY_ENA BIT(3) -#define ANA_PORT_CPU_FWD_CFG_CPU_ALLBRIDGE_DROP_ENA BIT(2) -#define ANA_PORT_CPU_FWD_CFG_CPU_ALLBRIDGE_REDIR_ENA BIT(1) -#define ANA_PORT_CPU_FWD_CFG_CPU_OAM_ENA BIT(0) - -#define ANA_PORT_CPU_FWD_BPDU_CFG_GSZ 0x100 - -#define ANA_PORT_CPU_FWD_BPDU_CFG_BPDU_DROP_ENA(x) (((x) << 16) & GENMASK(31, 16)) -#define ANA_PORT_CPU_FWD_BPDU_CFG_BPDU_DROP_ENA_M GENMASK(31, 16) -#define ANA_PORT_CPU_FWD_BPDU_CFG_BPDU_DROP_ENA_X(x) (((x) & GENMASK(31, 16)) >> 16) -#define ANA_PORT_CPU_FWD_BPDU_CFG_BPDU_REDIR_ENA(x) ((x) & GENMASK(15, 0)) -#define ANA_PORT_CPU_FWD_BPDU_CFG_BPDU_REDIR_ENA_M GENMASK(15, 0) - -#define ANA_PORT_CPU_FWD_GARP_CFG_GSZ 0x100 - -#define ANA_PORT_CPU_FWD_GARP_CFG_GARP_DROP_ENA(x) (((x) << 16) & GENMASK(31, 16)) -#define ANA_PORT_CPU_FWD_GARP_CFG_GARP_DROP_ENA_M GENMASK(31, 16) -#define ANA_PORT_CPU_FWD_GARP_CFG_GARP_DROP_ENA_X(x) (((x) & GENMASK(31, 
16)) >> 16) -#define ANA_PORT_CPU_FWD_GARP_CFG_GARP_REDIR_ENA(x) ((x) & GENMASK(15, 0)) -#define ANA_PORT_CPU_FWD_GARP_CFG_GARP_REDIR_ENA_M GENMASK(15, 0) - -#define ANA_PORT_CPU_FWD_CCM_CFG_GSZ 0x100 - -#define ANA_PORT_CPU_FWD_CCM_CFG_CCM_DROP_ENA(x) (((x) << 16) & GENMASK(31, 16)) -#define ANA_PORT_CPU_FWD_CCM_CFG_CCM_DROP_ENA_M GENMASK(31, 16) -#define ANA_PORT_CPU_FWD_CCM_CFG_CCM_DROP_ENA_X(x) (((x) & GENMASK(31, 16)) >> 16) -#define ANA_PORT_CPU_FWD_CCM_CFG_CCM_REDIR_ENA(x) ((x) & GENMASK(15, 0)) -#define ANA_PORT_CPU_FWD_CCM_CFG_CCM_REDIR_ENA_M GENMASK(15, 0) - -#define ANA_PORT_PORT_CFG_GSZ 0x100 - -#define ANA_PORT_PORT_CFG_SRC_MIRROR_ENA BIT(15) -#define ANA_PORT_PORT_CFG_LIMIT_DROP BIT(14) -#define ANA_PORT_PORT_CFG_LIMIT_CPU BIT(13) -#define ANA_PORT_PORT_CFG_LOCKED_PORTMOVE_DROP BIT(12) -#define ANA_PORT_PORT_CFG_LOCKED_PORTMOVE_CPU BIT(11) -#define ANA_PORT_PORT_CFG_LEARNDROP BIT(10) -#define ANA_PORT_PORT_CFG_LEARNCPU BIT(9) -#define ANA_PORT_PORT_CFG_LEARNAUTO BIT(8) -#define ANA_PORT_PORT_CFG_LEARN_ENA BIT(7) -#define ANA_PORT_PORT_CFG_RECV_ENA BIT(6) -#define ANA_PORT_PORT_CFG_PORTID_VAL(x) (((x) << 2) & GENMASK(5, 2)) -#define ANA_PORT_PORT_CFG_PORTID_VAL_M GENMASK(5, 2) -#define ANA_PORT_PORT_CFG_PORTID_VAL_X(x) (((x) & GENMASK(5, 2)) >> 2) -#define ANA_PORT_PORT_CFG_USE_B_DOM_TBL BIT(1) -#define ANA_PORT_PORT_CFG_LSR_MODE BIT(0) - -#define ANA_PORT_POL_CFG_GSZ 0x100 - -#define ANA_PORT_POL_CFG_POL_CPU_REDIR_8021 BIT(19) -#define ANA_PORT_POL_CFG_POL_CPU_REDIR_IP BIT(18) -#define ANA_PORT_POL_CFG_PORT_POL_ENA BIT(17) -#define ANA_PORT_POL_CFG_QUEUE_POL_ENA(x) (((x) << 9) & GENMASK(16, 9)) -#define ANA_PORT_POL_CFG_QUEUE_POL_ENA_M GENMASK(16, 9) -#define ANA_PORT_POL_CFG_QUEUE_POL_ENA_X(x) (((x) & GENMASK(16, 9)) >> 9) -#define ANA_PORT_POL_CFG_POL_ORDER(x) ((x) & GENMASK(8, 0)) -#define ANA_PORT_POL_CFG_POL_ORDER_M GENMASK(8, 0) - -#define ANA_PORT_PTP_CFG_GSZ 0x100 - -#define ANA_PORT_PTP_CFG_PTP_BACKPLANE_MODE BIT(0) - -#define ANA_PORT_PTP_DLY1_CFG_GSZ 0x100 - -#define ANA_PORT_PTP_DLY2_CFG_GSZ 0x100 - -#define ANA_PORT_SFID_CFG_GSZ 0x100 -#define ANA_PORT_SFID_CFG_RSZ 0x4 - -#define ANA_PORT_SFID_CFG_SFID_VALID BIT(8) -#define ANA_PORT_SFID_CFG_SFID(x) ((x) & GENMASK(7, 0)) -#define ANA_PORT_SFID_CFG_SFID_M GENMASK(7, 0) - -#define ANA_PFC_PFC_CFG_GSZ 0x40 - -#define ANA_PFC_PFC_CFG_RX_PFC_ENA(x) (((x) << 2) & GENMASK(9, 2)) -#define ANA_PFC_PFC_CFG_RX_PFC_ENA_M GENMASK(9, 2) -#define ANA_PFC_PFC_CFG_RX_PFC_ENA_X(x) (((x) & GENMASK(9, 2)) >> 2) -#define ANA_PFC_PFC_CFG_FC_LINK_SPEED(x) ((x) & GENMASK(1, 0)) -#define ANA_PFC_PFC_CFG_FC_LINK_SPEED_M GENMASK(1, 0) - -#define ANA_PFC_PFC_TIMER_GSZ 0x40 -#define ANA_PFC_PFC_TIMER_RSZ 0x4 - -#define ANA_IPT_OAM_MEP_CFG_GSZ 0x8 - -#define ANA_IPT_OAM_MEP_CFG_MEP_IDX_P(x) (((x) << 6) & GENMASK(10, 6)) -#define ANA_IPT_OAM_MEP_CFG_MEP_IDX_P_M GENMASK(10, 6) -#define ANA_IPT_OAM_MEP_CFG_MEP_IDX_P_X(x) (((x) & GENMASK(10, 6)) >> 6) -#define ANA_IPT_OAM_MEP_CFG_MEP_IDX(x) (((x) << 1) & GENMASK(5, 1)) -#define ANA_IPT_OAM_MEP_CFG_MEP_IDX_M GENMASK(5, 1) -#define ANA_IPT_OAM_MEP_CFG_MEP_IDX_X(x) (((x) & GENMASK(5, 1)) >> 1) -#define ANA_IPT_OAM_MEP_CFG_MEP_IDX_ENA BIT(0) - -#define ANA_IPT_IPT_GSZ 0x8 - -#define ANA_IPT_IPT_IPT_CFG(x) (((x) << 15) & GENMASK(16, 15)) -#define ANA_IPT_IPT_IPT_CFG_M GENMASK(16, 15) -#define ANA_IPT_IPT_IPT_CFG_X(x) (((x) & GENMASK(16, 15)) >> 15) -#define ANA_IPT_IPT_ISDX_P(x) (((x) << 7) & GENMASK(14, 7)) -#define ANA_IPT_IPT_ISDX_P_M GENMASK(14, 7) -#define ANA_IPT_IPT_ISDX_P_X(x) (((x) & 
GENMASK(14, 7)) >> 7) -#define ANA_IPT_IPT_PPT_IDX(x) ((x) & GENMASK(6, 0)) -#define ANA_IPT_IPT_PPT_IDX_M GENMASK(6, 0) - -#define ANA_PPT_PPT_RSZ 0x4 - -#define ANA_FID_MAP_FID_MAP_RSZ 0x4 - -#define ANA_FID_MAP_FID_MAP_FID_C_VAL(x) (((x) << 6) & GENMASK(11, 6)) -#define ANA_FID_MAP_FID_MAP_FID_C_VAL_M GENMASK(11, 6) -#define ANA_FID_MAP_FID_MAP_FID_C_VAL_X(x) (((x) & GENMASK(11, 6)) >> 6) -#define ANA_FID_MAP_FID_MAP_FID_B_VAL(x) ((x) & GENMASK(5, 0)) -#define ANA_FID_MAP_FID_MAP_FID_B_VAL_M GENMASK(5, 0) - -#define ANA_AGGR_CFG_AC_RND_ENA BIT(7) -#define ANA_AGGR_CFG_AC_DMAC_ENA BIT(6) -#define ANA_AGGR_CFG_AC_SMAC_ENA BIT(5) -#define ANA_AGGR_CFG_AC_IP6_FLOW_LBL_ENA BIT(4) -#define ANA_AGGR_CFG_AC_IP6_TCPUDP_ENA BIT(3) -#define ANA_AGGR_CFG_AC_IP4_SIPDIP_ENA BIT(2) -#define ANA_AGGR_CFG_AC_IP4_TCPUDP_ENA BIT(1) -#define ANA_AGGR_CFG_AC_ISDX_ENA BIT(0) - -#define ANA_CPUQ_CFG_CPUQ_MLD(x) (((x) << 27) & GENMASK(29, 27)) -#define ANA_CPUQ_CFG_CPUQ_MLD_M GENMASK(29, 27) -#define ANA_CPUQ_CFG_CPUQ_MLD_X(x) (((x) & GENMASK(29, 27)) >> 27) -#define ANA_CPUQ_CFG_CPUQ_IGMP(x) (((x) << 24) & GENMASK(26, 24)) -#define ANA_CPUQ_CFG_CPUQ_IGMP_M GENMASK(26, 24) -#define ANA_CPUQ_CFG_CPUQ_IGMP_X(x) (((x) & GENMASK(26, 24)) >> 24) -#define ANA_CPUQ_CFG_CPUQ_IPMC_CTRL(x) (((x) << 21) & GENMASK(23, 21)) -#define ANA_CPUQ_CFG_CPUQ_IPMC_CTRL_M GENMASK(23, 21) -#define ANA_CPUQ_CFG_CPUQ_IPMC_CTRL_X(x) (((x) & GENMASK(23, 21)) >> 21) -#define ANA_CPUQ_CFG_CPUQ_ALLBRIDGE(x) (((x) << 18) & GENMASK(20, 18)) -#define ANA_CPUQ_CFG_CPUQ_ALLBRIDGE_M GENMASK(20, 18) -#define ANA_CPUQ_CFG_CPUQ_ALLBRIDGE_X(x) (((x) & GENMASK(20, 18)) >> 18) -#define ANA_CPUQ_CFG_CPUQ_LOCKED_PORTMOVE(x) (((x) << 15) & GENMASK(17, 15)) -#define ANA_CPUQ_CFG_CPUQ_LOCKED_PORTMOVE_M GENMASK(17, 15) -#define ANA_CPUQ_CFG_CPUQ_LOCKED_PORTMOVE_X(x) (((x) & GENMASK(17, 15)) >> 15) -#define ANA_CPUQ_CFG_CPUQ_SRC_COPY(x) (((x) << 12) & GENMASK(14, 12)) -#define ANA_CPUQ_CFG_CPUQ_SRC_COPY_M GENMASK(14, 12) -#define ANA_CPUQ_CFG_CPUQ_SRC_COPY_X(x) (((x) & GENMASK(14, 12)) >> 12) -#define ANA_CPUQ_CFG_CPUQ_MAC_COPY(x) (((x) << 9) & GENMASK(11, 9)) -#define ANA_CPUQ_CFG_CPUQ_MAC_COPY_M GENMASK(11, 9) -#define ANA_CPUQ_CFG_CPUQ_MAC_COPY_X(x) (((x) & GENMASK(11, 9)) >> 9) -#define ANA_CPUQ_CFG_CPUQ_LRN(x) (((x) << 6) & GENMASK(8, 6)) -#define ANA_CPUQ_CFG_CPUQ_LRN_M GENMASK(8, 6) -#define ANA_CPUQ_CFG_CPUQ_LRN_X(x) (((x) & GENMASK(8, 6)) >> 6) -#define ANA_CPUQ_CFG_CPUQ_MIRROR(x) (((x) << 3) & GENMASK(5, 3)) -#define ANA_CPUQ_CFG_CPUQ_MIRROR_M GENMASK(5, 3) -#define ANA_CPUQ_CFG_CPUQ_MIRROR_X(x) (((x) & GENMASK(5, 3)) >> 3) -#define ANA_CPUQ_CFG_CPUQ_SFLOW(x) ((x) & GENMASK(2, 0)) -#define ANA_CPUQ_CFG_CPUQ_SFLOW_M GENMASK(2, 0) - -#define ANA_CPUQ_8021_CFG_RSZ 0x4 - -#define ANA_CPUQ_8021_CFG_CPUQ_BPDU_VAL(x) (((x) << 6) & GENMASK(8, 6)) -#define ANA_CPUQ_8021_CFG_CPUQ_BPDU_VAL_M GENMASK(8, 6) -#define ANA_CPUQ_8021_CFG_CPUQ_BPDU_VAL_X(x) (((x) & GENMASK(8, 6)) >> 6) -#define ANA_CPUQ_8021_CFG_CPUQ_GARP_VAL(x) (((x) << 3) & GENMASK(5, 3)) -#define ANA_CPUQ_8021_CFG_CPUQ_GARP_VAL_M GENMASK(5, 3) -#define ANA_CPUQ_8021_CFG_CPUQ_GARP_VAL_X(x) (((x) & GENMASK(5, 3)) >> 3) -#define ANA_CPUQ_8021_CFG_CPUQ_CCM_VAL(x) ((x) & GENMASK(2, 0)) -#define ANA_CPUQ_8021_CFG_CPUQ_CCM_VAL_M GENMASK(2, 0) - -#define ANA_DSCP_CFG_RSZ 0x4 - -#define ANA_DSCP_CFG_DP_DSCP_VAL BIT(11) -#define ANA_DSCP_CFG_QOS_DSCP_VAL(x) (((x) << 8) & GENMASK(10, 8)) -#define ANA_DSCP_CFG_QOS_DSCP_VAL_M GENMASK(10, 8) -#define ANA_DSCP_CFG_QOS_DSCP_VAL_X(x) (((x) & GENMASK(10, 8)) >> 8) 
-#define ANA_DSCP_CFG_DSCP_TRANSLATE_VAL(x) (((x) << 2) & GENMASK(7, 2)) -#define ANA_DSCP_CFG_DSCP_TRANSLATE_VAL_M GENMASK(7, 2) -#define ANA_DSCP_CFG_DSCP_TRANSLATE_VAL_X(x) (((x) & GENMASK(7, 2)) >> 2) -#define ANA_DSCP_CFG_DSCP_TRUST_ENA BIT(1) -#define ANA_DSCP_CFG_DSCP_REWR_ENA BIT(0) - -#define ANA_DSCP_REWR_CFG_RSZ 0x4 - -#define ANA_VCAP_RNG_TYPE_CFG_RSZ 0x4 - -#define ANA_VCAP_RNG_VAL_CFG_RSZ 0x4 - -#define ANA_VCAP_RNG_VAL_CFG_VCAP_RNG_MIN_VAL(x) (((x) << 16) & GENMASK(31, 16)) -#define ANA_VCAP_RNG_VAL_CFG_VCAP_RNG_MIN_VAL_M GENMASK(31, 16) -#define ANA_VCAP_RNG_VAL_CFG_VCAP_RNG_MIN_VAL_X(x) (((x) & GENMASK(31, 16)) >> 16) -#define ANA_VCAP_RNG_VAL_CFG_VCAP_RNG_MAX_VAL(x) ((x) & GENMASK(15, 0)) -#define ANA_VCAP_RNG_VAL_CFG_VCAP_RNG_MAX_VAL_M GENMASK(15, 0) - -#define ANA_VRAP_CFG_VRAP_VLAN_AWARE_ENA BIT(12) -#define ANA_VRAP_CFG_VRAP_VID(x) ((x) & GENMASK(11, 0)) -#define ANA_VRAP_CFG_VRAP_VID_M GENMASK(11, 0) - -#define ANA_DISCARD_CFG_DROP_TAGGING_ISDX0 BIT(3) -#define ANA_DISCARD_CFG_DROP_CTRLPROT_ISDX0 BIT(2) -#define ANA_DISCARD_CFG_DROP_TAGGING_S2_ENA BIT(1) -#define ANA_DISCARD_CFG_DROP_CTRLPROT_S2_ENA BIT(0) - -#define ANA_FID_CFG_VID_MC_ENA BIT(0) - -#define ANA_POL_PIR_CFG_GSZ 0x20 - -#define ANA_POL_PIR_CFG_PIR_RATE(x) (((x) << 6) & GENMASK(20, 6)) -#define ANA_POL_PIR_CFG_PIR_RATE_M GENMASK(20, 6) -#define ANA_POL_PIR_CFG_PIR_RATE_X(x) (((x) & GENMASK(20, 6)) >> 6) -#define ANA_POL_PIR_CFG_PIR_BURST(x) ((x) & GENMASK(5, 0)) -#define ANA_POL_PIR_CFG_PIR_BURST_M GENMASK(5, 0) - -#define ANA_POL_CIR_CFG_GSZ 0x20 - -#define ANA_POL_CIR_CFG_CIR_RATE(x) (((x) << 6) & GENMASK(20, 6)) -#define ANA_POL_CIR_CFG_CIR_RATE_M GENMASK(20, 6) -#define ANA_POL_CIR_CFG_CIR_RATE_X(x) (((x) & GENMASK(20, 6)) >> 6) -#define ANA_POL_CIR_CFG_CIR_BURST(x) ((x) & GENMASK(5, 0)) -#define ANA_POL_CIR_CFG_CIR_BURST_M GENMASK(5, 0) - -#define ANA_POL_MODE_CFG_GSZ 0x20 - -#define ANA_POL_MODE_CFG_IPG_SIZE(x) (((x) << 5) & GENMASK(9, 5)) -#define ANA_POL_MODE_CFG_IPG_SIZE_M GENMASK(9, 5) -#define ANA_POL_MODE_CFG_IPG_SIZE_X(x) (((x) & GENMASK(9, 5)) >> 5) -#define ANA_POL_MODE_CFG_FRM_MODE(x) (((x) << 3) & GENMASK(4, 3)) -#define ANA_POL_MODE_CFG_FRM_MODE_M GENMASK(4, 3) -#define ANA_POL_MODE_CFG_FRM_MODE_X(x) (((x) & GENMASK(4, 3)) >> 3) -#define ANA_POL_MODE_CFG_DLB_COUPLED BIT(2) -#define ANA_POL_MODE_CFG_CIR_ENA BIT(1) -#define ANA_POL_MODE_CFG_OVERSHOOT_ENA BIT(0) - -#define ANA_POL_PIR_STATE_GSZ 0x20 - -#define ANA_POL_CIR_STATE_GSZ 0x20 - -#define ANA_POL_STATE_GSZ 0x20 - -#define ANA_POL_FLOWC_RSZ 0x4 - -#define ANA_POL_FLOWC_POL_FLOWC BIT(0) - -#define ANA_POL_HYST_POL_FC_HYST(x) (((x) << 4) & GENMASK(9, 4)) -#define ANA_POL_HYST_POL_FC_HYST_M GENMASK(9, 4) -#define ANA_POL_HYST_POL_FC_HYST_X(x) (((x) & GENMASK(9, 4)) >> 4) -#define ANA_POL_HYST_POL_STOP_HYST(x) ((x) & GENMASK(3, 0)) -#define ANA_POL_HYST_POL_STOP_HYST_M GENMASK(3, 0) - -#define ANA_POL_MISC_CFG_POL_CLOSE_ALL BIT(1) -#define ANA_POL_MISC_CFG_POL_LEAK_DIS BIT(0) - -#endif diff --git a/drivers/net/ethernet/mscc/ocelot_dev.h b/drivers/net/ethernet/mscc/ocelot_dev.h deleted file mode 100644 index 0a50d53bbd3f..000000000000 --- a/drivers/net/ethernet/mscc/ocelot_dev.h +++ /dev/null @@ -1,275 +0,0 @@ -/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */ -/* - * Microsemi Ocelot Switch driver - * - * Copyright (c) 2017 Microsemi Corporation - */ - -#ifndef _MSCC_OCELOT_DEV_H_ -#define _MSCC_OCELOT_DEV_H_ - -#define DEV_CLOCK_CFG 0x0 - -#define DEV_CLOCK_CFG_MAC_TX_RST BIT(7) -#define DEV_CLOCK_CFG_MAC_RX_RST BIT(6) -#define 
DEV_CLOCK_CFG_PCS_TX_RST BIT(5) -#define DEV_CLOCK_CFG_PCS_RX_RST BIT(4) -#define DEV_CLOCK_CFG_PORT_RST BIT(3) -#define DEV_CLOCK_CFG_PHY_RST BIT(2) -#define DEV_CLOCK_CFG_LINK_SPEED(x) ((x) & GENMASK(1, 0)) -#define DEV_CLOCK_CFG_LINK_SPEED_M GENMASK(1, 0) - -#define DEV_PORT_MISC 0x4 - -#define DEV_PORT_MISC_FWD_ERROR_ENA BIT(4) -#define DEV_PORT_MISC_FWD_PAUSE_ENA BIT(3) -#define DEV_PORT_MISC_FWD_CTRL_ENA BIT(2) -#define DEV_PORT_MISC_DEV_LOOP_ENA BIT(1) -#define DEV_PORT_MISC_HDX_FAST_DIS BIT(0) - -#define DEV_EVENTS 0x8 - -#define DEV_EEE_CFG 0xc - -#define DEV_EEE_CFG_EEE_ENA BIT(22) -#define DEV_EEE_CFG_EEE_TIMER_AGE(x) (((x) << 15) & GENMASK(21, 15)) -#define DEV_EEE_CFG_EEE_TIMER_AGE_M GENMASK(21, 15) -#define DEV_EEE_CFG_EEE_TIMER_AGE_X(x) (((x) & GENMASK(21, 15)) >> 15) -#define DEV_EEE_CFG_EEE_TIMER_WAKEUP(x) (((x) << 8) & GENMASK(14, 8)) -#define DEV_EEE_CFG_EEE_TIMER_WAKEUP_M GENMASK(14, 8) -#define DEV_EEE_CFG_EEE_TIMER_WAKEUP_X(x) (((x) & GENMASK(14, 8)) >> 8) -#define DEV_EEE_CFG_EEE_TIMER_HOLDOFF(x) (((x) << 1) & GENMASK(7, 1)) -#define DEV_EEE_CFG_EEE_TIMER_HOLDOFF_M GENMASK(7, 1) -#define DEV_EEE_CFG_EEE_TIMER_HOLDOFF_X(x) (((x) & GENMASK(7, 1)) >> 1) -#define DEV_EEE_CFG_PORT_LPI BIT(0) - -#define DEV_RX_PATH_DELAY 0x10 - -#define DEV_TX_PATH_DELAY 0x14 - -#define DEV_PTP_PREDICT_CFG 0x18 - -#define DEV_PTP_PREDICT_CFG_PTP_PHY_PREDICT_CFG(x) (((x) << 4) & GENMASK(11, 4)) -#define DEV_PTP_PREDICT_CFG_PTP_PHY_PREDICT_CFG_M GENMASK(11, 4) -#define DEV_PTP_PREDICT_CFG_PTP_PHY_PREDICT_CFG_X(x) (((x) & GENMASK(11, 4)) >> 4) -#define DEV_PTP_PREDICT_CFG_PTP_PHASE_PREDICT_CFG(x) ((x) & GENMASK(3, 0)) -#define DEV_PTP_PREDICT_CFG_PTP_PHASE_PREDICT_CFG_M GENMASK(3, 0) - -#define DEV_MAC_ENA_CFG 0x1c - -#define DEV_MAC_ENA_CFG_RX_ENA BIT(4) -#define DEV_MAC_ENA_CFG_TX_ENA BIT(0) - -#define DEV_MAC_MODE_CFG 0x20 - -#define DEV_MAC_MODE_CFG_FC_WORD_SYNC_ENA BIT(8) -#define DEV_MAC_MODE_CFG_GIGA_MODE_ENA BIT(4) -#define DEV_MAC_MODE_CFG_FDX_ENA BIT(0) - -#define DEV_MAC_MAXLEN_CFG 0x24 - -#define DEV_MAC_TAGS_CFG 0x28 - -#define DEV_MAC_TAGS_CFG_TAG_ID(x) (((x) << 16) & GENMASK(31, 16)) -#define DEV_MAC_TAGS_CFG_TAG_ID_M GENMASK(31, 16) -#define DEV_MAC_TAGS_CFG_TAG_ID_X(x) (((x) & GENMASK(31, 16)) >> 16) -#define DEV_MAC_TAGS_CFG_VLAN_LEN_AWR_ENA BIT(2) -#define DEV_MAC_TAGS_CFG_PB_ENA BIT(1) -#define DEV_MAC_TAGS_CFG_VLAN_AWR_ENA BIT(0) - -#define DEV_MAC_ADV_CHK_CFG 0x2c - -#define DEV_MAC_ADV_CHK_CFG_LEN_DROP_ENA BIT(0) - -#define DEV_MAC_IFG_CFG 0x30 - -#define DEV_MAC_IFG_CFG_RESTORE_OLD_IPG_CHECK BIT(17) -#define DEV_MAC_IFG_CFG_REDUCED_TX_IFG BIT(16) -#define DEV_MAC_IFG_CFG_TX_IFG(x) (((x) << 8) & GENMASK(12, 8)) -#define DEV_MAC_IFG_CFG_TX_IFG_M GENMASK(12, 8) -#define DEV_MAC_IFG_CFG_TX_IFG_X(x) (((x) & GENMASK(12, 8)) >> 8) -#define DEV_MAC_IFG_CFG_RX_IFG2(x) (((x) << 4) & GENMASK(7, 4)) -#define DEV_MAC_IFG_CFG_RX_IFG2_M GENMASK(7, 4) -#define DEV_MAC_IFG_CFG_RX_IFG2_X(x) (((x) & GENMASK(7, 4)) >> 4) -#define DEV_MAC_IFG_CFG_RX_IFG1(x) ((x) & GENMASK(3, 0)) -#define DEV_MAC_IFG_CFG_RX_IFG1_M GENMASK(3, 0) - -#define DEV_MAC_HDX_CFG 0x34 - -#define DEV_MAC_HDX_CFG_BYPASS_COL_SYNC BIT(26) -#define DEV_MAC_HDX_CFG_OB_ENA BIT(25) -#define DEV_MAC_HDX_CFG_WEXC_DIS BIT(24) -#define DEV_MAC_HDX_CFG_SEED(x) (((x) << 16) & GENMASK(23, 16)) -#define DEV_MAC_HDX_CFG_SEED_M GENMASK(23, 16) -#define DEV_MAC_HDX_CFG_SEED_X(x) (((x) & GENMASK(23, 16)) >> 16) -#define DEV_MAC_HDX_CFG_SEED_LOAD BIT(12) -#define DEV_MAC_HDX_CFG_RETRY_AFTER_EXC_COL_ENA BIT(8) -#define 
DEV_MAC_HDX_CFG_LATE_COL_POS(x) ((x) & GENMASK(6, 0)) -#define DEV_MAC_HDX_CFG_LATE_COL_POS_M GENMASK(6, 0) - -#define DEV_MAC_DBG_CFG 0x38 - -#define DEV_MAC_DBG_CFG_TBI_MODE BIT(4) -#define DEV_MAC_DBG_CFG_IFG_CRS_EXT_CHK_ENA BIT(0) - -#define DEV_MAC_FC_MAC_LOW_CFG 0x3c - -#define DEV_MAC_FC_MAC_HIGH_CFG 0x40 - -#define DEV_MAC_STICKY 0x44 - -#define DEV_MAC_STICKY_RX_IPG_SHRINK_STICKY BIT(9) -#define DEV_MAC_STICKY_RX_PREAM_SHRINK_STICKY BIT(8) -#define DEV_MAC_STICKY_RX_CARRIER_EXT_STICKY BIT(7) -#define DEV_MAC_STICKY_RX_CARRIER_EXT_ERR_STICKY BIT(6) -#define DEV_MAC_STICKY_RX_JUNK_STICKY BIT(5) -#define DEV_MAC_STICKY_TX_RETRANSMIT_STICKY BIT(4) -#define DEV_MAC_STICKY_TX_JAM_STICKY BIT(3) -#define DEV_MAC_STICKY_TX_FIFO_OFLW_STICKY BIT(2) -#define DEV_MAC_STICKY_TX_FRM_LEN_OVR_STICKY BIT(1) -#define DEV_MAC_STICKY_TX_ABORT_STICKY BIT(0) - -#define PCS1G_CFG 0x48 - -#define PCS1G_CFG_LINK_STATUS_TYPE BIT(4) -#define PCS1G_CFG_AN_LINK_CTRL_ENA BIT(1) -#define PCS1G_CFG_PCS_ENA BIT(0) - -#define PCS1G_MODE_CFG 0x4c - -#define PCS1G_MODE_CFG_UNIDIR_MODE_ENA BIT(4) -#define PCS1G_MODE_CFG_SGMII_MODE_ENA BIT(0) - -#define PCS1G_SD_CFG 0x50 - -#define PCS1G_SD_CFG_SD_SEL BIT(8) -#define PCS1G_SD_CFG_SD_POL BIT(4) -#define PCS1G_SD_CFG_SD_ENA BIT(0) - -#define PCS1G_ANEG_CFG 0x54 - -#define PCS1G_ANEG_CFG_ADV_ABILITY(x) (((x) << 16) & GENMASK(31, 16)) -#define PCS1G_ANEG_CFG_ADV_ABILITY_M GENMASK(31, 16) -#define PCS1G_ANEG_CFG_ADV_ABILITY_X(x) (((x) & GENMASK(31, 16)) >> 16) -#define PCS1G_ANEG_CFG_SW_RESOLVE_ENA BIT(8) -#define PCS1G_ANEG_CFG_ANEG_RESTART_ONE_SHOT BIT(1) -#define PCS1G_ANEG_CFG_ANEG_ENA BIT(0) - -#define PCS1G_ANEG_NP_CFG 0x58 - -#define PCS1G_ANEG_NP_CFG_NP_TX(x) (((x) << 16) & GENMASK(31, 16)) -#define PCS1G_ANEG_NP_CFG_NP_TX_M GENMASK(31, 16) -#define PCS1G_ANEG_NP_CFG_NP_TX_X(x) (((x) & GENMASK(31, 16)) >> 16) -#define PCS1G_ANEG_NP_CFG_NP_LOADED_ONE_SHOT BIT(0) - -#define PCS1G_LB_CFG 0x5c - -#define PCS1G_LB_CFG_RA_ENA BIT(4) -#define PCS1G_LB_CFG_GMII_PHY_LB_ENA BIT(1) -#define PCS1G_LB_CFG_TBI_HOST_LB_ENA BIT(0) - -#define PCS1G_DBG_CFG 0x60 - -#define PCS1G_DBG_CFG_UDLT BIT(0) - -#define PCS1G_CDET_CFG 0x64 - -#define PCS1G_CDET_CFG_CDET_ENA BIT(0) - -#define PCS1G_ANEG_STATUS 0x68 - -#define PCS1G_ANEG_STATUS_LP_ADV_ABILITY(x) (((x) << 16) & GENMASK(31, 16)) -#define PCS1G_ANEG_STATUS_LP_ADV_ABILITY_M GENMASK(31, 16) -#define PCS1G_ANEG_STATUS_LP_ADV_ABILITY_X(x) (((x) & GENMASK(31, 16)) >> 16) -#define PCS1G_ANEG_STATUS_PR BIT(4) -#define PCS1G_ANEG_STATUS_PAGE_RX_STICKY BIT(3) -#define PCS1G_ANEG_STATUS_ANEG_COMPLETE BIT(0) - -#define PCS1G_ANEG_NP_STATUS 0x6c - -#define PCS1G_LINK_STATUS 0x70 - -#define PCS1G_LINK_STATUS_DELAY_VAR(x) (((x) << 12) & GENMASK(15, 12)) -#define PCS1G_LINK_STATUS_DELAY_VAR_M GENMASK(15, 12) -#define PCS1G_LINK_STATUS_DELAY_VAR_X(x) (((x) & GENMASK(15, 12)) >> 12) -#define PCS1G_LINK_STATUS_SIGNAL_DETECT BIT(8) -#define PCS1G_LINK_STATUS_LINK_STATUS BIT(4) -#define PCS1G_LINK_STATUS_SYNC_STATUS BIT(0) - -#define PCS1G_LINK_DOWN_CNT 0x74 - -#define PCS1G_STICKY 0x78 - -#define PCS1G_STICKY_LINK_DOWN_STICKY BIT(4) -#define PCS1G_STICKY_OUT_OF_SYNC_STICKY BIT(0) - -#define PCS1G_DEBUG_STATUS 0x7c - -#define PCS1G_LPI_CFG 0x80 - -#define PCS1G_LPI_CFG_QSGMII_MS_SEL BIT(20) -#define PCS1G_LPI_CFG_RX_LPI_OUT_DIS BIT(17) -#define PCS1G_LPI_CFG_LPI_TESTMODE BIT(16) -#define PCS1G_LPI_CFG_LPI_RX_WTIM(x) (((x) << 4) & GENMASK(5, 4)) -#define PCS1G_LPI_CFG_LPI_RX_WTIM_M GENMASK(5, 4) -#define PCS1G_LPI_CFG_LPI_RX_WTIM_X(x) (((x) & GENMASK(5, 
4)) >> 4) -#define PCS1G_LPI_CFG_TX_ASSERT_LPIDLE BIT(0) - -#define PCS1G_LPI_WAKE_ERROR_CNT 0x84 - -#define PCS1G_LPI_STATUS 0x88 - -#define PCS1G_LPI_STATUS_RX_LPI_FAIL BIT(16) -#define PCS1G_LPI_STATUS_RX_LPI_EVENT_STICKY BIT(12) -#define PCS1G_LPI_STATUS_RX_QUIET BIT(9) -#define PCS1G_LPI_STATUS_RX_LPI_MODE BIT(8) -#define PCS1G_LPI_STATUS_TX_LPI_EVENT_STICKY BIT(4) -#define PCS1G_LPI_STATUS_TX_QUIET BIT(1) -#define PCS1G_LPI_STATUS_TX_LPI_MODE BIT(0) - -#define PCS1G_TSTPAT_MODE_CFG 0x8c - -#define PCS1G_TSTPAT_STATUS 0x90 - -#define PCS1G_TSTPAT_STATUS_JTP_ERR_CNT(x) (((x) << 8) & GENMASK(15, 8)) -#define PCS1G_TSTPAT_STATUS_JTP_ERR_CNT_M GENMASK(15, 8) -#define PCS1G_TSTPAT_STATUS_JTP_ERR_CNT_X(x) (((x) & GENMASK(15, 8)) >> 8) -#define PCS1G_TSTPAT_STATUS_JTP_ERR BIT(4) -#define PCS1G_TSTPAT_STATUS_JTP_LOCK BIT(0) - -#define DEV_PCS_FX100_CFG 0x94 - -#define DEV_PCS_FX100_CFG_SD_SEL BIT(26) -#define DEV_PCS_FX100_CFG_SD_POL BIT(25) -#define DEV_PCS_FX100_CFG_SD_ENA BIT(24) -#define DEV_PCS_FX100_CFG_LOOPBACK_ENA BIT(20) -#define DEV_PCS_FX100_CFG_SWAP_MII_ENA BIT(16) -#define DEV_PCS_FX100_CFG_RXBITSEL(x) (((x) << 12) & GENMASK(15, 12)) -#define DEV_PCS_FX100_CFG_RXBITSEL_M GENMASK(15, 12) -#define DEV_PCS_FX100_CFG_RXBITSEL_X(x) (((x) & GENMASK(15, 12)) >> 12) -#define DEV_PCS_FX100_CFG_SIGDET_CFG(x) (((x) << 9) & GENMASK(10, 9)) -#define DEV_PCS_FX100_CFG_SIGDET_CFG_M GENMASK(10, 9) -#define DEV_PCS_FX100_CFG_SIGDET_CFG_X(x) (((x) & GENMASK(10, 9)) >> 9) -#define DEV_PCS_FX100_CFG_LINKHYST_TM_ENA BIT(8) -#define DEV_PCS_FX100_CFG_LINKHYSTTIMER(x) (((x) << 4) & GENMASK(7, 4)) -#define DEV_PCS_FX100_CFG_LINKHYSTTIMER_M GENMASK(7, 4) -#define DEV_PCS_FX100_CFG_LINKHYSTTIMER_X(x) (((x) & GENMASK(7, 4)) >> 4) -#define DEV_PCS_FX100_CFG_UNIDIR_MODE_ENA BIT(3) -#define DEV_PCS_FX100_CFG_FEFCHK_ENA BIT(2) -#define DEV_PCS_FX100_CFG_FEFGEN_ENA BIT(1) -#define DEV_PCS_FX100_CFG_PCS_ENA BIT(0) - -#define DEV_PCS_FX100_STATUS 0x98 - -#define DEV_PCS_FX100_STATUS_EDGE_POS_PTP(x) (((x) << 8) & GENMASK(11, 8)) -#define DEV_PCS_FX100_STATUS_EDGE_POS_PTP_M GENMASK(11, 8) -#define DEV_PCS_FX100_STATUS_EDGE_POS_PTP_X(x) (((x) & GENMASK(11, 8)) >> 8) -#define DEV_PCS_FX100_STATUS_PCS_ERROR_STICKY BIT(7) -#define DEV_PCS_FX100_STATUS_FEF_FOUND_STICKY BIT(6) -#define DEV_PCS_FX100_STATUS_SSD_ERROR_STICKY BIT(5) -#define DEV_PCS_FX100_STATUS_SYNC_LOST_STICKY BIT(4) -#define DEV_PCS_FX100_STATUS_FEF_STATUS BIT(2) -#define DEV_PCS_FX100_STATUS_SIGNAL_DETECT BIT(1) -#define DEV_PCS_FX100_STATUS_SYNC_STATUS BIT(0) - -#endif diff --git a/drivers/net/ethernet/mscc/ocelot_qsys.h b/drivers/net/ethernet/mscc/ocelot_qsys.h deleted file mode 100644 index d8c63aa761be..000000000000 --- a/drivers/net/ethernet/mscc/ocelot_qsys.h +++ /dev/null @@ -1,270 +0,0 @@ -/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */ -/* - * Microsemi Ocelot Switch driver - * - * Copyright (c) 2017 Microsemi Corporation - */ - -#ifndef _MSCC_OCELOT_QSYS_H_ -#define _MSCC_OCELOT_QSYS_H_ - -#define QSYS_PORT_MODE_RSZ 0x4 - -#define QSYS_PORT_MODE_DEQUEUE_DIS BIT(1) -#define QSYS_PORT_MODE_DEQUEUE_LATE BIT(0) - -#define QSYS_SWITCH_PORT_MODE_RSZ 0x4 - -#define QSYS_SWITCH_PORT_MODE_PORT_ENA BIT(14) -#define QSYS_SWITCH_PORT_MODE_SCH_NEXT_CFG(x) (((x) << 11) & GENMASK(13, 11)) -#define QSYS_SWITCH_PORT_MODE_SCH_NEXT_CFG_M GENMASK(13, 11) -#define QSYS_SWITCH_PORT_MODE_SCH_NEXT_CFG_X(x) (((x) & GENMASK(13, 11)) >> 11) -#define QSYS_SWITCH_PORT_MODE_YEL_RSRVD BIT(10) -#define QSYS_SWITCH_PORT_MODE_INGRESS_DROP_MODE BIT(9) -#define 
QSYS_SWITCH_PORT_MODE_TX_PFC_ENA(x) (((x) << 1) & GENMASK(8, 1)) -#define QSYS_SWITCH_PORT_MODE_TX_PFC_ENA_M GENMASK(8, 1) -#define QSYS_SWITCH_PORT_MODE_TX_PFC_ENA_X(x) (((x) & GENMASK(8, 1)) >> 1) -#define QSYS_SWITCH_PORT_MODE_TX_PFC_MODE BIT(0) - -#define QSYS_STAT_CNT_CFG_TX_GREEN_CNT_MODE BIT(5) -#define QSYS_STAT_CNT_CFG_TX_YELLOW_CNT_MODE BIT(4) -#define QSYS_STAT_CNT_CFG_DROP_GREEN_CNT_MODE BIT(3) -#define QSYS_STAT_CNT_CFG_DROP_YELLOW_CNT_MODE BIT(2) -#define QSYS_STAT_CNT_CFG_DROP_COUNT_ONCE BIT(1) -#define QSYS_STAT_CNT_CFG_DROP_COUNT_EGRESS BIT(0) - -#define QSYS_EEE_CFG_RSZ 0x4 - -#define QSYS_EEE_THRES_EEE_HIGH_BYTES(x) (((x) << 8) & GENMASK(15, 8)) -#define QSYS_EEE_THRES_EEE_HIGH_BYTES_M GENMASK(15, 8) -#define QSYS_EEE_THRES_EEE_HIGH_BYTES_X(x) (((x) & GENMASK(15, 8)) >> 8) -#define QSYS_EEE_THRES_EEE_HIGH_FRAMES(x) ((x) & GENMASK(7, 0)) -#define QSYS_EEE_THRES_EEE_HIGH_FRAMES_M GENMASK(7, 0) - -#define QSYS_SW_STATUS_RSZ 0x4 - -#define QSYS_EXT_CPU_CFG_EXT_CPU_PORT(x) (((x) << 8) & GENMASK(12, 8)) -#define QSYS_EXT_CPU_CFG_EXT_CPU_PORT_M GENMASK(12, 8) -#define QSYS_EXT_CPU_CFG_EXT_CPU_PORT_X(x) (((x) & GENMASK(12, 8)) >> 8) -#define QSYS_EXT_CPU_CFG_EXT_CPUQ_MSK(x) ((x) & GENMASK(7, 0)) -#define QSYS_EXT_CPU_CFG_EXT_CPUQ_MSK_M GENMASK(7, 0) - -#define QSYS_QMAP_GSZ 0x4 - -#define QSYS_QMAP_SE_BASE(x) (((x) << 5) & GENMASK(12, 5)) -#define QSYS_QMAP_SE_BASE_M GENMASK(12, 5) -#define QSYS_QMAP_SE_BASE_X(x) (((x) & GENMASK(12, 5)) >> 5) -#define QSYS_QMAP_SE_IDX_SEL(x) (((x) << 2) & GENMASK(4, 2)) -#define QSYS_QMAP_SE_IDX_SEL_M GENMASK(4, 2) -#define QSYS_QMAP_SE_IDX_SEL_X(x) (((x) & GENMASK(4, 2)) >> 2) -#define QSYS_QMAP_SE_INP_SEL(x) ((x) & GENMASK(1, 0)) -#define QSYS_QMAP_SE_INP_SEL_M GENMASK(1, 0) - -#define QSYS_ISDX_SGRP_GSZ 0x4 - -#define QSYS_TIMED_FRAME_ENTRY_GSZ 0x4 - -#define QSYS_TFRM_MISC_TIMED_CANCEL_SLOT(x) (((x) << 9) & GENMASK(18, 9)) -#define QSYS_TFRM_MISC_TIMED_CANCEL_SLOT_M GENMASK(18, 9) -#define QSYS_TFRM_MISC_TIMED_CANCEL_SLOT_X(x) (((x) & GENMASK(18, 9)) >> 9) -#define QSYS_TFRM_MISC_TIMED_CANCEL_1SHOT BIT(8) -#define QSYS_TFRM_MISC_TIMED_SLOT_MODE_MC BIT(7) -#define QSYS_TFRM_MISC_TIMED_ENTRY_FAST_CNT(x) ((x) & GENMASK(6, 0)) -#define QSYS_TFRM_MISC_TIMED_ENTRY_FAST_CNT_M GENMASK(6, 0) - -#define QSYS_RED_PROFILE_RSZ 0x4 - -#define QSYS_RED_PROFILE_WM_RED_LOW(x) (((x) << 8) & GENMASK(15, 8)) -#define QSYS_RED_PROFILE_WM_RED_LOW_M GENMASK(15, 8) -#define QSYS_RED_PROFILE_WM_RED_LOW_X(x) (((x) & GENMASK(15, 8)) >> 8) -#define QSYS_RED_PROFILE_WM_RED_HIGH(x) ((x) & GENMASK(7, 0)) -#define QSYS_RED_PROFILE_WM_RED_HIGH_M GENMASK(7, 0) - -#define QSYS_RES_CFG_GSZ 0x8 - -#define QSYS_RES_STAT_GSZ 0x8 - -#define QSYS_RES_STAT_INUSE(x) (((x) << 12) & GENMASK(23, 12)) -#define QSYS_RES_STAT_INUSE_M GENMASK(23, 12) -#define QSYS_RES_STAT_INUSE_X(x) (((x) & GENMASK(23, 12)) >> 12) -#define QSYS_RES_STAT_MAXUSE(x) ((x) & GENMASK(11, 0)) -#define QSYS_RES_STAT_MAXUSE_M GENMASK(11, 0) - -#define QSYS_EVENTS_CORE_EV_FDC(x) (((x) << 2) & GENMASK(4, 2)) -#define QSYS_EVENTS_CORE_EV_FDC_M GENMASK(4, 2) -#define QSYS_EVENTS_CORE_EV_FDC_X(x) (((x) & GENMASK(4, 2)) >> 2) -#define QSYS_EVENTS_CORE_EV_FRD(x) ((x) & GENMASK(1, 0)) -#define QSYS_EVENTS_CORE_EV_FRD_M GENMASK(1, 0) - -#define QSYS_QMAXSDU_CFG_0_RSZ 0x4 - -#define QSYS_QMAXSDU_CFG_1_RSZ 0x4 - -#define QSYS_QMAXSDU_CFG_2_RSZ 0x4 - -#define QSYS_QMAXSDU_CFG_3_RSZ 0x4 - -#define QSYS_QMAXSDU_CFG_4_RSZ 0x4 - -#define QSYS_QMAXSDU_CFG_5_RSZ 0x4 - -#define QSYS_QMAXSDU_CFG_6_RSZ 0x4 - -#define 
QSYS_QMAXSDU_CFG_7_RSZ 0x4 - -#define QSYS_PREEMPTION_CFG_RSZ 0x4 - -#define QSYS_PREEMPTION_CFG_P_QUEUES(x) ((x) & GENMASK(7, 0)) -#define QSYS_PREEMPTION_CFG_P_QUEUES_M GENMASK(7, 0) -#define QSYS_PREEMPTION_CFG_MM_ADD_FRAG_SIZE(x) (((x) << 8) & GENMASK(9, 8)) -#define QSYS_PREEMPTION_CFG_MM_ADD_FRAG_SIZE_M GENMASK(9, 8) -#define QSYS_PREEMPTION_CFG_MM_ADD_FRAG_SIZE_X(x) (((x) & GENMASK(9, 8)) >> 8) -#define QSYS_PREEMPTION_CFG_STRICT_IPG(x) (((x) << 12) & GENMASK(13, 12)) -#define QSYS_PREEMPTION_CFG_STRICT_IPG_M GENMASK(13, 12) -#define QSYS_PREEMPTION_CFG_STRICT_IPG_X(x) (((x) & GENMASK(13, 12)) >> 12) -#define QSYS_PREEMPTION_CFG_HOLD_ADVANCE(x) (((x) << 16) & GENMASK(31, 16)) -#define QSYS_PREEMPTION_CFG_HOLD_ADVANCE_M GENMASK(31, 16) -#define QSYS_PREEMPTION_CFG_HOLD_ADVANCE_X(x) (((x) & GENMASK(31, 16)) >> 16) - -#define QSYS_CIR_CFG_GSZ 0x80 - -#define QSYS_CIR_CFG_CIR_RATE(x) (((x) << 6) & GENMASK(20, 6)) -#define QSYS_CIR_CFG_CIR_RATE_M GENMASK(20, 6) -#define QSYS_CIR_CFG_CIR_RATE_X(x) (((x) & GENMASK(20, 6)) >> 6) -#define QSYS_CIR_CFG_CIR_BURST(x) ((x) & GENMASK(5, 0)) -#define QSYS_CIR_CFG_CIR_BURST_M GENMASK(5, 0) - -#define QSYS_EIR_CFG_GSZ 0x80 - -#define QSYS_EIR_CFG_EIR_RATE(x) (((x) << 7) & GENMASK(21, 7)) -#define QSYS_EIR_CFG_EIR_RATE_M GENMASK(21, 7) -#define QSYS_EIR_CFG_EIR_RATE_X(x) (((x) & GENMASK(21, 7)) >> 7) -#define QSYS_EIR_CFG_EIR_BURST(x) (((x) << 1) & GENMASK(6, 1)) -#define QSYS_EIR_CFG_EIR_BURST_M GENMASK(6, 1) -#define QSYS_EIR_CFG_EIR_BURST_X(x) (((x) & GENMASK(6, 1)) >> 1) -#define QSYS_EIR_CFG_EIR_MARK_ENA BIT(0) - -#define QSYS_SE_CFG_GSZ 0x80 - -#define QSYS_SE_CFG_SE_DWRR_CNT(x) (((x) << 6) & GENMASK(9, 6)) -#define QSYS_SE_CFG_SE_DWRR_CNT_M GENMASK(9, 6) -#define QSYS_SE_CFG_SE_DWRR_CNT_X(x) (((x) & GENMASK(9, 6)) >> 6) -#define QSYS_SE_CFG_SE_RR_ENA BIT(5) -#define QSYS_SE_CFG_SE_AVB_ENA BIT(4) -#define QSYS_SE_CFG_SE_FRM_MODE(x) (((x) << 2) & GENMASK(3, 2)) -#define QSYS_SE_CFG_SE_FRM_MODE_M GENMASK(3, 2) -#define QSYS_SE_CFG_SE_FRM_MODE_X(x) (((x) & GENMASK(3, 2)) >> 2) -#define QSYS_SE_CFG_SE_EXC_ENA BIT(1) -#define QSYS_SE_CFG_SE_EXC_FWD BIT(0) - -#define QSYS_SE_DWRR_CFG_GSZ 0x80 -#define QSYS_SE_DWRR_CFG_RSZ 0x4 - -#define QSYS_SE_CONNECT_GSZ 0x80 - -#define QSYS_SE_CONNECT_SE_OUTP_IDX(x) (((x) << 17) & GENMASK(24, 17)) -#define QSYS_SE_CONNECT_SE_OUTP_IDX_M GENMASK(24, 17) -#define QSYS_SE_CONNECT_SE_OUTP_IDX_X(x) (((x) & GENMASK(24, 17)) >> 17) -#define QSYS_SE_CONNECT_SE_INP_IDX(x) (((x) << 9) & GENMASK(16, 9)) -#define QSYS_SE_CONNECT_SE_INP_IDX_M GENMASK(16, 9) -#define QSYS_SE_CONNECT_SE_INP_IDX_X(x) (((x) & GENMASK(16, 9)) >> 9) -#define QSYS_SE_CONNECT_SE_OUTP_CON(x) (((x) << 5) & GENMASK(8, 5)) -#define QSYS_SE_CONNECT_SE_OUTP_CON_M GENMASK(8, 5) -#define QSYS_SE_CONNECT_SE_OUTP_CON_X(x) (((x) & GENMASK(8, 5)) >> 5) -#define QSYS_SE_CONNECT_SE_INP_CNT(x) (((x) << 1) & GENMASK(4, 1)) -#define QSYS_SE_CONNECT_SE_INP_CNT_M GENMASK(4, 1) -#define QSYS_SE_CONNECT_SE_INP_CNT_X(x) (((x) & GENMASK(4, 1)) >> 1) -#define QSYS_SE_CONNECT_SE_TERMINAL BIT(0) - -#define QSYS_SE_DLB_SENSE_GSZ 0x80 - -#define QSYS_SE_DLB_SENSE_SE_DLB_PRIO(x) (((x) << 11) & GENMASK(13, 11)) -#define QSYS_SE_DLB_SENSE_SE_DLB_PRIO_M GENMASK(13, 11) -#define QSYS_SE_DLB_SENSE_SE_DLB_PRIO_X(x) (((x) & GENMASK(13, 11)) >> 11) -#define QSYS_SE_DLB_SENSE_SE_DLB_SPORT(x) (((x) << 7) & GENMASK(10, 7)) -#define QSYS_SE_DLB_SENSE_SE_DLB_SPORT_M GENMASK(10, 7) -#define QSYS_SE_DLB_SENSE_SE_DLB_SPORT_X(x) (((x) & GENMASK(10, 7)) >> 7) -#define 
QSYS_SE_DLB_SENSE_SE_DLB_DPORT(x) (((x) << 3) & GENMASK(6, 3)) -#define QSYS_SE_DLB_SENSE_SE_DLB_DPORT_M GENMASK(6, 3) -#define QSYS_SE_DLB_SENSE_SE_DLB_DPORT_X(x) (((x) & GENMASK(6, 3)) >> 3) -#define QSYS_SE_DLB_SENSE_SE_DLB_PRIO_ENA BIT(2) -#define QSYS_SE_DLB_SENSE_SE_DLB_SPORT_ENA BIT(1) -#define QSYS_SE_DLB_SENSE_SE_DLB_DPORT_ENA BIT(0) - -#define QSYS_CIR_STATE_GSZ 0x80 - -#define QSYS_CIR_STATE_CIR_LVL(x) (((x) << 4) & GENMASK(25, 4)) -#define QSYS_CIR_STATE_CIR_LVL_M GENMASK(25, 4) -#define QSYS_CIR_STATE_CIR_LVL_X(x) (((x) & GENMASK(25, 4)) >> 4) -#define QSYS_CIR_STATE_SHP_TIME(x) ((x) & GENMASK(3, 0)) -#define QSYS_CIR_STATE_SHP_TIME_M GENMASK(3, 0) - -#define QSYS_EIR_STATE_GSZ 0x80 - -#define QSYS_SE_STATE_GSZ 0x80 - -#define QSYS_SE_STATE_SE_OUTP_LVL(x) (((x) << 1) & GENMASK(2, 1)) -#define QSYS_SE_STATE_SE_OUTP_LVL_M GENMASK(2, 1) -#define QSYS_SE_STATE_SE_OUTP_LVL_X(x) (((x) & GENMASK(2, 1)) >> 1) -#define QSYS_SE_STATE_SE_WAS_YEL BIT(0) - -#define QSYS_HSCH_MISC_CFG_SE_CONNECT_VLD BIT(8) -#define QSYS_HSCH_MISC_CFG_FRM_ADJ(x) (((x) << 3) & GENMASK(7, 3)) -#define QSYS_HSCH_MISC_CFG_FRM_ADJ_M GENMASK(7, 3) -#define QSYS_HSCH_MISC_CFG_FRM_ADJ_X(x) (((x) & GENMASK(7, 3)) >> 3) -#define QSYS_HSCH_MISC_CFG_LEAK_DIS BIT(2) -#define QSYS_HSCH_MISC_CFG_QSHP_EXC_ENA BIT(1) -#define QSYS_HSCH_MISC_CFG_PFC_BYP_UPD BIT(0) - -#define QSYS_TAG_CONFIG_RSZ 0x4 - -#define QSYS_TAG_CONFIG_ENABLE BIT(0) -#define QSYS_TAG_CONFIG_LINK_SPEED(x) (((x) << 4) & GENMASK(5, 4)) -#define QSYS_TAG_CONFIG_LINK_SPEED_M GENMASK(5, 4) -#define QSYS_TAG_CONFIG_LINK_SPEED_X(x) (((x) & GENMASK(5, 4)) >> 4) -#define QSYS_TAG_CONFIG_INIT_GATE_STATE(x) (((x) << 8) & GENMASK(15, 8)) -#define QSYS_TAG_CONFIG_INIT_GATE_STATE_M GENMASK(15, 8) -#define QSYS_TAG_CONFIG_INIT_GATE_STATE_X(x) (((x) & GENMASK(15, 8)) >> 8) -#define QSYS_TAG_CONFIG_SCH_TRAFFIC_QUEUES(x) (((x) << 16) & GENMASK(23, 16)) -#define QSYS_TAG_CONFIG_SCH_TRAFFIC_QUEUES_M GENMASK(23, 16) -#define QSYS_TAG_CONFIG_SCH_TRAFFIC_QUEUES_X(x) (((x) & GENMASK(23, 16)) >> 16) - -#define QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(x) ((x) & GENMASK(7, 0)) -#define QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM_M GENMASK(7, 0) -#define QSYS_TAS_PARAM_CFG_CTRL_ALWAYS_GUARD_BAND_SCH_Q BIT(8) -#define QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE BIT(16) - -#define QSYS_PORT_MAX_SDU_RSZ 0x4 - -#define QSYS_PARAM_CFG_REG_3_BASE_TIME_SEC_MSB(x) ((x) & GENMASK(15, 0)) -#define QSYS_PARAM_CFG_REG_3_BASE_TIME_SEC_MSB_M GENMASK(15, 0) -#define QSYS_PARAM_CFG_REG_3_LIST_LENGTH(x) (((x) << 16) & GENMASK(31, 16)) -#define QSYS_PARAM_CFG_REG_3_LIST_LENGTH_M GENMASK(31, 16) -#define QSYS_PARAM_CFG_REG_3_LIST_LENGTH_X(x) (((x) & GENMASK(31, 16)) >> 16) - -#define QSYS_GCL_CFG_REG_1_GCL_ENTRY_NUM(x) ((x) & GENMASK(5, 0)) -#define QSYS_GCL_CFG_REG_1_GCL_ENTRY_NUM_M GENMASK(5, 0) -#define QSYS_GCL_CFG_REG_1_GATE_STATE(x) (((x) << 8) & GENMASK(15, 8)) -#define QSYS_GCL_CFG_REG_1_GATE_STATE_M GENMASK(15, 8) -#define QSYS_GCL_CFG_REG_1_GATE_STATE_X(x) (((x) & GENMASK(15, 8)) >> 8) - -#define QSYS_PARAM_STATUS_REG_3_BASE_TIME_SEC_MSB(x) ((x) & GENMASK(15, 0)) -#define QSYS_PARAM_STATUS_REG_3_BASE_TIME_SEC_MSB_M GENMASK(15, 0) -#define QSYS_PARAM_STATUS_REG_3_LIST_LENGTH(x) (((x) << 16) & GENMASK(31, 16)) -#define QSYS_PARAM_STATUS_REG_3_LIST_LENGTH_M GENMASK(31, 16) -#define QSYS_PARAM_STATUS_REG_3_LIST_LENGTH_X(x) (((x) & GENMASK(31, 16)) >> 16) - -#define QSYS_PARAM_STATUS_REG_8_CFG_CHG_TIME_SEC_MSB(x) ((x) & GENMASK(15, 0)) -#define QSYS_PARAM_STATUS_REG_8_CFG_CHG_TIME_SEC_MSB_M GENMASK(15, 0) -#define 
QSYS_PARAM_STATUS_REG_8_OPER_GATE_STATE(x) (((x) << 16) & GENMASK(23, 16)) -#define QSYS_PARAM_STATUS_REG_8_OPER_GATE_STATE_M GENMASK(23, 16) -#define QSYS_PARAM_STATUS_REG_8_OPER_GATE_STATE_X(x) (((x) & GENMASK(23, 16)) >> 16) -#define QSYS_PARAM_STATUS_REG_8_CONFIG_PENDING BIT(24) - -#define QSYS_GCL_STATUS_REG_1_GCL_ENTRY_NUM(x) ((x) & GENMASK(5, 0)) -#define QSYS_GCL_STATUS_REG_1_GCL_ENTRY_NUM_M GENMASK(5, 0) -#define QSYS_GCL_STATUS_REG_1_GATE_STATE(x) (((x) << 8) & GENMASK(15, 8)) -#define QSYS_GCL_STATUS_REG_1_GATE_STATE_M GENMASK(15, 8) -#define QSYS_GCL_STATUS_REG_1_GATE_STATE_X(x) (((x) & GENMASK(15, 8)) >> 8) - -#endif diff --git a/include/soc/mscc/ocelot_ana.h b/include/soc/mscc/ocelot_ana.h new file mode 100644 index 000000000000..841c6ec22b64 --- /dev/null +++ b/include/soc/mscc/ocelot_ana.h @@ -0,0 +1,625 @@ +/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */ +/* + * Microsemi Ocelot Switch driver + * + * Copyright (c) 2017 Microsemi Corporation + */ + +#ifndef _MSCC_OCELOT_ANA_H_ +#define _MSCC_OCELOT_ANA_H_ + +#define ANA_ANAGEFIL_B_DOM_EN BIT(22) +#define ANA_ANAGEFIL_B_DOM_VAL BIT(21) +#define ANA_ANAGEFIL_AGE_LOCKED BIT(20) +#define ANA_ANAGEFIL_PID_EN BIT(19) +#define ANA_ANAGEFIL_PID_VAL(x) (((x) << 14) & GENMASK(18, 14)) +#define ANA_ANAGEFIL_PID_VAL_M GENMASK(18, 14) +#define ANA_ANAGEFIL_PID_VAL_X(x) (((x) & GENMASK(18, 14)) >> 14) +#define ANA_ANAGEFIL_VID_EN BIT(13) +#define ANA_ANAGEFIL_VID_VAL(x) ((x) & GENMASK(12, 0)) +#define ANA_ANAGEFIL_VID_VAL_M GENMASK(12, 0) + +#define ANA_STORMLIMIT_CFG_RSZ 0x4 + +#define ANA_STORMLIMIT_CFG_STORM_RATE(x) (((x) << 3) & GENMASK(6, 3)) +#define ANA_STORMLIMIT_CFG_STORM_RATE_M GENMASK(6, 3) +#define ANA_STORMLIMIT_CFG_STORM_RATE_X(x) (((x) & GENMASK(6, 3)) >> 3) +#define ANA_STORMLIMIT_CFG_STORM_UNIT BIT(2) +#define ANA_STORMLIMIT_CFG_STORM_MODE(x) ((x) & GENMASK(1, 0)) +#define ANA_STORMLIMIT_CFG_STORM_MODE_M GENMASK(1, 0) + +#define ANA_AUTOAGE_AGE_FAST BIT(21) +#define ANA_AUTOAGE_AGE_PERIOD(x) (((x) << 1) & GENMASK(20, 1)) +#define ANA_AUTOAGE_AGE_PERIOD_M GENMASK(20, 1) +#define ANA_AUTOAGE_AGE_PERIOD_X(x) (((x) & GENMASK(20, 1)) >> 1) +#define ANA_AUTOAGE_AUTOAGE_LOCKED BIT(0) + +#define ANA_MACTOPTIONS_REDUCED_TABLE BIT(1) +#define ANA_MACTOPTIONS_SHADOW BIT(0) + +#define ANA_AGENCTRL_FID_MASK(x) (((x) << 12) & GENMASK(23, 12)) +#define ANA_AGENCTRL_FID_MASK_M GENMASK(23, 12) +#define ANA_AGENCTRL_FID_MASK_X(x) (((x) & GENMASK(23, 12)) >> 12) +#define ANA_AGENCTRL_IGNORE_DMAC_FLAGS BIT(11) +#define ANA_AGENCTRL_IGNORE_SMAC_FLAGS BIT(10) +#define ANA_AGENCTRL_FLOOD_SPECIAL BIT(9) +#define ANA_AGENCTRL_FLOOD_IGNORE_VLAN BIT(8) +#define ANA_AGENCTRL_MIRROR_CPU BIT(7) +#define ANA_AGENCTRL_LEARN_CPU_COPY BIT(6) +#define ANA_AGENCTRL_LEARN_FWD_KILL BIT(5) +#define ANA_AGENCTRL_LEARN_IGNORE_VLAN BIT(4) +#define ANA_AGENCTRL_CPU_CPU_KILL_ENA BIT(3) +#define ANA_AGENCTRL_GREEN_COUNT_MODE BIT(2) +#define ANA_AGENCTRL_YELLOW_COUNT_MODE BIT(1) +#define ANA_AGENCTRL_RED_COUNT_MODE BIT(0) + +#define ANA_FLOODING_RSZ 0x4 + +#define ANA_FLOODING_FLD_UNICAST(x) (((x) << 12) & GENMASK(17, 12)) +#define ANA_FLOODING_FLD_UNICAST_M GENMASK(17, 12) +#define ANA_FLOODING_FLD_UNICAST_X(x) (((x) & GENMASK(17, 12)) >> 12) +#define ANA_FLOODING_FLD_BROADCAST(x) (((x) << 6) & GENMASK(11, 6)) +#define ANA_FLOODING_FLD_BROADCAST_M GENMASK(11, 6) +#define ANA_FLOODING_FLD_BROADCAST_X(x) (((x) & GENMASK(11, 6)) >> 6) +#define ANA_FLOODING_FLD_MULTICAST(x) ((x) & GENMASK(5, 0)) +#define ANA_FLOODING_FLD_MULTICAST_M GENMASK(5, 0) + +#define 
ANA_FLOODING_IPMC_FLD_MC4_CTRL(x) (((x) << 18) & GENMASK(23, 18)) +#define ANA_FLOODING_IPMC_FLD_MC4_CTRL_M GENMASK(23, 18) +#define ANA_FLOODING_IPMC_FLD_MC4_CTRL_X(x) (((x) & GENMASK(23, 18)) >> 18) +#define ANA_FLOODING_IPMC_FLD_MC4_DATA(x) (((x) << 12) & GENMASK(17, 12)) +#define ANA_FLOODING_IPMC_FLD_MC4_DATA_M GENMASK(17, 12) +#define ANA_FLOODING_IPMC_FLD_MC4_DATA_X(x) (((x) & GENMASK(17, 12)) >> 12) +#define ANA_FLOODING_IPMC_FLD_MC6_CTRL(x) (((x) << 6) & GENMASK(11, 6)) +#define ANA_FLOODING_IPMC_FLD_MC6_CTRL_M GENMASK(11, 6) +#define ANA_FLOODING_IPMC_FLD_MC6_CTRL_X(x) (((x) & GENMASK(11, 6)) >> 6) +#define ANA_FLOODING_IPMC_FLD_MC6_DATA(x) ((x) & GENMASK(5, 0)) +#define ANA_FLOODING_IPMC_FLD_MC6_DATA_M GENMASK(5, 0) + +#define ANA_SFLOW_CFG_RSZ 0x4 + +#define ANA_SFLOW_CFG_SF_RATE(x) (((x) << 2) & GENMASK(13, 2)) +#define ANA_SFLOW_CFG_SF_RATE_M GENMASK(13, 2) +#define ANA_SFLOW_CFG_SF_RATE_X(x) (((x) & GENMASK(13, 2)) >> 2) +#define ANA_SFLOW_CFG_SF_SAMPLE_RX BIT(1) +#define ANA_SFLOW_CFG_SF_SAMPLE_TX BIT(0) + +#define ANA_PORT_MODE_RSZ 0x4 + +#define ANA_PORT_MODE_REDTAG_PARSE_CFG BIT(3) +#define ANA_PORT_MODE_VLAN_PARSE_CFG(x) (((x) << 1) & GENMASK(2, 1)) +#define ANA_PORT_MODE_VLAN_PARSE_CFG_M GENMASK(2, 1) +#define ANA_PORT_MODE_VLAN_PARSE_CFG_X(x) (((x) & GENMASK(2, 1)) >> 1) +#define ANA_PORT_MODE_L3_PARSE_CFG BIT(0) + +#define ANA_CUT_THRU_CFG_RSZ 0x4 + +#define ANA_PGID_PGID_RSZ 0x4 + +#define ANA_PGID_PGID_PGID(x) ((x) & GENMASK(11, 0)) +#define ANA_PGID_PGID_PGID_M GENMASK(11, 0) +#define ANA_PGID_PGID_CPUQ_DST_PGID(x) (((x) << 27) & GENMASK(29, 27)) +#define ANA_PGID_PGID_CPUQ_DST_PGID_M GENMASK(29, 27) +#define ANA_PGID_PGID_CPUQ_DST_PGID_X(x) (((x) & GENMASK(29, 27)) >> 27) + +#define ANA_TABLES_MACHDATA_VID(x) (((x) << 16) & GENMASK(28, 16)) +#define ANA_TABLES_MACHDATA_VID_M GENMASK(28, 16) +#define ANA_TABLES_MACHDATA_VID_X(x) (((x) & GENMASK(28, 16)) >> 16) +#define ANA_TABLES_MACHDATA_MACHDATA(x) ((x) & GENMASK(15, 0)) +#define ANA_TABLES_MACHDATA_MACHDATA_M GENMASK(15, 0) + +#define ANA_TABLES_STREAMDATA_SSID_VALID BIT(16) +#define ANA_TABLES_STREAMDATA_SSID(x) (((x) << 9) & GENMASK(15, 9)) +#define ANA_TABLES_STREAMDATA_SSID_M GENMASK(15, 9) +#define ANA_TABLES_STREAMDATA_SSID_X(x) (((x) & GENMASK(15, 9)) >> 9) +#define ANA_TABLES_STREAMDATA_SFID_VALID BIT(8) +#define ANA_TABLES_STREAMDATA_SFID(x) ((x) & GENMASK(7, 0)) +#define ANA_TABLES_STREAMDATA_SFID_M GENMASK(7, 0) + +#define ANA_TABLES_MACACCESS_MAC_CPU_COPY BIT(15) +#define ANA_TABLES_MACACCESS_SRC_KILL BIT(14) +#define ANA_TABLES_MACACCESS_IGNORE_VLAN BIT(13) +#define ANA_TABLES_MACACCESS_AGED_FLAG BIT(12) +#define ANA_TABLES_MACACCESS_VALID BIT(11) +#define ANA_TABLES_MACACCESS_ENTRYTYPE(x) (((x) << 9) & GENMASK(10, 9)) +#define ANA_TABLES_MACACCESS_ENTRYTYPE_M GENMASK(10, 9) +#define ANA_TABLES_MACACCESS_ENTRYTYPE_X(x) (((x) & GENMASK(10, 9)) >> 9) +#define ANA_TABLES_MACACCESS_DEST_IDX(x) (((x) << 3) & GENMASK(8, 3)) +#define ANA_TABLES_MACACCESS_DEST_IDX_M GENMASK(8, 3) +#define ANA_TABLES_MACACCESS_DEST_IDX_X(x) (((x) & GENMASK(8, 3)) >> 3) +#define ANA_TABLES_MACACCESS_MAC_TABLE_CMD(x) ((x) & GENMASK(2, 0)) +#define ANA_TABLES_MACACCESS_MAC_TABLE_CMD_M GENMASK(2, 0) +#define MACACCESS_CMD_IDLE 0 +#define MACACCESS_CMD_LEARN 1 +#define MACACCESS_CMD_FORGET 2 +#define MACACCESS_CMD_AGE 3 +#define MACACCESS_CMD_GET_NEXT 4 +#define MACACCESS_CMD_INIT 5 +#define MACACCESS_CMD_READ 6 +#define MACACCESS_CMD_WRITE 7 + +#define ANA_TABLES_VLANACCESS_VLAN_PORT_MASK(x) (((x) << 2) & GENMASK(13, 2)) 
+#define ANA_TABLES_VLANACCESS_VLAN_PORT_MASK_M GENMASK(13, 2) +#define ANA_TABLES_VLANACCESS_VLAN_PORT_MASK_X(x) (((x) & GENMASK(13, 2)) >> 2) +#define ANA_TABLES_VLANACCESS_VLAN_TBL_CMD(x) ((x) & GENMASK(1, 0)) +#define ANA_TABLES_VLANACCESS_VLAN_TBL_CMD_M GENMASK(1, 0) +#define ANA_TABLES_VLANACCESS_CMD_IDLE 0x0 +#define ANA_TABLES_VLANACCESS_CMD_WRITE 0x2 +#define ANA_TABLES_VLANACCESS_CMD_INIT 0x3 + +#define ANA_TABLES_VLANTIDX_VLAN_SEC_FWD_ENA BIT(17) +#define ANA_TABLES_VLANTIDX_VLAN_FLOOD_DIS BIT(16) +#define ANA_TABLES_VLANTIDX_VLAN_PRIV_VLAN BIT(15) +#define ANA_TABLES_VLANTIDX_VLAN_LEARN_DISABLED BIT(14) +#define ANA_TABLES_VLANTIDX_VLAN_MIRROR BIT(13) +#define ANA_TABLES_VLANTIDX_VLAN_SRC_CHK BIT(12) +#define ANA_TABLES_VLANTIDX_V_INDEX(x) ((x) & GENMASK(11, 0)) +#define ANA_TABLES_VLANTIDX_V_INDEX_M GENMASK(11, 0) + +#define ANA_TABLES_ISDXACCESS_ISDX_PORT_MASK(x) (((x) << 2) & GENMASK(8, 2)) +#define ANA_TABLES_ISDXACCESS_ISDX_PORT_MASK_M GENMASK(8, 2) +#define ANA_TABLES_ISDXACCESS_ISDX_PORT_MASK_X(x) (((x) & GENMASK(8, 2)) >> 2) +#define ANA_TABLES_ISDXACCESS_ISDX_TBL_CMD(x) ((x) & GENMASK(1, 0)) +#define ANA_TABLES_ISDXACCESS_ISDX_TBL_CMD_M GENMASK(1, 0) + +#define ANA_TABLES_ISDXTIDX_ISDX_SDLBI(x) (((x) << 21) & GENMASK(28, 21)) +#define ANA_TABLES_ISDXTIDX_ISDX_SDLBI_M GENMASK(28, 21) +#define ANA_TABLES_ISDXTIDX_ISDX_SDLBI_X(x) (((x) & GENMASK(28, 21)) >> 21) +#define ANA_TABLES_ISDXTIDX_ISDX_MSTI(x) (((x) << 15) & GENMASK(20, 15)) +#define ANA_TABLES_ISDXTIDX_ISDX_MSTI_M GENMASK(20, 15) +#define ANA_TABLES_ISDXTIDX_ISDX_MSTI_X(x) (((x) & GENMASK(20, 15)) >> 15) +#define ANA_TABLES_ISDXTIDX_ISDX_ES0_KEY_ENA BIT(14) +#define ANA_TABLES_ISDXTIDX_ISDX_FORCE_ENA BIT(10) +#define ANA_TABLES_ISDXTIDX_ISDX_INDEX(x) ((x) & GENMASK(7, 0)) +#define ANA_TABLES_ISDXTIDX_ISDX_INDEX_M GENMASK(7, 0) + +#define ANA_TABLES_ENTRYLIM_RSZ 0x4 + +#define ANA_TABLES_ENTRYLIM_ENTRYLIM(x) (((x) << 14) & GENMASK(17, 14)) +#define ANA_TABLES_ENTRYLIM_ENTRYLIM_M GENMASK(17, 14) +#define ANA_TABLES_ENTRYLIM_ENTRYLIM_X(x) (((x) & GENMASK(17, 14)) >> 14) +#define ANA_TABLES_ENTRYLIM_ENTRYSTAT(x) ((x) & GENMASK(13, 0)) +#define ANA_TABLES_ENTRYLIM_ENTRYSTAT_M GENMASK(13, 0) + +#define ANA_TABLES_STREAMACCESS_GEN_REC_SEQ_NUM(x) (((x) << 4) & GENMASK(31, 4)) +#define ANA_TABLES_STREAMACCESS_GEN_REC_SEQ_NUM_M GENMASK(31, 4) +#define ANA_TABLES_STREAMACCESS_GEN_REC_SEQ_NUM_X(x) (((x) & GENMASK(31, 4)) >> 4) +#define ANA_TABLES_STREAMACCESS_SEQ_GEN_REC_ENA BIT(3) +#define ANA_TABLES_STREAMACCESS_GEN_REC_TYPE BIT(2) +#define ANA_TABLES_STREAMACCESS_STREAM_TBL_CMD(x) ((x) & GENMASK(1, 0)) +#define ANA_TABLES_STREAMACCESS_STREAM_TBL_CMD_M GENMASK(1, 0) + +#define ANA_TABLES_STREAMTIDX_SEQ_GEN_ERR_STATUS(x) (((x) << 30) & GENMASK(31, 30)) +#define ANA_TABLES_STREAMTIDX_SEQ_GEN_ERR_STATUS_M GENMASK(31, 30) +#define ANA_TABLES_STREAMTIDX_SEQ_GEN_ERR_STATUS_X(x) (((x) & GENMASK(31, 30)) >> 30) +#define ANA_TABLES_STREAMTIDX_S_INDEX(x) (((x) << 16) & GENMASK(22, 16)) +#define ANA_TABLES_STREAMTIDX_S_INDEX_M GENMASK(22, 16) +#define ANA_TABLES_STREAMTIDX_S_INDEX_X(x) (((x) & GENMASK(22, 16)) >> 16) +#define ANA_TABLES_STREAMTIDX_FORCE_SF_BEHAVIOUR BIT(14) +#define ANA_TABLES_STREAMTIDX_SEQ_HISTORY_LEN(x) (((x) << 8) & GENMASK(13, 8)) +#define ANA_TABLES_STREAMTIDX_SEQ_HISTORY_LEN_M GENMASK(13, 8) +#define ANA_TABLES_STREAMTIDX_SEQ_HISTORY_LEN_X(x) (((x) & GENMASK(13, 8)) >> 8) +#define ANA_TABLES_STREAMTIDX_RESET_ON_ROGUE BIT(7) +#define ANA_TABLES_STREAMTIDX_REDTAG_POP BIT(6) +#define 
ANA_TABLES_STREAMTIDX_STREAM_SPLIT BIT(5) +#define ANA_TABLES_STREAMTIDX_SEQ_SPACE_LOG2(x) ((x) & GENMASK(4, 0)) +#define ANA_TABLES_STREAMTIDX_SEQ_SPACE_LOG2_M GENMASK(4, 0) + +#define ANA_TABLES_SEQ_MASK_SPLIT_MASK(x) (((x) << 16) & GENMASK(22, 16)) +#define ANA_TABLES_SEQ_MASK_SPLIT_MASK_M GENMASK(22, 16) +#define ANA_TABLES_SEQ_MASK_SPLIT_MASK_X(x) (((x) & GENMASK(22, 16)) >> 16) +#define ANA_TABLES_SEQ_MASK_INPUT_PORT_MASK(x) ((x) & GENMASK(6, 0)) +#define ANA_TABLES_SEQ_MASK_INPUT_PORT_MASK_M GENMASK(6, 0) + +#define ANA_TABLES_SFID_MASK_IGR_PORT_MASK(x) (((x) << 1) & GENMASK(7, 1)) +#define ANA_TABLES_SFID_MASK_IGR_PORT_MASK_M GENMASK(7, 1) +#define ANA_TABLES_SFID_MASK_IGR_PORT_MASK_X(x) (((x) & GENMASK(7, 1)) >> 1) +#define ANA_TABLES_SFID_MASK_IGR_SRCPORT_MATCH_ENA BIT(0) + +#define ANA_TABLES_SFIDACCESS_IGR_PRIO_MATCH_ENA BIT(22) +#define ANA_TABLES_SFIDACCESS_IGR_PRIO(x) (((x) << 19) & GENMASK(21, 19)) +#define ANA_TABLES_SFIDACCESS_IGR_PRIO_M GENMASK(21, 19) +#define ANA_TABLES_SFIDACCESS_IGR_PRIO_X(x) (((x) & GENMASK(21, 19)) >> 19) +#define ANA_TABLES_SFIDACCESS_FORCE_BLOCK BIT(18) +#define ANA_TABLES_SFIDACCESS_MAX_SDU_LEN(x) (((x) << 2) & GENMASK(17, 2)) +#define ANA_TABLES_SFIDACCESS_MAX_SDU_LEN_M GENMASK(17, 2) +#define ANA_TABLES_SFIDACCESS_MAX_SDU_LEN_X(x) (((x) & GENMASK(17, 2)) >> 2) +#define ANA_TABLES_SFIDACCESS_SFID_TBL_CMD(x) ((x) & GENMASK(1, 0)) +#define ANA_TABLES_SFIDACCESS_SFID_TBL_CMD_M GENMASK(1, 0) + +#define ANA_TABLES_SFIDTIDX_SGID_VALID BIT(26) +#define ANA_TABLES_SFIDTIDX_SGID(x) (((x) << 18) & GENMASK(25, 18)) +#define ANA_TABLES_SFIDTIDX_SGID_M GENMASK(25, 18) +#define ANA_TABLES_SFIDTIDX_SGID_X(x) (((x) & GENMASK(25, 18)) >> 18) +#define ANA_TABLES_SFIDTIDX_POL_ENA BIT(17) +#define ANA_TABLES_SFIDTIDX_POL_IDX(x) (((x) << 8) & GENMASK(16, 8)) +#define ANA_TABLES_SFIDTIDX_POL_IDX_M GENMASK(16, 8) +#define ANA_TABLES_SFIDTIDX_POL_IDX_X(x) (((x) & GENMASK(16, 8)) >> 8) +#define ANA_TABLES_SFIDTIDX_SFID_INDEX(x) ((x) & GENMASK(7, 0)) +#define ANA_TABLES_SFIDTIDX_SFID_INDEX_M GENMASK(7, 0) + +#define ANA_MSTI_STATE_RSZ 0x4 + +#define ANA_OAM_UPM_LM_CNT_RSZ 0x4 + +#define ANA_SG_ACCESS_CTRL_SGID(x) ((x) & GENMASK(7, 0)) +#define ANA_SG_ACCESS_CTRL_SGID_M GENMASK(7, 0) +#define ANA_SG_ACCESS_CTRL_CONFIG_CHANGE BIT(28) + +#define ANA_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB(x) ((x) & GENMASK(15, 0)) +#define ANA_SG_CONFIG_REG_3_BASE_TIME_SEC_MSB_M GENMASK(15, 0) +#define ANA_SG_CONFIG_REG_3_LIST_LENGTH(x) (((x) << 16) & GENMASK(18, 16)) +#define ANA_SG_CONFIG_REG_3_LIST_LENGTH_M GENMASK(18, 16) +#define ANA_SG_CONFIG_REG_3_LIST_LENGTH_X(x) (((x) & GENMASK(18, 16)) >> 16) +#define ANA_SG_CONFIG_REG_3_GATE_ENABLE BIT(20) +#define ANA_SG_CONFIG_REG_3_INIT_IPS(x) (((x) << 24) & GENMASK(27, 24)) +#define ANA_SG_CONFIG_REG_3_INIT_IPS_M GENMASK(27, 24) +#define ANA_SG_CONFIG_REG_3_INIT_IPS_X(x) (((x) & GENMASK(27, 24)) >> 24) +#define ANA_SG_CONFIG_REG_3_INIT_GATE_STATE BIT(28) + +#define ANA_SG_GCL_GS_CONFIG_RSZ 0x4 + +#define ANA_SG_GCL_GS_CONFIG_IPS(x) ((x) & GENMASK(3, 0)) +#define ANA_SG_GCL_GS_CONFIG_IPS_M GENMASK(3, 0) +#define ANA_SG_GCL_GS_CONFIG_GATE_STATE BIT(4) + +#define ANA_SG_GCL_TI_CONFIG_RSZ 0x4 + +#define ANA_SG_STATUS_REG_3_CFG_CHG_TIME_SEC_MSB(x) ((x) & GENMASK(15, 0)) +#define ANA_SG_STATUS_REG_3_CFG_CHG_TIME_SEC_MSB_M GENMASK(15, 0) +#define ANA_SG_STATUS_REG_3_GATE_STATE BIT(16) +#define ANA_SG_STATUS_REG_3_IPS(x) (((x) << 20) & GENMASK(23, 20)) +#define ANA_SG_STATUS_REG_3_IPS_M GENMASK(23, 20) +#define ANA_SG_STATUS_REG_3_IPS_X(x) (((x) & 
GENMASK(23, 20)) >> 20) +#define ANA_SG_STATUS_REG_3_CONFIG_PENDING BIT(24) + +#define ANA_PORT_VLAN_CFG_GSZ 0x100 + +#define ANA_PORT_VLAN_CFG_VLAN_VID_AS_ISDX BIT(21) +#define ANA_PORT_VLAN_CFG_VLAN_AWARE_ENA BIT(20) +#define ANA_PORT_VLAN_CFG_VLAN_POP_CNT(x) (((x) << 18) & GENMASK(19, 18)) +#define ANA_PORT_VLAN_CFG_VLAN_POP_CNT_M GENMASK(19, 18) +#define ANA_PORT_VLAN_CFG_VLAN_POP_CNT_X(x) (((x) & GENMASK(19, 18)) >> 18) +#define ANA_PORT_VLAN_CFG_VLAN_INNER_TAG_ENA BIT(17) +#define ANA_PORT_VLAN_CFG_VLAN_TAG_TYPE BIT(16) +#define ANA_PORT_VLAN_CFG_VLAN_DEI BIT(15) +#define ANA_PORT_VLAN_CFG_VLAN_PCP(x) (((x) << 12) & GENMASK(14, 12)) +#define ANA_PORT_VLAN_CFG_VLAN_PCP_M GENMASK(14, 12) +#define ANA_PORT_VLAN_CFG_VLAN_PCP_X(x) (((x) & GENMASK(14, 12)) >> 12) +#define ANA_PORT_VLAN_CFG_VLAN_VID(x) ((x) & GENMASK(11, 0)) +#define ANA_PORT_VLAN_CFG_VLAN_VID_M GENMASK(11, 0) + +#define ANA_PORT_DROP_CFG_GSZ 0x100 + +#define ANA_PORT_DROP_CFG_DROP_UNTAGGED_ENA BIT(6) +#define ANA_PORT_DROP_CFG_DROP_S_TAGGED_ENA BIT(5) +#define ANA_PORT_DROP_CFG_DROP_C_TAGGED_ENA BIT(4) +#define ANA_PORT_DROP_CFG_DROP_PRIO_S_TAGGED_ENA BIT(3) +#define ANA_PORT_DROP_CFG_DROP_PRIO_C_TAGGED_ENA BIT(2) +#define ANA_PORT_DROP_CFG_DROP_NULL_MAC_ENA BIT(1) +#define ANA_PORT_DROP_CFG_DROP_MC_SMAC_ENA BIT(0) + +#define ANA_PORT_QOS_CFG_GSZ 0x100 + +#define ANA_PORT_QOS_CFG_DP_DEFAULT_VAL BIT(8) +#define ANA_PORT_QOS_CFG_QOS_DEFAULT_VAL(x) (((x) << 5) & GENMASK(7, 5)) +#define ANA_PORT_QOS_CFG_QOS_DEFAULT_VAL_M GENMASK(7, 5) +#define ANA_PORT_QOS_CFG_QOS_DEFAULT_VAL_X(x) (((x) & GENMASK(7, 5)) >> 5) +#define ANA_PORT_QOS_CFG_QOS_DSCP_ENA BIT(4) +#define ANA_PORT_QOS_CFG_QOS_PCP_ENA BIT(3) +#define ANA_PORT_QOS_CFG_DSCP_TRANSLATE_ENA BIT(2) +#define ANA_PORT_QOS_CFG_DSCP_REWR_CFG(x) ((x) & GENMASK(1, 0)) +#define ANA_PORT_QOS_CFG_DSCP_REWR_CFG_M GENMASK(1, 0) + +#define ANA_PORT_VCAP_CFG_GSZ 0x100 + +#define ANA_PORT_VCAP_CFG_S1_ENA BIT(14) +#define ANA_PORT_VCAP_CFG_S1_DMAC_DIP_ENA(x) (((x) << 11) & GENMASK(13, 11)) +#define ANA_PORT_VCAP_CFG_S1_DMAC_DIP_ENA_M GENMASK(13, 11) +#define ANA_PORT_VCAP_CFG_S1_DMAC_DIP_ENA_X(x) (((x) & GENMASK(13, 11)) >> 11) +#define ANA_PORT_VCAP_CFG_S1_VLAN_INNER_TAG_ENA(x) (((x) << 8) & GENMASK(10, 8)) +#define ANA_PORT_VCAP_CFG_S1_VLAN_INNER_TAG_ENA_M GENMASK(10, 8) +#define ANA_PORT_VCAP_CFG_S1_VLAN_INNER_TAG_ENA_X(x) (((x) & GENMASK(10, 8)) >> 8) +#define ANA_PORT_VCAP_CFG_PAG_VAL(x) ((x) & GENMASK(7, 0)) +#define ANA_PORT_VCAP_CFG_PAG_VAL_M GENMASK(7, 0) + +#define ANA_PORT_VCAP_S1_KEY_CFG_GSZ 0x100 +#define ANA_PORT_VCAP_S1_KEY_CFG_RSZ 0x4 + +#define ANA_PORT_VCAP_S1_KEY_CFG_S1_KEY_IP6_CFG(x) (((x) << 4) & GENMASK(6, 4)) +#define ANA_PORT_VCAP_S1_KEY_CFG_S1_KEY_IP6_CFG_M GENMASK(6, 4) +#define ANA_PORT_VCAP_S1_KEY_CFG_S1_KEY_IP6_CFG_X(x) (((x) & GENMASK(6, 4)) >> 4) +#define ANA_PORT_VCAP_S1_KEY_CFG_S1_KEY_IP4_CFG(x) (((x) << 2) & GENMASK(3, 2)) +#define ANA_PORT_VCAP_S1_KEY_CFG_S1_KEY_IP4_CFG_M GENMASK(3, 2) +#define ANA_PORT_VCAP_S1_KEY_CFG_S1_KEY_IP4_CFG_X(x) (((x) & GENMASK(3, 2)) >> 2) +#define ANA_PORT_VCAP_S1_KEY_CFG_S1_KEY_OTHER_CFG(x) ((x) & GENMASK(1, 0)) +#define ANA_PORT_VCAP_S1_KEY_CFG_S1_KEY_OTHER_CFG_M GENMASK(1, 0) + +#define ANA_PORT_VCAP_S2_CFG_GSZ 0x100 + +#define ANA_PORT_VCAP_S2_CFG_S2_UDP_PAYLOAD_ENA(x) (((x) << 17) & GENMASK(18, 17)) +#define ANA_PORT_VCAP_S2_CFG_S2_UDP_PAYLOAD_ENA_M GENMASK(18, 17) +#define ANA_PORT_VCAP_S2_CFG_S2_UDP_PAYLOAD_ENA_X(x) (((x) & GENMASK(18, 17)) >> 17) +#define ANA_PORT_VCAP_S2_CFG_S2_ETYPE_PAYLOAD_ENA(x) (((x) << 15) & 
GENMASK(16, 15)) +#define ANA_PORT_VCAP_S2_CFG_S2_ETYPE_PAYLOAD_ENA_M GENMASK(16, 15) +#define ANA_PORT_VCAP_S2_CFG_S2_ETYPE_PAYLOAD_ENA_X(x) (((x) & GENMASK(16, 15)) >> 15) +#define ANA_PORT_VCAP_S2_CFG_S2_ENA BIT(14) +#define ANA_PORT_VCAP_S2_CFG_S2_SNAP_DIS(x) (((x) << 12) & GENMASK(13, 12)) +#define ANA_PORT_VCAP_S2_CFG_S2_SNAP_DIS_M GENMASK(13, 12) +#define ANA_PORT_VCAP_S2_CFG_S2_SNAP_DIS_X(x) (((x) & GENMASK(13, 12)) >> 12) +#define ANA_PORT_VCAP_S2_CFG_S2_ARP_DIS(x) (((x) << 10) & GENMASK(11, 10)) +#define ANA_PORT_VCAP_S2_CFG_S2_ARP_DIS_M GENMASK(11, 10) +#define ANA_PORT_VCAP_S2_CFG_S2_ARP_DIS_X(x) (((x) & GENMASK(11, 10)) >> 10) +#define ANA_PORT_VCAP_S2_CFG_S2_IP_TCPUDP_DIS(x) (((x) << 8) & GENMASK(9, 8)) +#define ANA_PORT_VCAP_S2_CFG_S2_IP_TCPUDP_DIS_M GENMASK(9, 8) +#define ANA_PORT_VCAP_S2_CFG_S2_IP_TCPUDP_DIS_X(x) (((x) & GENMASK(9, 8)) >> 8) +#define ANA_PORT_VCAP_S2_CFG_S2_IP_OTHER_DIS(x) (((x) << 6) & GENMASK(7, 6)) +#define ANA_PORT_VCAP_S2_CFG_S2_IP_OTHER_DIS_M GENMASK(7, 6) +#define ANA_PORT_VCAP_S2_CFG_S2_IP_OTHER_DIS_X(x) (((x) & GENMASK(7, 6)) >> 6) +#define ANA_PORT_VCAP_S2_CFG_S2_IP6_CFG(x) (((x) << 2) & GENMASK(5, 2)) +#define ANA_PORT_VCAP_S2_CFG_S2_IP6_CFG_M GENMASK(5, 2) +#define ANA_PORT_VCAP_S2_CFG_S2_IP6_CFG_X(x) (((x) & GENMASK(5, 2)) >> 2) +#define ANA_PORT_VCAP_S2_CFG_S2_OAM_DIS(x) ((x) & GENMASK(1, 0)) +#define ANA_PORT_VCAP_S2_CFG_S2_OAM_DIS_M GENMASK(1, 0) + +#define ANA_PORT_PCP_DEI_MAP_GSZ 0x100 +#define ANA_PORT_PCP_DEI_MAP_RSZ 0x4 + +#define ANA_PORT_PCP_DEI_MAP_DP_PCP_DEI_VAL BIT(3) +#define ANA_PORT_PCP_DEI_MAP_QOS_PCP_DEI_VAL(x) ((x) & GENMASK(2, 0)) +#define ANA_PORT_PCP_DEI_MAP_QOS_PCP_DEI_VAL_M GENMASK(2, 0) + +#define ANA_PORT_CPU_FWD_CFG_GSZ 0x100 + +#define ANA_PORT_CPU_FWD_CFG_CPU_VRAP_REDIR_ENA BIT(7) +#define ANA_PORT_CPU_FWD_CFG_CPU_MLD_REDIR_ENA BIT(6) +#define ANA_PORT_CPU_FWD_CFG_CPU_IGMP_REDIR_ENA BIT(5) +#define ANA_PORT_CPU_FWD_CFG_CPU_IPMC_CTRL_COPY_ENA BIT(4) +#define ANA_PORT_CPU_FWD_CFG_CPU_SRC_COPY_ENA BIT(3) +#define ANA_PORT_CPU_FWD_CFG_CPU_ALLBRIDGE_DROP_ENA BIT(2) +#define ANA_PORT_CPU_FWD_CFG_CPU_ALLBRIDGE_REDIR_ENA BIT(1) +#define ANA_PORT_CPU_FWD_CFG_CPU_OAM_ENA BIT(0) + +#define ANA_PORT_CPU_FWD_BPDU_CFG_GSZ 0x100 + +#define ANA_PORT_CPU_FWD_BPDU_CFG_BPDU_DROP_ENA(x) (((x) << 16) & GENMASK(31, 16)) +#define ANA_PORT_CPU_FWD_BPDU_CFG_BPDU_DROP_ENA_M GENMASK(31, 16) +#define ANA_PORT_CPU_FWD_BPDU_CFG_BPDU_DROP_ENA_X(x) (((x) & GENMASK(31, 16)) >> 16) +#define ANA_PORT_CPU_FWD_BPDU_CFG_BPDU_REDIR_ENA(x) ((x) & GENMASK(15, 0)) +#define ANA_PORT_CPU_FWD_BPDU_CFG_BPDU_REDIR_ENA_M GENMASK(15, 0) + +#define ANA_PORT_CPU_FWD_GARP_CFG_GSZ 0x100 + +#define ANA_PORT_CPU_FWD_GARP_CFG_GARP_DROP_ENA(x) (((x) << 16) & GENMASK(31, 16)) +#define ANA_PORT_CPU_FWD_GARP_CFG_GARP_DROP_ENA_M GENMASK(31, 16) +#define ANA_PORT_CPU_FWD_GARP_CFG_GARP_DROP_ENA_X(x) (((x) & GENMASK(31, 16)) >> 16) +#define ANA_PORT_CPU_FWD_GARP_CFG_GARP_REDIR_ENA(x) ((x) & GENMASK(15, 0)) +#define ANA_PORT_CPU_FWD_GARP_CFG_GARP_REDIR_ENA_M GENMASK(15, 0) + +#define ANA_PORT_CPU_FWD_CCM_CFG_GSZ 0x100 + +#define ANA_PORT_CPU_FWD_CCM_CFG_CCM_DROP_ENA(x) (((x) << 16) & GENMASK(31, 16)) +#define ANA_PORT_CPU_FWD_CCM_CFG_CCM_DROP_ENA_M GENMASK(31, 16) +#define ANA_PORT_CPU_FWD_CCM_CFG_CCM_DROP_ENA_X(x) (((x) & GENMASK(31, 16)) >> 16) +#define ANA_PORT_CPU_FWD_CCM_CFG_CCM_REDIR_ENA(x) ((x) & GENMASK(15, 0)) +#define ANA_PORT_CPU_FWD_CCM_CFG_CCM_REDIR_ENA_M GENMASK(15, 0) + +#define ANA_PORT_PORT_CFG_GSZ 0x100 + +#define ANA_PORT_PORT_CFG_SRC_MIRROR_ENA BIT(15) 
+#define ANA_PORT_PORT_CFG_LIMIT_DROP BIT(14) +#define ANA_PORT_PORT_CFG_LIMIT_CPU BIT(13) +#define ANA_PORT_PORT_CFG_LOCKED_PORTMOVE_DROP BIT(12) +#define ANA_PORT_PORT_CFG_LOCKED_PORTMOVE_CPU BIT(11) +#define ANA_PORT_PORT_CFG_LEARNDROP BIT(10) +#define ANA_PORT_PORT_CFG_LEARNCPU BIT(9) +#define ANA_PORT_PORT_CFG_LEARNAUTO BIT(8) +#define ANA_PORT_PORT_CFG_LEARN_ENA BIT(7) +#define ANA_PORT_PORT_CFG_RECV_ENA BIT(6) +#define ANA_PORT_PORT_CFG_PORTID_VAL(x) (((x) << 2) & GENMASK(5, 2)) +#define ANA_PORT_PORT_CFG_PORTID_VAL_M GENMASK(5, 2) +#define ANA_PORT_PORT_CFG_PORTID_VAL_X(x) (((x) & GENMASK(5, 2)) >> 2) +#define ANA_PORT_PORT_CFG_USE_B_DOM_TBL BIT(1) +#define ANA_PORT_PORT_CFG_LSR_MODE BIT(0) + +#define ANA_PORT_POL_CFG_GSZ 0x100 + +#define ANA_PORT_POL_CFG_POL_CPU_REDIR_8021 BIT(19) +#define ANA_PORT_POL_CFG_POL_CPU_REDIR_IP BIT(18) +#define ANA_PORT_POL_CFG_PORT_POL_ENA BIT(17) +#define ANA_PORT_POL_CFG_QUEUE_POL_ENA(x) (((x) << 9) & GENMASK(16, 9)) +#define ANA_PORT_POL_CFG_QUEUE_POL_ENA_M GENMASK(16, 9) +#define ANA_PORT_POL_CFG_QUEUE_POL_ENA_X(x) (((x) & GENMASK(16, 9)) >> 9) +#define ANA_PORT_POL_CFG_POL_ORDER(x) ((x) & GENMASK(8, 0)) +#define ANA_PORT_POL_CFG_POL_ORDER_M GENMASK(8, 0) + +#define ANA_PORT_PTP_CFG_GSZ 0x100 + +#define ANA_PORT_PTP_CFG_PTP_BACKPLANE_MODE BIT(0) + +#define ANA_PORT_PTP_DLY1_CFG_GSZ 0x100 + +#define ANA_PORT_PTP_DLY2_CFG_GSZ 0x100 + +#define ANA_PORT_SFID_CFG_GSZ 0x100 +#define ANA_PORT_SFID_CFG_RSZ 0x4 + +#define ANA_PORT_SFID_CFG_SFID_VALID BIT(8) +#define ANA_PORT_SFID_CFG_SFID(x) ((x) & GENMASK(7, 0)) +#define ANA_PORT_SFID_CFG_SFID_M GENMASK(7, 0) + +#define ANA_PFC_PFC_CFG_GSZ 0x40 + +#define ANA_PFC_PFC_CFG_RX_PFC_ENA(x) (((x) << 2) & GENMASK(9, 2)) +#define ANA_PFC_PFC_CFG_RX_PFC_ENA_M GENMASK(9, 2) +#define ANA_PFC_PFC_CFG_RX_PFC_ENA_X(x) (((x) & GENMASK(9, 2)) >> 2) +#define ANA_PFC_PFC_CFG_FC_LINK_SPEED(x) ((x) & GENMASK(1, 0)) +#define ANA_PFC_PFC_CFG_FC_LINK_SPEED_M GENMASK(1, 0) + +#define ANA_PFC_PFC_TIMER_GSZ 0x40 +#define ANA_PFC_PFC_TIMER_RSZ 0x4 + +#define ANA_IPT_OAM_MEP_CFG_GSZ 0x8 + +#define ANA_IPT_OAM_MEP_CFG_MEP_IDX_P(x) (((x) << 6) & GENMASK(10, 6)) +#define ANA_IPT_OAM_MEP_CFG_MEP_IDX_P_M GENMASK(10, 6) +#define ANA_IPT_OAM_MEP_CFG_MEP_IDX_P_X(x) (((x) & GENMASK(10, 6)) >> 6) +#define ANA_IPT_OAM_MEP_CFG_MEP_IDX(x) (((x) << 1) & GENMASK(5, 1)) +#define ANA_IPT_OAM_MEP_CFG_MEP_IDX_M GENMASK(5, 1) +#define ANA_IPT_OAM_MEP_CFG_MEP_IDX_X(x) (((x) & GENMASK(5, 1)) >> 1) +#define ANA_IPT_OAM_MEP_CFG_MEP_IDX_ENA BIT(0) + +#define ANA_IPT_IPT_GSZ 0x8 + +#define ANA_IPT_IPT_IPT_CFG(x) (((x) << 15) & GENMASK(16, 15)) +#define ANA_IPT_IPT_IPT_CFG_M GENMASK(16, 15) +#define ANA_IPT_IPT_IPT_CFG_X(x) (((x) & GENMASK(16, 15)) >> 15) +#define ANA_IPT_IPT_ISDX_P(x) (((x) << 7) & GENMASK(14, 7)) +#define ANA_IPT_IPT_ISDX_P_M GENMASK(14, 7) +#define ANA_IPT_IPT_ISDX_P_X(x) (((x) & GENMASK(14, 7)) >> 7) +#define ANA_IPT_IPT_PPT_IDX(x) ((x) & GENMASK(6, 0)) +#define ANA_IPT_IPT_PPT_IDX_M GENMASK(6, 0) + +#define ANA_PPT_PPT_RSZ 0x4 + +#define ANA_FID_MAP_FID_MAP_RSZ 0x4 + +#define ANA_FID_MAP_FID_MAP_FID_C_VAL(x) (((x) << 6) & GENMASK(11, 6)) +#define ANA_FID_MAP_FID_MAP_FID_C_VAL_M GENMASK(11, 6) +#define ANA_FID_MAP_FID_MAP_FID_C_VAL_X(x) (((x) & GENMASK(11, 6)) >> 6) +#define ANA_FID_MAP_FID_MAP_FID_B_VAL(x) ((x) & GENMASK(5, 0)) +#define ANA_FID_MAP_FID_MAP_FID_B_VAL_M GENMASK(5, 0) + +#define ANA_AGGR_CFG_AC_RND_ENA BIT(7) +#define ANA_AGGR_CFG_AC_DMAC_ENA BIT(6) +#define ANA_AGGR_CFG_AC_SMAC_ENA BIT(5) +#define 
ANA_AGGR_CFG_AC_IP6_FLOW_LBL_ENA BIT(4) +#define ANA_AGGR_CFG_AC_IP6_TCPUDP_ENA BIT(3) +#define ANA_AGGR_CFG_AC_IP4_SIPDIP_ENA BIT(2) +#define ANA_AGGR_CFG_AC_IP4_TCPUDP_ENA BIT(1) +#define ANA_AGGR_CFG_AC_ISDX_ENA BIT(0) + +#define ANA_CPUQ_CFG_CPUQ_MLD(x) (((x) << 27) & GENMASK(29, 27)) +#define ANA_CPUQ_CFG_CPUQ_MLD_M GENMASK(29, 27) +#define ANA_CPUQ_CFG_CPUQ_MLD_X(x) (((x) & GENMASK(29, 27)) >> 27) +#define ANA_CPUQ_CFG_CPUQ_IGMP(x) (((x) << 24) & GENMASK(26, 24)) +#define ANA_CPUQ_CFG_CPUQ_IGMP_M GENMASK(26, 24) +#define ANA_CPUQ_CFG_CPUQ_IGMP_X(x) (((x) & GENMASK(26, 24)) >> 24) +#define ANA_CPUQ_CFG_CPUQ_IPMC_CTRL(x) (((x) << 21) & GENMASK(23, 21)) +#define ANA_CPUQ_CFG_CPUQ_IPMC_CTRL_M GENMASK(23, 21) +#define ANA_CPUQ_CFG_CPUQ_IPMC_CTRL_X(x) (((x) & GENMASK(23, 21)) >> 21) +#define ANA_CPUQ_CFG_CPUQ_ALLBRIDGE(x) (((x) << 18) & GENMASK(20, 18)) +#define ANA_CPUQ_CFG_CPUQ_ALLBRIDGE_M GENMASK(20, 18) +#define ANA_CPUQ_CFG_CPUQ_ALLBRIDGE_X(x) (((x) & GENMASK(20, 18)) >> 18) +#define ANA_CPUQ_CFG_CPUQ_LOCKED_PORTMOVE(x) (((x) << 15) & GENMASK(17, 15)) +#define ANA_CPUQ_CFG_CPUQ_LOCKED_PORTMOVE_M GENMASK(17, 15) +#define ANA_CPUQ_CFG_CPUQ_LOCKED_PORTMOVE_X(x) (((x) & GENMASK(17, 15)) >> 15) +#define ANA_CPUQ_CFG_CPUQ_SRC_COPY(x) (((x) << 12) & GENMASK(14, 12)) +#define ANA_CPUQ_CFG_CPUQ_SRC_COPY_M GENMASK(14, 12) +#define ANA_CPUQ_CFG_CPUQ_SRC_COPY_X(x) (((x) & GENMASK(14, 12)) >> 12) +#define ANA_CPUQ_CFG_CPUQ_MAC_COPY(x) (((x) << 9) & GENMASK(11, 9)) +#define ANA_CPUQ_CFG_CPUQ_MAC_COPY_M GENMASK(11, 9) +#define ANA_CPUQ_CFG_CPUQ_MAC_COPY_X(x) (((x) & GENMASK(11, 9)) >> 9) +#define ANA_CPUQ_CFG_CPUQ_LRN(x) (((x) << 6) & GENMASK(8, 6)) +#define ANA_CPUQ_CFG_CPUQ_LRN_M GENMASK(8, 6) +#define ANA_CPUQ_CFG_CPUQ_LRN_X(x) (((x) & GENMASK(8, 6)) >> 6) +#define ANA_CPUQ_CFG_CPUQ_MIRROR(x) (((x) << 3) & GENMASK(5, 3)) +#define ANA_CPUQ_CFG_CPUQ_MIRROR_M GENMASK(5, 3) +#define ANA_CPUQ_CFG_CPUQ_MIRROR_X(x) (((x) & GENMASK(5, 3)) >> 3) +#define ANA_CPUQ_CFG_CPUQ_SFLOW(x) ((x) & GENMASK(2, 0)) +#define ANA_CPUQ_CFG_CPUQ_SFLOW_M GENMASK(2, 0) + +#define ANA_CPUQ_8021_CFG_RSZ 0x4 + +#define ANA_CPUQ_8021_CFG_CPUQ_BPDU_VAL(x) (((x) << 6) & GENMASK(8, 6)) +#define ANA_CPUQ_8021_CFG_CPUQ_BPDU_VAL_M GENMASK(8, 6) +#define ANA_CPUQ_8021_CFG_CPUQ_BPDU_VAL_X(x) (((x) & GENMASK(8, 6)) >> 6) +#define ANA_CPUQ_8021_CFG_CPUQ_GARP_VAL(x) (((x) << 3) & GENMASK(5, 3)) +#define ANA_CPUQ_8021_CFG_CPUQ_GARP_VAL_M GENMASK(5, 3) +#define ANA_CPUQ_8021_CFG_CPUQ_GARP_VAL_X(x) (((x) & GENMASK(5, 3)) >> 3) +#define ANA_CPUQ_8021_CFG_CPUQ_CCM_VAL(x) ((x) & GENMASK(2, 0)) +#define ANA_CPUQ_8021_CFG_CPUQ_CCM_VAL_M GENMASK(2, 0) + +#define ANA_DSCP_CFG_RSZ 0x4 + +#define ANA_DSCP_CFG_DP_DSCP_VAL BIT(11) +#define ANA_DSCP_CFG_QOS_DSCP_VAL(x) (((x) << 8) & GENMASK(10, 8)) +#define ANA_DSCP_CFG_QOS_DSCP_VAL_M GENMASK(10, 8) +#define ANA_DSCP_CFG_QOS_DSCP_VAL_X(x) (((x) & GENMASK(10, 8)) >> 8) +#define ANA_DSCP_CFG_DSCP_TRANSLATE_VAL(x) (((x) << 2) & GENMASK(7, 2)) +#define ANA_DSCP_CFG_DSCP_TRANSLATE_VAL_M GENMASK(7, 2) +#define ANA_DSCP_CFG_DSCP_TRANSLATE_VAL_X(x) (((x) & GENMASK(7, 2)) >> 2) +#define ANA_DSCP_CFG_DSCP_TRUST_ENA BIT(1) +#define ANA_DSCP_CFG_DSCP_REWR_ENA BIT(0) + +#define ANA_DSCP_REWR_CFG_RSZ 0x4 + +#define ANA_VCAP_RNG_TYPE_CFG_RSZ 0x4 + +#define ANA_VCAP_RNG_VAL_CFG_RSZ 0x4 + +#define ANA_VCAP_RNG_VAL_CFG_VCAP_RNG_MIN_VAL(x) (((x) << 16) & GENMASK(31, 16)) +#define ANA_VCAP_RNG_VAL_CFG_VCAP_RNG_MIN_VAL_M GENMASK(31, 16) +#define ANA_VCAP_RNG_VAL_CFG_VCAP_RNG_MIN_VAL_X(x) (((x) & GENMASK(31, 16)) >> 16) 
+#define ANA_VCAP_RNG_VAL_CFG_VCAP_RNG_MAX_VAL(x) ((x) & GENMASK(15, 0)) +#define ANA_VCAP_RNG_VAL_CFG_VCAP_RNG_MAX_VAL_M GENMASK(15, 0) + +#define ANA_VRAP_CFG_VRAP_VLAN_AWARE_ENA BIT(12) +#define ANA_VRAP_CFG_VRAP_VID(x) ((x) & GENMASK(11, 0)) +#define ANA_VRAP_CFG_VRAP_VID_M GENMASK(11, 0) + +#define ANA_DISCARD_CFG_DROP_TAGGING_ISDX0 BIT(3) +#define ANA_DISCARD_CFG_DROP_CTRLPROT_ISDX0 BIT(2) +#define ANA_DISCARD_CFG_DROP_TAGGING_S2_ENA BIT(1) +#define ANA_DISCARD_CFG_DROP_CTRLPROT_S2_ENA BIT(0) + +#define ANA_FID_CFG_VID_MC_ENA BIT(0) + +#define ANA_POL_PIR_CFG_GSZ 0x20 + +#define ANA_POL_PIR_CFG_PIR_RATE(x) (((x) << 6) & GENMASK(20, 6)) +#define ANA_POL_PIR_CFG_PIR_RATE_M GENMASK(20, 6) +#define ANA_POL_PIR_CFG_PIR_RATE_X(x) (((x) & GENMASK(20, 6)) >> 6) +#define ANA_POL_PIR_CFG_PIR_BURST(x) ((x) & GENMASK(5, 0)) +#define ANA_POL_PIR_CFG_PIR_BURST_M GENMASK(5, 0) + +#define ANA_POL_CIR_CFG_GSZ 0x20 + +#define ANA_POL_CIR_CFG_CIR_RATE(x) (((x) << 6) & GENMASK(20, 6)) +#define ANA_POL_CIR_CFG_CIR_RATE_M GENMASK(20, 6) +#define ANA_POL_CIR_CFG_CIR_RATE_X(x) (((x) & GENMASK(20, 6)) >> 6) +#define ANA_POL_CIR_CFG_CIR_BURST(x) ((x) & GENMASK(5, 0)) +#define ANA_POL_CIR_CFG_CIR_BURST_M GENMASK(5, 0) + +#define ANA_POL_MODE_CFG_GSZ 0x20 + +#define ANA_POL_MODE_CFG_IPG_SIZE(x) (((x) << 5) & GENMASK(9, 5)) +#define ANA_POL_MODE_CFG_IPG_SIZE_M GENMASK(9, 5) +#define ANA_POL_MODE_CFG_IPG_SIZE_X(x) (((x) & GENMASK(9, 5)) >> 5) +#define ANA_POL_MODE_CFG_FRM_MODE(x) (((x) << 3) & GENMASK(4, 3)) +#define ANA_POL_MODE_CFG_FRM_MODE_M GENMASK(4, 3) +#define ANA_POL_MODE_CFG_FRM_MODE_X(x) (((x) & GENMASK(4, 3)) >> 3) +#define ANA_POL_MODE_CFG_DLB_COUPLED BIT(2) +#define ANA_POL_MODE_CFG_CIR_ENA BIT(1) +#define ANA_POL_MODE_CFG_OVERSHOOT_ENA BIT(0) + +#define ANA_POL_PIR_STATE_GSZ 0x20 + +#define ANA_POL_CIR_STATE_GSZ 0x20 + +#define ANA_POL_STATE_GSZ 0x20 + +#define ANA_POL_FLOWC_RSZ 0x4 + +#define ANA_POL_FLOWC_POL_FLOWC BIT(0) + +#define ANA_POL_HYST_POL_FC_HYST(x) (((x) << 4) & GENMASK(9, 4)) +#define ANA_POL_HYST_POL_FC_HYST_M GENMASK(9, 4) +#define ANA_POL_HYST_POL_FC_HYST_X(x) (((x) & GENMASK(9, 4)) >> 4) +#define ANA_POL_HYST_POL_STOP_HYST(x) ((x) & GENMASK(3, 0)) +#define ANA_POL_HYST_POL_STOP_HYST_M GENMASK(3, 0) + +#define ANA_POL_MISC_CFG_POL_CLOSE_ALL BIT(1) +#define ANA_POL_MISC_CFG_POL_LEAK_DIS BIT(0) + +#endif diff --git a/include/soc/mscc/ocelot_dev.h b/include/soc/mscc/ocelot_dev.h new file mode 100644 index 000000000000..0a50d53bbd3f --- /dev/null +++ b/include/soc/mscc/ocelot_dev.h @@ -0,0 +1,275 @@ +/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */ +/* + * Microsemi Ocelot Switch driver + * + * Copyright (c) 2017 Microsemi Corporation + */ + +#ifndef _MSCC_OCELOT_DEV_H_ +#define _MSCC_OCELOT_DEV_H_ + +#define DEV_CLOCK_CFG 0x0 + +#define DEV_CLOCK_CFG_MAC_TX_RST BIT(7) +#define DEV_CLOCK_CFG_MAC_RX_RST BIT(6) +#define DEV_CLOCK_CFG_PCS_TX_RST BIT(5) +#define DEV_CLOCK_CFG_PCS_RX_RST BIT(4) +#define DEV_CLOCK_CFG_PORT_RST BIT(3) +#define DEV_CLOCK_CFG_PHY_RST BIT(2) +#define DEV_CLOCK_CFG_LINK_SPEED(x) ((x) & GENMASK(1, 0)) +#define DEV_CLOCK_CFG_LINK_SPEED_M GENMASK(1, 0) + +#define DEV_PORT_MISC 0x4 + +#define DEV_PORT_MISC_FWD_ERROR_ENA BIT(4) +#define DEV_PORT_MISC_FWD_PAUSE_ENA BIT(3) +#define DEV_PORT_MISC_FWD_CTRL_ENA BIT(2) +#define DEV_PORT_MISC_DEV_LOOP_ENA BIT(1) +#define DEV_PORT_MISC_HDX_FAST_DIS BIT(0) + +#define DEV_EVENTS 0x8 + +#define DEV_EEE_CFG 0xc + +#define DEV_EEE_CFG_EEE_ENA BIT(22) +#define DEV_EEE_CFG_EEE_TIMER_AGE(x) (((x) << 15) & GENMASK(21, 15)) +#define 
DEV_EEE_CFG_EEE_TIMER_AGE_M GENMASK(21, 15) +#define DEV_EEE_CFG_EEE_TIMER_AGE_X(x) (((x) & GENMASK(21, 15)) >> 15) +#define DEV_EEE_CFG_EEE_TIMER_WAKEUP(x) (((x) << 8) & GENMASK(14, 8)) +#define DEV_EEE_CFG_EEE_TIMER_WAKEUP_M GENMASK(14, 8) +#define DEV_EEE_CFG_EEE_TIMER_WAKEUP_X(x) (((x) & GENMASK(14, 8)) >> 8) +#define DEV_EEE_CFG_EEE_TIMER_HOLDOFF(x) (((x) << 1) & GENMASK(7, 1)) +#define DEV_EEE_CFG_EEE_TIMER_HOLDOFF_M GENMASK(7, 1) +#define DEV_EEE_CFG_EEE_TIMER_HOLDOFF_X(x) (((x) & GENMASK(7, 1)) >> 1) +#define DEV_EEE_CFG_PORT_LPI BIT(0) + +#define DEV_RX_PATH_DELAY 0x10 + +#define DEV_TX_PATH_DELAY 0x14 + +#define DEV_PTP_PREDICT_CFG 0x18 + +#define DEV_PTP_PREDICT_CFG_PTP_PHY_PREDICT_CFG(x) (((x) << 4) & GENMASK(11, 4)) +#define DEV_PTP_PREDICT_CFG_PTP_PHY_PREDICT_CFG_M GENMASK(11, 4) +#define DEV_PTP_PREDICT_CFG_PTP_PHY_PREDICT_CFG_X(x) (((x) & GENMASK(11, 4)) >> 4) +#define DEV_PTP_PREDICT_CFG_PTP_PHASE_PREDICT_CFG(x) ((x) & GENMASK(3, 0)) +#define DEV_PTP_PREDICT_CFG_PTP_PHASE_PREDICT_CFG_M GENMASK(3, 0) + +#define DEV_MAC_ENA_CFG 0x1c + +#define DEV_MAC_ENA_CFG_RX_ENA BIT(4) +#define DEV_MAC_ENA_CFG_TX_ENA BIT(0) + +#define DEV_MAC_MODE_CFG 0x20 + +#define DEV_MAC_MODE_CFG_FC_WORD_SYNC_ENA BIT(8) +#define DEV_MAC_MODE_CFG_GIGA_MODE_ENA BIT(4) +#define DEV_MAC_MODE_CFG_FDX_ENA BIT(0) + +#define DEV_MAC_MAXLEN_CFG 0x24 + +#define DEV_MAC_TAGS_CFG 0x28 + +#define DEV_MAC_TAGS_CFG_TAG_ID(x) (((x) << 16) & GENMASK(31, 16)) +#define DEV_MAC_TAGS_CFG_TAG_ID_M GENMASK(31, 16) +#define DEV_MAC_TAGS_CFG_TAG_ID_X(x) (((x) & GENMASK(31, 16)) >> 16) +#define DEV_MAC_TAGS_CFG_VLAN_LEN_AWR_ENA BIT(2) +#define DEV_MAC_TAGS_CFG_PB_ENA BIT(1) +#define DEV_MAC_TAGS_CFG_VLAN_AWR_ENA BIT(0) + +#define DEV_MAC_ADV_CHK_CFG 0x2c + +#define DEV_MAC_ADV_CHK_CFG_LEN_DROP_ENA BIT(0) + +#define DEV_MAC_IFG_CFG 0x30 + +#define DEV_MAC_IFG_CFG_RESTORE_OLD_IPG_CHECK BIT(17) +#define DEV_MAC_IFG_CFG_REDUCED_TX_IFG BIT(16) +#define DEV_MAC_IFG_CFG_TX_IFG(x) (((x) << 8) & GENMASK(12, 8)) +#define DEV_MAC_IFG_CFG_TX_IFG_M GENMASK(12, 8) +#define DEV_MAC_IFG_CFG_TX_IFG_X(x) (((x) & GENMASK(12, 8)) >> 8) +#define DEV_MAC_IFG_CFG_RX_IFG2(x) (((x) << 4) & GENMASK(7, 4)) +#define DEV_MAC_IFG_CFG_RX_IFG2_M GENMASK(7, 4) +#define DEV_MAC_IFG_CFG_RX_IFG2_X(x) (((x) & GENMASK(7, 4)) >> 4) +#define DEV_MAC_IFG_CFG_RX_IFG1(x) ((x) & GENMASK(3, 0)) +#define DEV_MAC_IFG_CFG_RX_IFG1_M GENMASK(3, 0) + +#define DEV_MAC_HDX_CFG 0x34 + +#define DEV_MAC_HDX_CFG_BYPASS_COL_SYNC BIT(26) +#define DEV_MAC_HDX_CFG_OB_ENA BIT(25) +#define DEV_MAC_HDX_CFG_WEXC_DIS BIT(24) +#define DEV_MAC_HDX_CFG_SEED(x) (((x) << 16) & GENMASK(23, 16)) +#define DEV_MAC_HDX_CFG_SEED_M GENMASK(23, 16) +#define DEV_MAC_HDX_CFG_SEED_X(x) (((x) & GENMASK(23, 16)) >> 16) +#define DEV_MAC_HDX_CFG_SEED_LOAD BIT(12) +#define DEV_MAC_HDX_CFG_RETRY_AFTER_EXC_COL_ENA BIT(8) +#define DEV_MAC_HDX_CFG_LATE_COL_POS(x) ((x) & GENMASK(6, 0)) +#define DEV_MAC_HDX_CFG_LATE_COL_POS_M GENMASK(6, 0) + +#define DEV_MAC_DBG_CFG 0x38 + +#define DEV_MAC_DBG_CFG_TBI_MODE BIT(4) +#define DEV_MAC_DBG_CFG_IFG_CRS_EXT_CHK_ENA BIT(0) + +#define DEV_MAC_FC_MAC_LOW_CFG 0x3c + +#define DEV_MAC_FC_MAC_HIGH_CFG 0x40 + +#define DEV_MAC_STICKY 0x44 + +#define DEV_MAC_STICKY_RX_IPG_SHRINK_STICKY BIT(9) +#define DEV_MAC_STICKY_RX_PREAM_SHRINK_STICKY BIT(8) +#define DEV_MAC_STICKY_RX_CARRIER_EXT_STICKY BIT(7) +#define DEV_MAC_STICKY_RX_CARRIER_EXT_ERR_STICKY BIT(6) +#define DEV_MAC_STICKY_RX_JUNK_STICKY BIT(5) +#define DEV_MAC_STICKY_TX_RETRANSMIT_STICKY BIT(4) +#define 
DEV_MAC_STICKY_TX_JAM_STICKY BIT(3) +#define DEV_MAC_STICKY_TX_FIFO_OFLW_STICKY BIT(2) +#define DEV_MAC_STICKY_TX_FRM_LEN_OVR_STICKY BIT(1) +#define DEV_MAC_STICKY_TX_ABORT_STICKY BIT(0) + +#define PCS1G_CFG 0x48 + +#define PCS1G_CFG_LINK_STATUS_TYPE BIT(4) +#define PCS1G_CFG_AN_LINK_CTRL_ENA BIT(1) +#define PCS1G_CFG_PCS_ENA BIT(0) + +#define PCS1G_MODE_CFG 0x4c + +#define PCS1G_MODE_CFG_UNIDIR_MODE_ENA BIT(4) +#define PCS1G_MODE_CFG_SGMII_MODE_ENA BIT(0) + +#define PCS1G_SD_CFG 0x50 + +#define PCS1G_SD_CFG_SD_SEL BIT(8) +#define PCS1G_SD_CFG_SD_POL BIT(4) +#define PCS1G_SD_CFG_SD_ENA BIT(0) + +#define PCS1G_ANEG_CFG 0x54 + +#define PCS1G_ANEG_CFG_ADV_ABILITY(x) (((x) << 16) & GENMASK(31, 16)) +#define PCS1G_ANEG_CFG_ADV_ABILITY_M GENMASK(31, 16) +#define PCS1G_ANEG_CFG_ADV_ABILITY_X(x) (((x) & GENMASK(31, 16)) >> 16) +#define PCS1G_ANEG_CFG_SW_RESOLVE_ENA BIT(8) +#define PCS1G_ANEG_CFG_ANEG_RESTART_ONE_SHOT BIT(1) +#define PCS1G_ANEG_CFG_ANEG_ENA BIT(0) + +#define PCS1G_ANEG_NP_CFG 0x58 + +#define PCS1G_ANEG_NP_CFG_NP_TX(x) (((x) << 16) & GENMASK(31, 16)) +#define PCS1G_ANEG_NP_CFG_NP_TX_M GENMASK(31, 16) +#define PCS1G_ANEG_NP_CFG_NP_TX_X(x) (((x) & GENMASK(31, 16)) >> 16) +#define PCS1G_ANEG_NP_CFG_NP_LOADED_ONE_SHOT BIT(0) + +#define PCS1G_LB_CFG 0x5c + +#define PCS1G_LB_CFG_RA_ENA BIT(4) +#define PCS1G_LB_CFG_GMII_PHY_LB_ENA BIT(1) +#define PCS1G_LB_CFG_TBI_HOST_LB_ENA BIT(0) + +#define PCS1G_DBG_CFG 0x60 + +#define PCS1G_DBG_CFG_UDLT BIT(0) + +#define PCS1G_CDET_CFG 0x64 + +#define PCS1G_CDET_CFG_CDET_ENA BIT(0) + +#define PCS1G_ANEG_STATUS 0x68 + +#define PCS1G_ANEG_STATUS_LP_ADV_ABILITY(x) (((x) << 16) & GENMASK(31, 16)) +#define PCS1G_ANEG_STATUS_LP_ADV_ABILITY_M GENMASK(31, 16) +#define PCS1G_ANEG_STATUS_LP_ADV_ABILITY_X(x) (((x) & GENMASK(31, 16)) >> 16) +#define PCS1G_ANEG_STATUS_PR BIT(4) +#define PCS1G_ANEG_STATUS_PAGE_RX_STICKY BIT(3) +#define PCS1G_ANEG_STATUS_ANEG_COMPLETE BIT(0) + +#define PCS1G_ANEG_NP_STATUS 0x6c + +#define PCS1G_LINK_STATUS 0x70 + +#define PCS1G_LINK_STATUS_DELAY_VAR(x) (((x) << 12) & GENMASK(15, 12)) +#define PCS1G_LINK_STATUS_DELAY_VAR_M GENMASK(15, 12) +#define PCS1G_LINK_STATUS_DELAY_VAR_X(x) (((x) & GENMASK(15, 12)) >> 12) +#define PCS1G_LINK_STATUS_SIGNAL_DETECT BIT(8) +#define PCS1G_LINK_STATUS_LINK_STATUS BIT(4) +#define PCS1G_LINK_STATUS_SYNC_STATUS BIT(0) + +#define PCS1G_LINK_DOWN_CNT 0x74 + +#define PCS1G_STICKY 0x78 + +#define PCS1G_STICKY_LINK_DOWN_STICKY BIT(4) +#define PCS1G_STICKY_OUT_OF_SYNC_STICKY BIT(0) + +#define PCS1G_DEBUG_STATUS 0x7c + +#define PCS1G_LPI_CFG 0x80 + +#define PCS1G_LPI_CFG_QSGMII_MS_SEL BIT(20) +#define PCS1G_LPI_CFG_RX_LPI_OUT_DIS BIT(17) +#define PCS1G_LPI_CFG_LPI_TESTMODE BIT(16) +#define PCS1G_LPI_CFG_LPI_RX_WTIM(x) (((x) << 4) & GENMASK(5, 4)) +#define PCS1G_LPI_CFG_LPI_RX_WTIM_M GENMASK(5, 4) +#define PCS1G_LPI_CFG_LPI_RX_WTIM_X(x) (((x) & GENMASK(5, 4)) >> 4) +#define PCS1G_LPI_CFG_TX_ASSERT_LPIDLE BIT(0) + +#define PCS1G_LPI_WAKE_ERROR_CNT 0x84 + +#define PCS1G_LPI_STATUS 0x88 + +#define PCS1G_LPI_STATUS_RX_LPI_FAIL BIT(16) +#define PCS1G_LPI_STATUS_RX_LPI_EVENT_STICKY BIT(12) +#define PCS1G_LPI_STATUS_RX_QUIET BIT(9) +#define PCS1G_LPI_STATUS_RX_LPI_MODE BIT(8) +#define PCS1G_LPI_STATUS_TX_LPI_EVENT_STICKY BIT(4) +#define PCS1G_LPI_STATUS_TX_QUIET BIT(1) +#define PCS1G_LPI_STATUS_TX_LPI_MODE BIT(0) + +#define PCS1G_TSTPAT_MODE_CFG 0x8c + +#define PCS1G_TSTPAT_STATUS 0x90 + +#define PCS1G_TSTPAT_STATUS_JTP_ERR_CNT(x) (((x) << 8) & GENMASK(15, 8)) +#define PCS1G_TSTPAT_STATUS_JTP_ERR_CNT_M GENMASK(15, 8) 
+#define PCS1G_TSTPAT_STATUS_JTP_ERR_CNT_X(x) (((x) & GENMASK(15, 8)) >> 8) +#define PCS1G_TSTPAT_STATUS_JTP_ERR BIT(4) +#define PCS1G_TSTPAT_STATUS_JTP_LOCK BIT(0) + +#define DEV_PCS_FX100_CFG 0x94 + +#define DEV_PCS_FX100_CFG_SD_SEL BIT(26) +#define DEV_PCS_FX100_CFG_SD_POL BIT(25) +#define DEV_PCS_FX100_CFG_SD_ENA BIT(24) +#define DEV_PCS_FX100_CFG_LOOPBACK_ENA BIT(20) +#define DEV_PCS_FX100_CFG_SWAP_MII_ENA BIT(16) +#define DEV_PCS_FX100_CFG_RXBITSEL(x) (((x) << 12) & GENMASK(15, 12)) +#define DEV_PCS_FX100_CFG_RXBITSEL_M GENMASK(15, 12) +#define DEV_PCS_FX100_CFG_RXBITSEL_X(x) (((x) & GENMASK(15, 12)) >> 12) +#define DEV_PCS_FX100_CFG_SIGDET_CFG(x) (((x) << 9) & GENMASK(10, 9)) +#define DEV_PCS_FX100_CFG_SIGDET_CFG_M GENMASK(10, 9) +#define DEV_PCS_FX100_CFG_SIGDET_CFG_X(x) (((x) & GENMASK(10, 9)) >> 9) +#define DEV_PCS_FX100_CFG_LINKHYST_TM_ENA BIT(8) +#define DEV_PCS_FX100_CFG_LINKHYSTTIMER(x) (((x) << 4) & GENMASK(7, 4)) +#define DEV_PCS_FX100_CFG_LINKHYSTTIMER_M GENMASK(7, 4) +#define DEV_PCS_FX100_CFG_LINKHYSTTIMER_X(x) (((x) & GENMASK(7, 4)) >> 4) +#define DEV_PCS_FX100_CFG_UNIDIR_MODE_ENA BIT(3) +#define DEV_PCS_FX100_CFG_FEFCHK_ENA BIT(2) +#define DEV_PCS_FX100_CFG_FEFGEN_ENA BIT(1) +#define DEV_PCS_FX100_CFG_PCS_ENA BIT(0) + +#define DEV_PCS_FX100_STATUS 0x98 + +#define DEV_PCS_FX100_STATUS_EDGE_POS_PTP(x) (((x) << 8) & GENMASK(11, 8)) +#define DEV_PCS_FX100_STATUS_EDGE_POS_PTP_M GENMASK(11, 8) +#define DEV_PCS_FX100_STATUS_EDGE_POS_PTP_X(x) (((x) & GENMASK(11, 8)) >> 8) +#define DEV_PCS_FX100_STATUS_PCS_ERROR_STICKY BIT(7) +#define DEV_PCS_FX100_STATUS_FEF_FOUND_STICKY BIT(6) +#define DEV_PCS_FX100_STATUS_SSD_ERROR_STICKY BIT(5) +#define DEV_PCS_FX100_STATUS_SYNC_LOST_STICKY BIT(4) +#define DEV_PCS_FX100_STATUS_FEF_STATUS BIT(2) +#define DEV_PCS_FX100_STATUS_SIGNAL_DETECT BIT(1) +#define DEV_PCS_FX100_STATUS_SYNC_STATUS BIT(0) + +#endif diff --git a/include/soc/mscc/ocelot_qsys.h b/include/soc/mscc/ocelot_qsys.h new file mode 100644 index 000000000000..d8c63aa761be --- /dev/null +++ b/include/soc/mscc/ocelot_qsys.h @@ -0,0 +1,270 @@ +/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */ +/* + * Microsemi Ocelot Switch driver + * + * Copyright (c) 2017 Microsemi Corporation + */ + +#ifndef _MSCC_OCELOT_QSYS_H_ +#define _MSCC_OCELOT_QSYS_H_ + +#define QSYS_PORT_MODE_RSZ 0x4 + +#define QSYS_PORT_MODE_DEQUEUE_DIS BIT(1) +#define QSYS_PORT_MODE_DEQUEUE_LATE BIT(0) + +#define QSYS_SWITCH_PORT_MODE_RSZ 0x4 + +#define QSYS_SWITCH_PORT_MODE_PORT_ENA BIT(14) +#define QSYS_SWITCH_PORT_MODE_SCH_NEXT_CFG(x) (((x) << 11) & GENMASK(13, 11)) +#define QSYS_SWITCH_PORT_MODE_SCH_NEXT_CFG_M GENMASK(13, 11) +#define QSYS_SWITCH_PORT_MODE_SCH_NEXT_CFG_X(x) (((x) & GENMASK(13, 11)) >> 11) +#define QSYS_SWITCH_PORT_MODE_YEL_RSRVD BIT(10) +#define QSYS_SWITCH_PORT_MODE_INGRESS_DROP_MODE BIT(9) +#define QSYS_SWITCH_PORT_MODE_TX_PFC_ENA(x) (((x) << 1) & GENMASK(8, 1)) +#define QSYS_SWITCH_PORT_MODE_TX_PFC_ENA_M GENMASK(8, 1) +#define QSYS_SWITCH_PORT_MODE_TX_PFC_ENA_X(x) (((x) & GENMASK(8, 1)) >> 1) +#define QSYS_SWITCH_PORT_MODE_TX_PFC_MODE BIT(0) + +#define QSYS_STAT_CNT_CFG_TX_GREEN_CNT_MODE BIT(5) +#define QSYS_STAT_CNT_CFG_TX_YELLOW_CNT_MODE BIT(4) +#define QSYS_STAT_CNT_CFG_DROP_GREEN_CNT_MODE BIT(3) +#define QSYS_STAT_CNT_CFG_DROP_YELLOW_CNT_MODE BIT(2) +#define QSYS_STAT_CNT_CFG_DROP_COUNT_ONCE BIT(1) +#define QSYS_STAT_CNT_CFG_DROP_COUNT_EGRESS BIT(0) + +#define QSYS_EEE_CFG_RSZ 0x4 + +#define QSYS_EEE_THRES_EEE_HIGH_BYTES(x) (((x) << 8) & GENMASK(15, 8)) +#define QSYS_EEE_THRES_EEE_HIGH_BYTES_M 
GENMASK(15, 8) +#define QSYS_EEE_THRES_EEE_HIGH_BYTES_X(x) (((x) & GENMASK(15, 8)) >> 8) +#define QSYS_EEE_THRES_EEE_HIGH_FRAMES(x) ((x) & GENMASK(7, 0)) +#define QSYS_EEE_THRES_EEE_HIGH_FRAMES_M GENMASK(7, 0) + +#define QSYS_SW_STATUS_RSZ 0x4 + +#define QSYS_EXT_CPU_CFG_EXT_CPU_PORT(x) (((x) << 8) & GENMASK(12, 8)) +#define QSYS_EXT_CPU_CFG_EXT_CPU_PORT_M GENMASK(12, 8) +#define QSYS_EXT_CPU_CFG_EXT_CPU_PORT_X(x) (((x) & GENMASK(12, 8)) >> 8) +#define QSYS_EXT_CPU_CFG_EXT_CPUQ_MSK(x) ((x) & GENMASK(7, 0)) +#define QSYS_EXT_CPU_CFG_EXT_CPUQ_MSK_M GENMASK(7, 0) + +#define QSYS_QMAP_GSZ 0x4 + +#define QSYS_QMAP_SE_BASE(x) (((x) << 5) & GENMASK(12, 5)) +#define QSYS_QMAP_SE_BASE_M GENMASK(12, 5) +#define QSYS_QMAP_SE_BASE_X(x) (((x) & GENMASK(12, 5)) >> 5) +#define QSYS_QMAP_SE_IDX_SEL(x) (((x) << 2) & GENMASK(4, 2)) +#define QSYS_QMAP_SE_IDX_SEL_M GENMASK(4, 2) +#define QSYS_QMAP_SE_IDX_SEL_X(x) (((x) & GENMASK(4, 2)) >> 2) +#define QSYS_QMAP_SE_INP_SEL(x) ((x) & GENMASK(1, 0)) +#define QSYS_QMAP_SE_INP_SEL_M GENMASK(1, 0) + +#define QSYS_ISDX_SGRP_GSZ 0x4 + +#define QSYS_TIMED_FRAME_ENTRY_GSZ 0x4 + +#define QSYS_TFRM_MISC_TIMED_CANCEL_SLOT(x) (((x) << 9) & GENMASK(18, 9)) +#define QSYS_TFRM_MISC_TIMED_CANCEL_SLOT_M GENMASK(18, 9) +#define QSYS_TFRM_MISC_TIMED_CANCEL_SLOT_X(x) (((x) & GENMASK(18, 9)) >> 9) +#define QSYS_TFRM_MISC_TIMED_CANCEL_1SHOT BIT(8) +#define QSYS_TFRM_MISC_TIMED_SLOT_MODE_MC BIT(7) +#define QSYS_TFRM_MISC_TIMED_ENTRY_FAST_CNT(x) ((x) & GENMASK(6, 0)) +#define QSYS_TFRM_MISC_TIMED_ENTRY_FAST_CNT_M GENMASK(6, 0) + +#define QSYS_RED_PROFILE_RSZ 0x4 + +#define QSYS_RED_PROFILE_WM_RED_LOW(x) (((x) << 8) & GENMASK(15, 8)) +#define QSYS_RED_PROFILE_WM_RED_LOW_M GENMASK(15, 8) +#define QSYS_RED_PROFILE_WM_RED_LOW_X(x) (((x) & GENMASK(15, 8)) >> 8) +#define QSYS_RED_PROFILE_WM_RED_HIGH(x) ((x) & GENMASK(7, 0)) +#define QSYS_RED_PROFILE_WM_RED_HIGH_M GENMASK(7, 0) + +#define QSYS_RES_CFG_GSZ 0x8 + +#define QSYS_RES_STAT_GSZ 0x8 + +#define QSYS_RES_STAT_INUSE(x) (((x) << 12) & GENMASK(23, 12)) +#define QSYS_RES_STAT_INUSE_M GENMASK(23, 12) +#define QSYS_RES_STAT_INUSE_X(x) (((x) & GENMASK(23, 12)) >> 12) +#define QSYS_RES_STAT_MAXUSE(x) ((x) & GENMASK(11, 0)) +#define QSYS_RES_STAT_MAXUSE_M GENMASK(11, 0) + +#define QSYS_EVENTS_CORE_EV_FDC(x) (((x) << 2) & GENMASK(4, 2)) +#define QSYS_EVENTS_CORE_EV_FDC_M GENMASK(4, 2) +#define QSYS_EVENTS_CORE_EV_FDC_X(x) (((x) & GENMASK(4, 2)) >> 2) +#define QSYS_EVENTS_CORE_EV_FRD(x) ((x) & GENMASK(1, 0)) +#define QSYS_EVENTS_CORE_EV_FRD_M GENMASK(1, 0) + +#define QSYS_QMAXSDU_CFG_0_RSZ 0x4 + +#define QSYS_QMAXSDU_CFG_1_RSZ 0x4 + +#define QSYS_QMAXSDU_CFG_2_RSZ 0x4 + +#define QSYS_QMAXSDU_CFG_3_RSZ 0x4 + +#define QSYS_QMAXSDU_CFG_4_RSZ 0x4 + +#define QSYS_QMAXSDU_CFG_5_RSZ 0x4 + +#define QSYS_QMAXSDU_CFG_6_RSZ 0x4 + +#define QSYS_QMAXSDU_CFG_7_RSZ 0x4 + +#define QSYS_PREEMPTION_CFG_RSZ 0x4 + +#define QSYS_PREEMPTION_CFG_P_QUEUES(x) ((x) & GENMASK(7, 0)) +#define QSYS_PREEMPTION_CFG_P_QUEUES_M GENMASK(7, 0) +#define QSYS_PREEMPTION_CFG_MM_ADD_FRAG_SIZE(x) (((x) << 8) & GENMASK(9, 8)) +#define QSYS_PREEMPTION_CFG_MM_ADD_FRAG_SIZE_M GENMASK(9, 8) +#define QSYS_PREEMPTION_CFG_MM_ADD_FRAG_SIZE_X(x) (((x) & GENMASK(9, 8)) >> 8) +#define QSYS_PREEMPTION_CFG_STRICT_IPG(x) (((x) << 12) & GENMASK(13, 12)) +#define QSYS_PREEMPTION_CFG_STRICT_IPG_M GENMASK(13, 12) +#define QSYS_PREEMPTION_CFG_STRICT_IPG_X(x) (((x) & GENMASK(13, 12)) >> 12) +#define QSYS_PREEMPTION_CFG_HOLD_ADVANCE(x) (((x) << 16) & GENMASK(31, 16)) +#define 
QSYS_PREEMPTION_CFG_HOLD_ADVANCE_M GENMASK(31, 16) +#define QSYS_PREEMPTION_CFG_HOLD_ADVANCE_X(x) (((x) & GENMASK(31, 16)) >> 16) + +#define QSYS_CIR_CFG_GSZ 0x80 + +#define QSYS_CIR_CFG_CIR_RATE(x) (((x) << 6) & GENMASK(20, 6)) +#define QSYS_CIR_CFG_CIR_RATE_M GENMASK(20, 6) +#define QSYS_CIR_CFG_CIR_RATE_X(x) (((x) & GENMASK(20, 6)) >> 6) +#define QSYS_CIR_CFG_CIR_BURST(x) ((x) & GENMASK(5, 0)) +#define QSYS_CIR_CFG_CIR_BURST_M GENMASK(5, 0) + +#define QSYS_EIR_CFG_GSZ 0x80 + +#define QSYS_EIR_CFG_EIR_RATE(x) (((x) << 7) & GENMASK(21, 7)) +#define QSYS_EIR_CFG_EIR_RATE_M GENMASK(21, 7) +#define QSYS_EIR_CFG_EIR_RATE_X(x) (((x) & GENMASK(21, 7)) >> 7) +#define QSYS_EIR_CFG_EIR_BURST(x) (((x) << 1) & GENMASK(6, 1)) +#define QSYS_EIR_CFG_EIR_BURST_M GENMASK(6, 1) +#define QSYS_EIR_CFG_EIR_BURST_X(x) (((x) & GENMASK(6, 1)) >> 1) +#define QSYS_EIR_CFG_EIR_MARK_ENA BIT(0) + +#define QSYS_SE_CFG_GSZ 0x80 + +#define QSYS_SE_CFG_SE_DWRR_CNT(x) (((x) << 6) & GENMASK(9, 6)) +#define QSYS_SE_CFG_SE_DWRR_CNT_M GENMASK(9, 6) +#define QSYS_SE_CFG_SE_DWRR_CNT_X(x) (((x) & GENMASK(9, 6)) >> 6) +#define QSYS_SE_CFG_SE_RR_ENA BIT(5) +#define QSYS_SE_CFG_SE_AVB_ENA BIT(4) +#define QSYS_SE_CFG_SE_FRM_MODE(x) (((x) << 2) & GENMASK(3, 2)) +#define QSYS_SE_CFG_SE_FRM_MODE_M GENMASK(3, 2) +#define QSYS_SE_CFG_SE_FRM_MODE_X(x) (((x) & GENMASK(3, 2)) >> 2) +#define QSYS_SE_CFG_SE_EXC_ENA BIT(1) +#define QSYS_SE_CFG_SE_EXC_FWD BIT(0) + +#define QSYS_SE_DWRR_CFG_GSZ 0x80 +#define QSYS_SE_DWRR_CFG_RSZ 0x4 + +#define QSYS_SE_CONNECT_GSZ 0x80 + +#define QSYS_SE_CONNECT_SE_OUTP_IDX(x) (((x) << 17) & GENMASK(24, 17)) +#define QSYS_SE_CONNECT_SE_OUTP_IDX_M GENMASK(24, 17) +#define QSYS_SE_CONNECT_SE_OUTP_IDX_X(x) (((x) & GENMASK(24, 17)) >> 17) +#define QSYS_SE_CONNECT_SE_INP_IDX(x) (((x) << 9) & GENMASK(16, 9)) +#define QSYS_SE_CONNECT_SE_INP_IDX_M GENMASK(16, 9) +#define QSYS_SE_CONNECT_SE_INP_IDX_X(x) (((x) & GENMASK(16, 9)) >> 9) +#define QSYS_SE_CONNECT_SE_OUTP_CON(x) (((x) << 5) & GENMASK(8, 5)) +#define QSYS_SE_CONNECT_SE_OUTP_CON_M GENMASK(8, 5) +#define QSYS_SE_CONNECT_SE_OUTP_CON_X(x) (((x) & GENMASK(8, 5)) >> 5) +#define QSYS_SE_CONNECT_SE_INP_CNT(x) (((x) << 1) & GENMASK(4, 1)) +#define QSYS_SE_CONNECT_SE_INP_CNT_M GENMASK(4, 1) +#define QSYS_SE_CONNECT_SE_INP_CNT_X(x) (((x) & GENMASK(4, 1)) >> 1) +#define QSYS_SE_CONNECT_SE_TERMINAL BIT(0) + +#define QSYS_SE_DLB_SENSE_GSZ 0x80 + +#define QSYS_SE_DLB_SENSE_SE_DLB_PRIO(x) (((x) << 11) & GENMASK(13, 11)) +#define QSYS_SE_DLB_SENSE_SE_DLB_PRIO_M GENMASK(13, 11) +#define QSYS_SE_DLB_SENSE_SE_DLB_PRIO_X(x) (((x) & GENMASK(13, 11)) >> 11) +#define QSYS_SE_DLB_SENSE_SE_DLB_SPORT(x) (((x) << 7) & GENMASK(10, 7)) +#define QSYS_SE_DLB_SENSE_SE_DLB_SPORT_M GENMASK(10, 7) +#define QSYS_SE_DLB_SENSE_SE_DLB_SPORT_X(x) (((x) & GENMASK(10, 7)) >> 7) +#define QSYS_SE_DLB_SENSE_SE_DLB_DPORT(x) (((x) << 3) & GENMASK(6, 3)) +#define QSYS_SE_DLB_SENSE_SE_DLB_DPORT_M GENMASK(6, 3) +#define QSYS_SE_DLB_SENSE_SE_DLB_DPORT_X(x) (((x) & GENMASK(6, 3)) >> 3) +#define QSYS_SE_DLB_SENSE_SE_DLB_PRIO_ENA BIT(2) +#define QSYS_SE_DLB_SENSE_SE_DLB_SPORT_ENA BIT(1) +#define QSYS_SE_DLB_SENSE_SE_DLB_DPORT_ENA BIT(0) + +#define QSYS_CIR_STATE_GSZ 0x80 + +#define QSYS_CIR_STATE_CIR_LVL(x) (((x) << 4) & GENMASK(25, 4)) +#define QSYS_CIR_STATE_CIR_LVL_M GENMASK(25, 4) +#define QSYS_CIR_STATE_CIR_LVL_X(x) (((x) & GENMASK(25, 4)) >> 4) +#define QSYS_CIR_STATE_SHP_TIME(x) ((x) & GENMASK(3, 0)) +#define QSYS_CIR_STATE_SHP_TIME_M GENMASK(3, 0) + +#define QSYS_EIR_STATE_GSZ 0x80 + +#define 
QSYS_SE_STATE_GSZ 0x80 + +#define QSYS_SE_STATE_SE_OUTP_LVL(x) (((x) << 1) & GENMASK(2, 1)) +#define QSYS_SE_STATE_SE_OUTP_LVL_M GENMASK(2, 1) +#define QSYS_SE_STATE_SE_OUTP_LVL_X(x) (((x) & GENMASK(2, 1)) >> 1) +#define QSYS_SE_STATE_SE_WAS_YEL BIT(0) + +#define QSYS_HSCH_MISC_CFG_SE_CONNECT_VLD BIT(8) +#define QSYS_HSCH_MISC_CFG_FRM_ADJ(x) (((x) << 3) & GENMASK(7, 3)) +#define QSYS_HSCH_MISC_CFG_FRM_ADJ_M GENMASK(7, 3) +#define QSYS_HSCH_MISC_CFG_FRM_ADJ_X(x) (((x) & GENMASK(7, 3)) >> 3) +#define QSYS_HSCH_MISC_CFG_LEAK_DIS BIT(2) +#define QSYS_HSCH_MISC_CFG_QSHP_EXC_ENA BIT(1) +#define QSYS_HSCH_MISC_CFG_PFC_BYP_UPD BIT(0) + +#define QSYS_TAG_CONFIG_RSZ 0x4 + +#define QSYS_TAG_CONFIG_ENABLE BIT(0) +#define QSYS_TAG_CONFIG_LINK_SPEED(x) (((x) << 4) & GENMASK(5, 4)) +#define QSYS_TAG_CONFIG_LINK_SPEED_M GENMASK(5, 4) +#define QSYS_TAG_CONFIG_LINK_SPEED_X(x) (((x) & GENMASK(5, 4)) >> 4) +#define QSYS_TAG_CONFIG_INIT_GATE_STATE(x) (((x) << 8) & GENMASK(15, 8)) +#define QSYS_TAG_CONFIG_INIT_GATE_STATE_M GENMASK(15, 8) +#define QSYS_TAG_CONFIG_INIT_GATE_STATE_X(x) (((x) & GENMASK(15, 8)) >> 8) +#define QSYS_TAG_CONFIG_SCH_TRAFFIC_QUEUES(x) (((x) << 16) & GENMASK(23, 16)) +#define QSYS_TAG_CONFIG_SCH_TRAFFIC_QUEUES_M GENMASK(23, 16) +#define QSYS_TAG_CONFIG_SCH_TRAFFIC_QUEUES_X(x) (((x) & GENMASK(23, 16)) >> 16) + +#define QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM(x) ((x) & GENMASK(7, 0)) +#define QSYS_TAS_PARAM_CFG_CTRL_PORT_NUM_M GENMASK(7, 0) +#define QSYS_TAS_PARAM_CFG_CTRL_ALWAYS_GUARD_BAND_SCH_Q BIT(8) +#define QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE BIT(16) + +#define QSYS_PORT_MAX_SDU_RSZ 0x4 + +#define QSYS_PARAM_CFG_REG_3_BASE_TIME_SEC_MSB(x) ((x) & GENMASK(15, 0)) +#define QSYS_PARAM_CFG_REG_3_BASE_TIME_SEC_MSB_M GENMASK(15, 0) +#define QSYS_PARAM_CFG_REG_3_LIST_LENGTH(x) (((x) << 16) & GENMASK(31, 16)) +#define QSYS_PARAM_CFG_REG_3_LIST_LENGTH_M GENMASK(31, 16) +#define QSYS_PARAM_CFG_REG_3_LIST_LENGTH_X(x) (((x) & GENMASK(31, 16)) >> 16) + +#define QSYS_GCL_CFG_REG_1_GCL_ENTRY_NUM(x) ((x) & GENMASK(5, 0)) +#define QSYS_GCL_CFG_REG_1_GCL_ENTRY_NUM_M GENMASK(5, 0) +#define QSYS_GCL_CFG_REG_1_GATE_STATE(x) (((x) << 8) & GENMASK(15, 8)) +#define QSYS_GCL_CFG_REG_1_GATE_STATE_M GENMASK(15, 8) +#define QSYS_GCL_CFG_REG_1_GATE_STATE_X(x) (((x) & GENMASK(15, 8)) >> 8) + +#define QSYS_PARAM_STATUS_REG_3_BASE_TIME_SEC_MSB(x) ((x) & GENMASK(15, 0)) +#define QSYS_PARAM_STATUS_REG_3_BASE_TIME_SEC_MSB_M GENMASK(15, 0) +#define QSYS_PARAM_STATUS_REG_3_LIST_LENGTH(x) (((x) << 16) & GENMASK(31, 16)) +#define QSYS_PARAM_STATUS_REG_3_LIST_LENGTH_M GENMASK(31, 16) +#define QSYS_PARAM_STATUS_REG_3_LIST_LENGTH_X(x) (((x) & GENMASK(31, 16)) >> 16) + +#define QSYS_PARAM_STATUS_REG_8_CFG_CHG_TIME_SEC_MSB(x) ((x) & GENMASK(15, 0)) +#define QSYS_PARAM_STATUS_REG_8_CFG_CHG_TIME_SEC_MSB_M GENMASK(15, 0) +#define QSYS_PARAM_STATUS_REG_8_OPER_GATE_STATE(x) (((x) << 16) & GENMASK(23, 16)) +#define QSYS_PARAM_STATUS_REG_8_OPER_GATE_STATE_M GENMASK(23, 16) +#define QSYS_PARAM_STATUS_REG_8_OPER_GATE_STATE_X(x) (((x) & GENMASK(23, 16)) >> 16) +#define QSYS_PARAM_STATUS_REG_8_CONFIG_PENDING BIT(24) + +#define QSYS_GCL_STATUS_REG_1_GCL_ENTRY_NUM(x) ((x) & GENMASK(5, 0)) +#define QSYS_GCL_STATUS_REG_1_GCL_ENTRY_NUM_M GENMASK(5, 0) +#define QSYS_GCL_STATUS_REG_1_GATE_STATE(x) (((x) << 8) & GENMASK(15, 8)) +#define QSYS_GCL_STATUS_REG_1_GATE_STATE_M GENMASK(15, 8) +#define QSYS_GCL_STATUS_REG_1_GATE_STATE_X(x) (((x) & GENMASK(15, 8)) >> 8) + +#endif -- cgit v1.2.3-71-gd317
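Editor's note: every multi-bit field in the ocelot ANA, DEV and QSYS headers above follows one convention -- FIELD(x) shifts and masks a value into position, FIELD_M is the raw field mask, and FIELD_X(x) extracts the field from a register value. Below is a minimal sketch of the read-modify-write pattern these triples support. The QSYS_PARAM_CFG_REG_3_LIST_LENGTH* macros are copied verbatim from the header above; BIT() and GENMASK() are simplified 32-bit userspace stand-ins for the kernel helpers so the sketch compiles on its own, and the MMIO access itself is omitted.

#include <stdio.h>
#include <stdint.h>

/* Simplified 32-bit stand-ins for the kernel's bit helpers; single-bit
 * flags in the headers above use BIT(n) directly.
 */
#define BIT(n)          (1u << (n))
#define GENMASK(h, l)   (((~0u) >> (31 - (h))) & ((~0u) << (l)))

/* Copied from include/soc/mscc/ocelot_qsys.h above: bits 31:16. */
#define QSYS_PARAM_CFG_REG_3_LIST_LENGTH(x)     (((x) << 16) & GENMASK(31, 16))
#define QSYS_PARAM_CFG_REG_3_LIST_LENGTH_M      GENMASK(31, 16)
#define QSYS_PARAM_CFG_REG_3_LIST_LENGTH_X(x)   (((x) & GENMASK(31, 16)) >> 16)

int main(void)
{
        uint32_t reg = 0xdeadbeef;      /* stands in for a register read */

        /* Read-modify-write of one field: clear it with the _M mask,
         * then OR in the new value packed by FIELD(x).
         */
        reg &= ~QSYS_PARAM_CFG_REG_3_LIST_LENGTH_M;
        reg |= QSYS_PARAM_CFG_REG_3_LIST_LENGTH(42);

        /* FIELD_X(x) undoes FIELD(x): mask the field, shift it back down. */
        printf("reg=0x%08x list_length=%u\n", reg,
               QSYS_PARAM_CFG_REG_3_LIST_LENGTH_X(reg));
        return 0;
}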
From 8007880a2ca97c34e7ccd1fcf12daf854b792544 Mon Sep 17 00:00:00 2001 From: Zhu Yanjun Date: Sat, 14 Dec 2019 10:51:17 +0200 Subject: net/mlx5: limit the function in local scope The function mlx5_buf_alloc_node is only used by functions within this file, so it is appropriate to make it static and limit it to the local scope. Signed-off-by: Zhu Yanjun Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/alloc.c | 4 ++-- include/linux/mlx5/driver.h | 2 -- 2 files changed, 2 insertions(+), 4 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/mellanox/mlx5/core/alloc.c b/drivers/net/ethernet/mellanox/mlx5/core/alloc.c index 549f962cd86e..42198e64a7f4 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/alloc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/alloc.c @@ -71,8 +71,8 @@ static void *mlx5_dma_zalloc_coherent_node(struct mlx5_core_dev *dev, return cpu_handle; } -int mlx5_buf_alloc_node(struct mlx5_core_dev *dev, int size, - struct mlx5_frag_buf *buf, int node) +static int mlx5_buf_alloc_node(struct mlx5_core_dev *dev, int size, + struct mlx5_frag_buf *buf, int node) { dma_addr_t t; diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h index 27200dea0297..59cff380f41a 100644 --- a/include/linux/mlx5/driver.h +++ b/include/linux/mlx5/driver.h @@ -928,8 +928,6 @@ void mlx5_start_health_poll(struct mlx5_core_dev *dev); void mlx5_stop_health_poll(struct mlx5_core_dev *dev, bool disable_health); void mlx5_drain_health_wq(struct mlx5_core_dev *dev); void mlx5_trigger_health_work(struct mlx5_core_dev *dev); -int mlx5_buf_alloc_node(struct mlx5_core_dev *dev, int size, - struct mlx5_frag_buf *buf, int node); int mlx5_buf_alloc(struct mlx5_core_dev *dev, int size, struct mlx5_frag_buf *buf); void mlx5_buf_free(struct mlx5_core_dev *dev, struct mlx5_frag_buf *buf); -- cgit v1.2.3-71-gd317 From dcfea72e79b0aa7a057c8f6024169d86a1bbc84b Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Wed, 8 Jan 2020 16:59:02 -0500 Subject: net: introduce skb_list_walk_safe for skb segment walking As part of the continual effort to remove direct usage of skb->next and skb->prev, this patch adds a helper for iterating through the singly-linked variant of skb lists, which are used for lists of GSO packets. The name "skb_list_..." has been chosen to match the existing function, "kfree_skb_list", which also operates on these singly-linked lists, and the "..._walk_safe" part is the same idiom as elsewhere in the kernel. This patch removes the helper from wireguard and puts it into linux/skbuff.h, while making it a bit more robust for general usage. In particular, parentheses are added around the macro argument usage, and it now accounts for trying to iterate through an already-null skb pointer, which will simply run the iteration zero times. This latter enhancement means it can be used to replace both do { ... } while and while (...) open-coded idioms. This should take care of these three possible usages, which match all current methods of iteration. skb_list_walk_safe(segs, skb, next) { ... } skb_list_walk_safe(skb, skb, next) { ... } skb_list_walk_safe(segs, skb, segs) { ... } GCC appears to generate efficient code for each of these. Signed-off-by: Jason A. Donenfeld Signed-off-by: David S. Miller
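Editor's note: a minimal sketch of the first usage pattern listed above, not taken from the patch that follows -- foo_xmit_segs() and foo_xmit_one() are hypothetical names, while skb_gso_segment(), IS_ERR_OR_NULL() and skb_mark_not_on_list() are existing kernel APIs.

#include <linux/err.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical per-segment transmit routine. */
static void foo_xmit_one(struct net_device *dev, struct sk_buff *seg);

static void foo_xmit_segs(struct net_device *dev, struct sk_buff *skb,
                          netdev_features_t features)
{
        struct sk_buff *segs, *seg, *next;

        segs = skb_gso_segment(skb, features);
        if (IS_ERR_OR_NULL(segs))
                return;

        /* "next" is saved before each loop body runs, so the body may
         * consume or re-link "seg" without breaking the walk.
         */
        skb_list_walk_safe(segs, seg, next) {
                skb_mark_not_on_list(seg);
                foo_xmit_one(dev, seg);
        }
}

Because the macro also tolerates a NULL starting point, the IS_ERR_OR_NULL() check above could be narrowed to a plain IS_ERR() check without changing behaviour.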
--- drivers/net/wireguard/device.h | 8 -------- include/linux/skbuff.h | 5 +++++ 2 files changed, 5 insertions(+), 8 deletions(-) (limited to 'include') diff --git a/drivers/net/wireguard/device.h b/drivers/net/wireguard/device.h index c91f3051c5c7..b15a8be9d816 100644 --- a/drivers/net/wireguard/device.h +++ b/drivers/net/wireguard/device.h @@ -62,12 +62,4 @@ struct wg_device { int wg_device_init(void); void wg_device_uninit(void); -/* Later after the dust settles, this can be moved into include/linux/skbuff.h, - * where virtually all code that deals with GSO segs can benefit, around ~30 - * drivers as of writing. - */ -#define skb_list_walk_safe(first, skb, next) \ - for (skb = first, next = skb->next; skb; \ - skb = next, next = skb ? skb->next : NULL) - #endif /* _WG_DEVICE_H */ diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index e9133bcf0544..64e5b1be9ff5 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -1478,6 +1478,11 @@ static inline void skb_mark_not_on_list(struct sk_buff *skb) skb->next = NULL; } +/* Iterate through singly-linked GSO fragments of an skb. */ +#define skb_list_walk_safe(first, skb, next) \ + for ((skb) = (first), (next) = (skb) ? (skb)->next : NULL; (skb); \ + (skb) = (next), (next) = (skb) ? (skb)->next : NULL) + static inline void skb_list_del_init(struct sk_buff *skb) { __list_del_entry(&skb->list); -- cgit v1.2.3-71-gd317 From 6181e5cb752e5de9f56fbcee3f0206a2c51f1478 Mon Sep 17 00:00:00 2001 From: Vikas Gupta Date: Thu, 2 Jan 2020 21:18:09 +0530 Subject: devlink: add support for reporter recovery completion It is possible that a reporter recovery completion does not finish successfully when recovery is triggered via devlink_health_reporter_recover, since recovery may be processed in a different context. In such a scenario, the driver returns an error when the recover hook is invoked, and successful recovery completion is reported later. Expose a devlink recovery-done API so drivers can update the recovery stats. Signed-off-by: Vikas Gupta Signed-off-by: David S. Miller
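Editor's note: a minimal sketch of the intended call flow, not taken from the patch that follows -- the foo_* names are hypothetical, and the choice of -EINPROGRESS as the interim return value is an assumption (any error keeps the reporter in the error state until the driver reports completion); the devlink_health_reporter_* calls are the real API, with recovery_done being the one added here.

#include <linux/errno.h>
#include <net/devlink.h>

struct foo_dev {
        struct devlink_health_reporter *fw_reporter;
};

/* Hypothetical: kicks off an asynchronous firmware reset. */
static void foo_schedule_fw_reset(struct foo_dev *fdev);

static int foo_fw_recover(struct devlink_health_reporter *reporter,
                          void *priv_ctx, struct netlink_ext_ack *extack)
{
        struct foo_dev *fdev = devlink_health_reporter_priv(reporter);

        foo_schedule_fw_reset(fdev);

        /* Recovery completes in another context, so report an error now;
         * stats and state are updated from foo_fw_reset_done() instead.
         */
        return -EINPROGRESS;
}

/* Called by the driver once the asynchronous reset has finished. */
static void foo_fw_reset_done(struct foo_dev *fdev)
{
        devlink_health_reporter_recovery_done(fdev->fw_reporter);
        devlink_health_reporter_state_update(fdev->fw_reporter,
                        DEVLINK_HEALTH_REPORTER_STATE_HEALTHY);
}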
--- include/net/devlink.h | 2 ++ net/core/devlink.c | 11 +++++++++-- 2 files changed, 11 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/include/net/devlink.h b/include/net/devlink.h index 47f87b2fcf63..453f45cc1519 100644 --- a/include/net/devlink.h +++ b/include/net/devlink.h @@ -1000,6 +1000,8 @@ int devlink_health_report(struct devlink_health_reporter *reporter, void devlink_health_reporter_state_update(struct devlink_health_reporter *reporter, enum devlink_health_reporter_state state); +void +devlink_health_reporter_recovery_done(struct devlink_health_reporter *reporter); bool devlink_is_reload_failed(const struct devlink *devlink); diff --git a/net/core/devlink.c b/net/core/devlink.c index 4c63c9a4c09e..e686ae67cd96 100644 --- a/net/core/devlink.c +++ b/net/core/devlink.c @@ -4860,6 +4860,14 @@ devlink_health_reporter_state_update(struct devlink_health_reporter *reporter, } EXPORT_SYMBOL_GPL(devlink_health_reporter_state_update); +void +devlink_health_reporter_recovery_done(struct devlink_health_reporter *reporter) +{ + reporter->recovery_count++; + reporter->last_recovery_ts = jiffies; +} +EXPORT_SYMBOL_GPL(devlink_health_reporter_recovery_done); + static int devlink_health_reporter_recover(struct devlink_health_reporter *reporter, void *priv_ctx, struct netlink_ext_ack *extack) @@ -4876,9 +4884,8 @@ devlink_health_reporter_recover(struct devlink_health_reporter *reporter, if (err) return err; - reporter->recovery_count++; + devlink_health_reporter_recovery_done(reporter); reporter->health_state = DEVLINK_HEALTH_REPORTER_STATE_HEALTHY; - reporter->last_recovery_ts = jiffies; return 0; } -- cgit v1.2.3-71-gd317 From 4d776482ecc689bdd68627985ac4cb5a6f325953 Mon Sep 17 00:00:00 2001 From: Florian Fainelli Date: Tue, 7 Jan 2020 21:06:05 -0800 Subject: net: dsa: Get information about stacked DSA protocol It is possible to stack multiple DSA switches in a way that they are not part of the tree (disjoint) but the DSA master of a switch is a DSA slave of another. When that happens, switch drivers may have to know this is the case so as to determine whether their tagging protocol has a remote chance of working. This is useful for specific switch drivers such as b53, where devices have been known to be stacked in the wild without the Broadcom tag protocol supporting that feature. This allows b53 to continue supporting those devices by forcing the disabling of Broadcom tags on the outermost switches if necessary. The get_tag_protocol() function is therefore updated to gain an additional enum dsa_tag_protocol argument which denotes the current tagging protocol used by the DSA master we are attached to, else DSA_TAG_PROTO_NONE for the top of the dsa_switch_tree. Signed-off-by: Florian Fainelli Signed-off-by: David S.
Miller --- drivers/net/dsa/b53/b53_common.c | 22 ++++++++++++++-------- drivers/net/dsa/b53/b53_priv.h | 4 +++- drivers/net/dsa/dsa_loop.c | 3 ++- drivers/net/dsa/lan9303-core.c | 3 ++- drivers/net/dsa/lantiq_gswip.c | 3 ++- drivers/net/dsa/microchip/ksz8795.c | 3 ++- drivers/net/dsa/microchip/ksz9477.c | 3 ++- drivers/net/dsa/mt7530.c | 3 ++- drivers/net/dsa/mv88e6060.c | 3 ++- drivers/net/dsa/mv88e6xxx/chip.c | 3 ++- drivers/net/dsa/ocelot/felix.c | 3 ++- drivers/net/dsa/qca/ar9331.c | 3 ++- drivers/net/dsa/qca8k.c | 3 ++- drivers/net/dsa/rtl8366rb.c | 3 ++- drivers/net/dsa/sja1105/sja1105_main.c | 3 ++- drivers/net/dsa/vitesse-vsc73xx-core.c | 3 ++- include/net/dsa.h | 3 ++- net/dsa/dsa2.c | 31 +++++++++++++++++++++++++++++-- net/dsa/dsa_priv.h | 1 + net/dsa/slave.c | 4 +--- 20 files changed, 78 insertions(+), 29 deletions(-) (limited to 'include') diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c index edacacfc9365..2b530a31ef0f 100644 --- a/drivers/net/dsa/b53/b53_common.c +++ b/drivers/net/dsa/b53/b53_common.c @@ -573,9 +573,8 @@ EXPORT_SYMBOL(b53_disable_port); void b53_brcm_hdr_setup(struct dsa_switch *ds, int port) { - bool tag_en = !(ds->ops->get_tag_protocol(ds, port) == - DSA_TAG_PROTO_NONE); struct b53_device *dev = ds->priv; + bool tag_en = !(dev->tag_protocol == DSA_TAG_PROTO_NONE); u8 hdr_ctl, val; u16 reg; @@ -1876,7 +1875,8 @@ static bool b53_can_enable_brcm_tags(struct dsa_switch *ds, int port) return ret; } -enum dsa_tag_protocol b53_get_tag_protocol(struct dsa_switch *ds, int port) +enum dsa_tag_protocol b53_get_tag_protocol(struct dsa_switch *ds, int port, + enum dsa_tag_protocol mprot) { struct b53_device *dev = ds->priv; @@ -1886,16 +1886,22 @@ enum dsa_tag_protocol b53_get_tag_protocol(struct dsa_switch *ds, int port) * misses on multicast addresses (TBD). 
*/ if (is5325(dev) || is5365(dev) || is539x(dev) || is531x5(dev) || - !b53_can_enable_brcm_tags(ds, port)) - return DSA_TAG_PROTO_NONE; + !b53_can_enable_brcm_tags(ds, port)) { + dev->tag_protocol = DSA_TAG_PROTO_NONE; + goto out; + } /* Broadcom BCM58xx chips have a flow accelerator on Port 8 * which requires us to use the prepended Broadcom tag type */ - if (dev->chip_id == BCM58XX_DEVICE_ID && port == B53_CPU_PORT) - return DSA_TAG_PROTO_BRCM_PREPEND; + if (dev->chip_id == BCM58XX_DEVICE_ID && port == B53_CPU_PORT) { + dev->tag_protocol = DSA_TAG_PROTO_BRCM_PREPEND; + goto out; + } - return DSA_TAG_PROTO_BRCM; + dev->tag_protocol = DSA_TAG_PROTO_BRCM; +out: + return dev->tag_protocol; } EXPORT_SYMBOL(b53_get_tag_protocol); diff --git a/drivers/net/dsa/b53/b53_priv.h b/drivers/net/dsa/b53/b53_priv.h index 1877acf05081..3c30f3a7eb29 100644 --- a/drivers/net/dsa/b53/b53_priv.h +++ b/drivers/net/dsa/b53/b53_priv.h @@ -118,6 +118,7 @@ struct b53_device { u8 jumbo_size_reg; int reset_gpio; u8 num_arl_entries; + enum dsa_tag_protocol tag_protocol; /* used ports mask */ u16 enabled_ports; @@ -359,7 +360,8 @@ int b53_mdb_del(struct dsa_switch *ds, int port, const struct switchdev_obj_port_mdb *mdb); int b53_mirror_add(struct dsa_switch *ds, int port, struct dsa_mall_mirror_tc_entry *mirror, bool ingress); -enum dsa_tag_protocol b53_get_tag_protocol(struct dsa_switch *ds, int port); +enum dsa_tag_protocol b53_get_tag_protocol(struct dsa_switch *ds, int port, + enum dsa_tag_protocol mprot); void b53_mirror_del(struct dsa_switch *ds, int port, struct dsa_mall_mirror_tc_entry *mirror); int b53_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy); diff --git a/drivers/net/dsa/dsa_loop.c b/drivers/net/dsa/dsa_loop.c index c8d7ef27fd72..fdcb70b9f0e4 100644 --- a/drivers/net/dsa/dsa_loop.c +++ b/drivers/net/dsa/dsa_loop.c @@ -61,7 +61,8 @@ struct dsa_loop_priv { static struct phy_device *phydevs[PHY_MAX_ADDR]; static enum dsa_tag_protocol dsa_loop_get_protocol(struct dsa_switch *ds, - int port) + int port, + enum dsa_tag_protocol mp) { dev_dbg(ds->dev, "%s: port: %d\n", __func__, port); diff --git a/drivers/net/dsa/lan9303-core.c b/drivers/net/dsa/lan9303-core.c index e3c333a8f45d..cc17a44dd3a8 100644 --- a/drivers/net/dsa/lan9303-core.c +++ b/drivers/net/dsa/lan9303-core.c @@ -883,7 +883,8 @@ static int lan9303_check_device(struct lan9303 *chip) /* ---------------------------- DSA -----------------------------------*/ static enum dsa_tag_protocol lan9303_get_tag_protocol(struct dsa_switch *ds, - int port) + int port, + enum dsa_tag_protocol mp) { return DSA_TAG_PROTO_LAN9303; } diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c index 955324968b74..0369c22fe3e1 100644 --- a/drivers/net/dsa/lantiq_gswip.c +++ b/drivers/net/dsa/lantiq_gswip.c @@ -841,7 +841,8 @@ static int gswip_setup(struct dsa_switch *ds) } static enum dsa_tag_protocol gswip_get_tag_protocol(struct dsa_switch *ds, - int port) + int port, + enum dsa_tag_protocol mp) { return DSA_TAG_PROTO_GSWIP; } diff --git a/drivers/net/dsa/microchip/ksz8795.c b/drivers/net/dsa/microchip/ksz8795.c index 24a5e99f7fd5..47d65b77caf7 100644 --- a/drivers/net/dsa/microchip/ksz8795.c +++ b/drivers/net/dsa/microchip/ksz8795.c @@ -645,7 +645,8 @@ static void ksz8795_w_phy(struct ksz_device *dev, u16 phy, u16 reg, u16 val) } static enum dsa_tag_protocol ksz8795_get_tag_protocol(struct dsa_switch *ds, - int port) + int port, + enum dsa_tag_protocol mp) { return DSA_TAG_PROTO_KSZ8795; } diff --git 
a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c index 50ffc63d6231..9a51b8a4de5d 100644 --- a/drivers/net/dsa/microchip/ksz9477.c +++ b/drivers/net/dsa/microchip/ksz9477.c @@ -295,7 +295,8 @@ static void ksz9477_port_init_cnt(struct ksz_device *dev, int port) } static enum dsa_tag_protocol ksz9477_get_tag_protocol(struct dsa_switch *ds, - int port) + int port, + enum dsa_tag_protocol mp) { enum dsa_tag_protocol proto = DSA_TAG_PROTO_KSZ9477; struct ksz_device *dev = ds->priv; diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c index ed1ec10ec62b..022466ca1c19 100644 --- a/drivers/net/dsa/mt7530.c +++ b/drivers/net/dsa/mt7530.c @@ -1223,7 +1223,8 @@ mt7530_port_vlan_del(struct dsa_switch *ds, int port, } static enum dsa_tag_protocol -mtk_get_tag_protocol(struct dsa_switch *ds, int port) +mtk_get_tag_protocol(struct dsa_switch *ds, int port, + enum dsa_tag_protocol mp) { struct mt7530_priv *priv = ds->priv; diff --git a/drivers/net/dsa/mv88e6060.c b/drivers/net/dsa/mv88e6060.c index a5a37f47b320..24b8219fd607 100644 --- a/drivers/net/dsa/mv88e6060.c +++ b/drivers/net/dsa/mv88e6060.c @@ -43,7 +43,8 @@ static const char *mv88e6060_get_name(struct mii_bus *bus, int sw_addr) } static enum dsa_tag_protocol mv88e6060_get_tag_protocol(struct dsa_switch *ds, - int port) + int port, + enum dsa_tag_protocol m) { return DSA_TAG_PROTO_TRAILER; } diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c index 99816ca9e5e4..04ef4d00f134 100644 --- a/drivers/net/dsa/mv88e6xxx/chip.c +++ b/drivers/net/dsa/mv88e6xxx/chip.c @@ -5217,7 +5217,8 @@ static struct mv88e6xxx_chip *mv88e6xxx_alloc_chip(struct device *dev) } static enum dsa_tag_protocol mv88e6xxx_get_tag_protocol(struct dsa_switch *ds, - int port) + int port, + enum dsa_tag_protocol m) { struct mv88e6xxx_chip *chip = ds->priv; diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c index f072dd75cea2..feccb6201660 100644 --- a/drivers/net/dsa/ocelot/felix.c +++ b/drivers/net/dsa/ocelot/felix.c @@ -16,7 +16,8 @@ #include "felix.h" static enum dsa_tag_protocol felix_get_tag_protocol(struct dsa_switch *ds, - int port) + int port, + enum dsa_tag_protocol mp) { return DSA_TAG_PROTO_OCELOT; } diff --git a/drivers/net/dsa/qca/ar9331.c b/drivers/net/dsa/qca/ar9331.c index da3bece75e21..de25f99e995a 100644 --- a/drivers/net/dsa/qca/ar9331.c +++ b/drivers/net/dsa/qca/ar9331.c @@ -347,7 +347,8 @@ static void ar9331_sw_port_disable(struct dsa_switch *ds, int port) } static enum dsa_tag_protocol ar9331_sw_get_tag_protocol(struct dsa_switch *ds, - int port) + int port, + enum dsa_tag_protocol m) { return DSA_TAG_PROTO_AR9331; } diff --git a/drivers/net/dsa/qca8k.c b/drivers/net/dsa/qca8k.c index e548289df31e..9f4205b4439b 100644 --- a/drivers/net/dsa/qca8k.c +++ b/drivers/net/dsa/qca8k.c @@ -1017,7 +1017,8 @@ qca8k_port_fdb_dump(struct dsa_switch *ds, int port, } static enum dsa_tag_protocol -qca8k_get_tag_protocol(struct dsa_switch *ds, int port) +qca8k_get_tag_protocol(struct dsa_switch *ds, int port, + enum dsa_tag_protocol mp) { return DSA_TAG_PROTO_QCA; } diff --git a/drivers/net/dsa/rtl8366rb.c b/drivers/net/dsa/rtl8366rb.c index f5cc8b0a7c74..fd1977590cb4 100644 --- a/drivers/net/dsa/rtl8366rb.c +++ b/drivers/net/dsa/rtl8366rb.c @@ -964,7 +964,8 @@ static int rtl8366rb_setup(struct dsa_switch *ds) } static enum dsa_tag_protocol rtl8366_get_tag_protocol(struct dsa_switch *ds, - int port) + int port, + enum dsa_tag_protocol mp) { /* For now, the RTL switches are handled 
without any custom tags. * diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c index 61795833c8f5..784e6b8166a0 100644 --- a/drivers/net/dsa/sja1105/sja1105_main.c +++ b/drivers/net/dsa/sja1105/sja1105_main.c @@ -1534,7 +1534,8 @@ static int sja1105_setup_8021q_tagging(struct dsa_switch *ds, bool enabled) } static enum dsa_tag_protocol -sja1105_get_tag_protocol(struct dsa_switch *ds, int port) +sja1105_get_tag_protocol(struct dsa_switch *ds, int port, + enum dsa_tag_protocol mp) { return DSA_TAG_PROTO_SJA1105; } diff --git a/drivers/net/dsa/vitesse-vsc73xx-core.c b/drivers/net/dsa/vitesse-vsc73xx-core.c index 69fc0110ce04..6e21a2a5cf01 100644 --- a/drivers/net/dsa/vitesse-vsc73xx-core.c +++ b/drivers/net/dsa/vitesse-vsc73xx-core.c @@ -542,7 +542,8 @@ static int vsc73xx_phy_write(struct dsa_switch *ds, int phy, int regnum, } static enum dsa_tag_protocol vsc73xx_get_tag_protocol(struct dsa_switch *ds, - int port) + int port, + enum dsa_tag_protocol mp) { /* The switch internally uses a 8 byte header with length, * source port, tag, LPA and priority. This is supposedly diff --git a/include/net/dsa.h b/include/net/dsa.h index 0c39fed8cd99..63495e3443ac 100644 --- a/include/net/dsa.h +++ b/include/net/dsa.h @@ -380,7 +380,8 @@ typedef int dsa_fdb_dump_cb_t(const unsigned char *addr, u16 vid, bool is_static, void *data); struct dsa_switch_ops { enum dsa_tag_protocol (*get_tag_protocol)(struct dsa_switch *ds, - int port); + int port, + enum dsa_tag_protocol mprot); int (*setup)(struct dsa_switch *ds); void (*teardown)(struct dsa_switch *ds); diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c index c66abbed4daf..c6d81f2baf4e 100644 --- a/net/dsa/dsa2.c +++ b/net/dsa/dsa2.c @@ -614,6 +614,32 @@ static int dsa_port_parse_dsa(struct dsa_port *dp) return 0; } +static enum dsa_tag_protocol dsa_get_tag_protocol(struct dsa_port *dp, + struct net_device *master) +{ + enum dsa_tag_protocol tag_protocol = DSA_TAG_PROTO_NONE; + struct dsa_switch *mds, *ds = dp->ds; + unsigned int mdp_upstream; + struct dsa_port *mdp; + + /* It is possible to stack DSA switches onto one another when that + * happens the switch driver may want to know if its tagging protocol + * is going to work in such a configuration. + */ + if (dsa_slave_dev_check(master)) { + mdp = dsa_slave_to_port(master); + mds = mdp->ds; + mdp_upstream = dsa_upstream_port(mds, mdp->index); + tag_protocol = mds->ops->get_tag_protocol(mds, mdp_upstream, + DSA_TAG_PROTO_NONE); + } + + /* If the master device is not itself a DSA slave in a disjoint DSA + * tree, then return immediately. 
+ */ + return ds->ops->get_tag_protocol(ds, dp->index, tag_protocol); +} + static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *master) { struct dsa_switch *ds = dp->ds;
@@ -621,20 +647,21 @@ static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *master) const struct dsa_device_ops *tag_ops; enum dsa_tag_protocol tag_protocol; - tag_protocol = ds->ops->get_tag_protocol(ds, dp->index); + tag_protocol = dsa_get_tag_protocol(dp, master); tag_ops = dsa_tag_driver_get(tag_protocol); if (IS_ERR(tag_ops)) { if (PTR_ERR(tag_ops) == -ENOPROTOOPT) return -EPROBE_DEFER; dev_warn(ds->dev, "No tagger for this switch\n"); + dp->master = NULL; return PTR_ERR(tag_ops); } + dp->master = master; dp->type = DSA_PORT_TYPE_CPU; dp->filter = tag_ops->filter; dp->rcv = tag_ops->rcv; dp->tag_ops = tag_ops; - dp->master = master; dp->dst = dst; return 0;

diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h
index 8a162605b861..a7662e7a691d 100644
--- a/net/dsa/dsa_priv.h
+++ b/net/dsa/dsa_priv.h
@@ -157,6 +157,7 @@ extern const struct dsa_device_ops notag_netdev_ops; void dsa_slave_mii_bus_init(struct dsa_switch *ds); int dsa_slave_create(struct dsa_port *dp); void dsa_slave_destroy(struct net_device *slave_dev); +bool dsa_slave_dev_check(const struct net_device *dev); int dsa_slave_suspend(struct net_device *slave_dev); int dsa_slave_resume(struct net_device *slave_dev); int dsa_slave_register_notifier(void);

diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index c1828bdc79dc..088c886e609e 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -22,8 +22,6 @@ #include "dsa_priv.h" -static bool dsa_slave_dev_check(const struct net_device *dev); - /* slave mii_bus handling ***************************************************/ static int dsa_slave_phy_read(struct mii_bus *bus, int addr, int reg) {
@@ -1473,7 +1471,7 @@ void dsa_slave_destroy(struct net_device *slave_dev) free_netdev(slave_dev); } -static bool dsa_slave_dev_check(const struct net_device *dev) +bool dsa_slave_dev_check(const struct net_device *dev) { return dev->netdev_ops == &dsa_slave_netdev_ops; }

From 27ae7997a66174cb8afd6a75b3989f5e0c1b9e5a Mon Sep 17 00:00:00 2001
From: Martin KaFai Lau
Date: Wed, 8 Jan 2020 16:35:03 -0800
Subject: bpf: Introduce BPF_PROG_TYPE_STRUCT_OPS

This patch allows the kernel's struct ops (i.e. func ptr) to be implemented in BPF. The first use case in this series is the "struct tcp_congestion_ops" which will be introduced in a later patch.

This patch introduces a new prog type BPF_PROG_TYPE_STRUCT_OPS. The BPF_PROG_TYPE_STRUCT_OPS prog is verified against a particular func ptr of a kernel struct. The attr->attach_btf_id is the btf id of a kernel struct. The attr->expected_attach_type is the member "index" of that kernel struct. The first member of a struct starts with member index 0. That will avoid ambiguity when a kernel struct has multiple func ptrs with the same func signature.

For example, a BPF_PROG_TYPE_STRUCT_OPS prog is written to implement the "init" func ptr of the "struct tcp_congestion_ops". The attr->attach_btf_id is the btf id of the "struct tcp_congestion_ops" of the _running_ kernel. The attr->expected_attach_type is 3.

The ctx of BPF_PROG_TYPE_STRUCT_OPS is an array of u64 args saved by arch_prepare_bpf_trampoline, which will be done in the next patch when introducing BPF_MAP_TYPE_STRUCT_OPS.

"struct bpf_struct_ops" is introduced as a common interface for the kernel struct that supports BPF_PROG_TYPE_STRUCT_OPS prog.
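As an illustration of this interface, a supporting subsystem would provide roughly the following for a hypothetical "struct my_ops" (a sketch only; the first real user, tcp_congestion_ops, arrives in a later patch):

#include <linux/bpf.h>
#include <linux/btf.h>

static int bpf_my_ops_init(struct btf *btf)
{
	/* one-time, btf-dependent initialization for the subsystem */
	return 0;
}

static int bpf_my_ops_check_member(const struct btf_type *t,
				   const struct btf_member *member)
{
	/* reject members that must not be implemented in BPF */
	return 0;
}

static const struct bpf_verifier_ops bpf_my_ops_verifier_ops = {
	/* .get_func_proto, .is_valid_access, .btf_struct_access, ... */
};

/* bpf_struct_ops_init() looks "struct my_ops" up in btf_vmlinux by
 * st_ops->name and then fills in type, type_id and func_models.
 */
struct bpf_struct_ops bpf_my_ops = {
	.verifier_ops = &bpf_my_ops_verifier_ops,
	.init = bpf_my_ops_init,
	.check_member = bpf_my_ops_check_member,
	.name = "my_ops",
};

and would list itself with a BPF_STRUCT_OPS_TYPE(my_ops) entry in bpf_struct_ops_types.h so that bpf_struct_ops.c picks it up.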
The supporting kernel struct will need to implement an instance of the "struct bpf_struct_ops". The supporting kernel struct also needs to implement a bpf_verifier_ops. During BPF_PROG_LOAD, bpf_struct_ops_find() will find the right bpf_verifier_ops by searching the attr->attach_btf_id.

A new "btf_struct_access" is also added to the bpf_verifier_ops such that the supporting kernel struct can optionally provide its own specific check on accessing the func arg (e.g. provide limited write access).

After btf_vmlinux is parsed, the new bpf_struct_ops_init() is called to initialize some values (e.g. the btf id of the supporting kernel struct) and it can only be done once the btf_vmlinux is available.

The R0 check at BPF_EXIT is excluded for the BPF_PROG_TYPE_STRUCT_OPS prog if the return type of the prog->aux->attach_func_proto is "void".

Signed-off-by: Martin KaFai Lau
Signed-off-by: Alexei Starovoitov
Acked-by: Andrii Nakryiko
Acked-by: Yonghong Song
Link: https://lore.kernel.org/bpf/20200109003503.3855825-1-kafai@fb.com
---
 include/linux/bpf.h | 30 +++++++++ include/linux/bpf_types.h | 4 ++ include/linux/btf.h | 34 ++++++++++ include/uapi/linux/bpf.h | 1 + kernel/bpf/Makefile | 3 + kernel/bpf/bpf_struct_ops.c | 121 ++++++++++++++++++++++++++++++++++ kernel/bpf/bpf_struct_ops_types.h | 4 ++ kernel/bpf/btf.c | 88 +++++++++++++++++-------- kernel/bpf/syscall.c | 17 +++-- kernel/bpf/verifier.c | 134 +++++++++++++++++++++++++++++--------- 10 files changed, 373 insertions(+), 63 deletions(-) create mode 100644 kernel/bpf/bpf_struct_ops.c create mode 100644 kernel/bpf/bpf_struct_ops_types.h

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index b14e51d56a82..50f3b20ae284 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -349,6 +349,10 @@ struct bpf_verifier_ops { const struct bpf_insn *src, struct bpf_insn *dst, struct bpf_prog *prog, u32 *target_size); + int (*btf_struct_access)(struct bpf_verifier_log *log, + const struct btf_type *t, int off, int size, + enum bpf_access_type atype, + u32 *next_btf_id); }; struct bpf_prog_offload_ops {
@@ -668,6 +672,32 @@ struct bpf_array_aux { struct work_struct work; }; +struct btf_type; +struct btf_member; + +#define BPF_STRUCT_OPS_MAX_NR_MEMBERS 64 +struct bpf_struct_ops { + const struct bpf_verifier_ops *verifier_ops; + int (*init)(struct btf *btf); + int (*check_member)(const struct btf_type *t, + const struct btf_member *member); + const struct btf_type *type; + const char *name; + struct btf_func_model func_models[BPF_STRUCT_OPS_MAX_NR_MEMBERS]; + u32 type_id; +}; + +#if defined(CONFIG_BPF_JIT) && defined(CONFIG_BPF_SYSCALL) +const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id); +void bpf_struct_ops_init(struct btf *btf); +#else +static inline const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id) +{ + return NULL; +} +static inline void bpf_struct_ops_init(struct btf *btf) { } +#endif + struct bpf_array { struct bpf_map map; u32 elem_size;

diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
index 93740b3614d7..fadd243ffa2d 100644
--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -65,6 +65,10 @@ BPF_PROG_TYPE(BPF_PROG_TYPE_LIRC_MODE2, lirc_mode2, BPF_PROG_TYPE(BPF_PROG_TYPE_SK_REUSEPORT, sk_reuseport, struct sk_reuseport_md, struct sk_reuseport_kern) #endif +#if defined(CONFIG_BPF_JIT) +BPF_PROG_TYPE(BPF_PROG_TYPE_STRUCT_OPS, bpf_struct_ops, + void *, void *) +#endif BPF_MAP_TYPE(BPF_MAP_TYPE_ARRAY, array_map_ops) BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_ARRAY,
percpu_array_map_ops) diff --git a/include/linux/btf.h b/include/linux/btf.h index 79d4abc2556a..f74a09a7120b 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -53,6 +53,18 @@ bool btf_member_is_reg_int(const struct btf *btf, const struct btf_type *s, u32 expected_offset, u32 expected_size); int btf_find_spin_lock(const struct btf *btf, const struct btf_type *t); bool btf_type_is_void(const struct btf_type *t); +s32 btf_find_by_name_kind(const struct btf *btf, const char *name, u8 kind); +const struct btf_type *btf_type_skip_modifiers(const struct btf *btf, + u32 id, u32 *res_id); +const struct btf_type *btf_type_resolve_ptr(const struct btf *btf, + u32 id, u32 *res_id); +const struct btf_type *btf_type_resolve_func_ptr(const struct btf *btf, + u32 id, u32 *res_id); + +#define for_each_member(i, struct_type, member) \ + for (i = 0, member = btf_type_member(struct_type); \ + i < btf_type_vlen(struct_type); \ + i++, member++) static inline bool btf_type_is_ptr(const struct btf_type *t) { @@ -84,6 +96,28 @@ static inline bool btf_type_is_func_proto(const struct btf_type *t) return BTF_INFO_KIND(t->info) == BTF_KIND_FUNC_PROTO; } +static inline u16 btf_type_vlen(const struct btf_type *t) +{ + return BTF_INFO_VLEN(t->info); +} + +static inline bool btf_type_kflag(const struct btf_type *t) +{ + return BTF_INFO_KFLAG(t->info); +} + +static inline u32 btf_member_bitfield_size(const struct btf_type *struct_type, + const struct btf_member *member) +{ + return btf_type_kflag(struct_type) ? BTF_MEMBER_BITFIELD_SIZE(member->offset) + : 0; +} + +static inline const struct btf_member *btf_type_member(const struct btf_type *t) +{ + return (const struct btf_member *)(t + 1); +} + #ifdef CONFIG_BPF_SYSCALL const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id); const char *btf_name_by_offset(const struct btf *btf, u32 offset); diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 7df436da542d..c1eeb3e0e116 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -174,6 +174,7 @@ enum bpf_prog_type { BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE, BPF_PROG_TYPE_CGROUP_SOCKOPT, BPF_PROG_TYPE_TRACING, + BPF_PROG_TYPE_STRUCT_OPS, }; enum bpf_attach_type { diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile index d4f330351f87..046ce5d98033 100644 --- a/kernel/bpf/Makefile +++ b/kernel/bpf/Makefile @@ -27,3 +27,6 @@ endif ifeq ($(CONFIG_SYSFS),y) obj-$(CONFIG_DEBUG_INFO_BTF) += sysfs_btf.o endif +ifeq ($(CONFIG_BPF_JIT),y) +obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o +endif diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c new file mode 100644 index 000000000000..2ea68fe34c33 --- /dev/null +++ b/kernel/bpf/bpf_struct_ops.c @@ -0,0 +1,121 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2019 Facebook */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#define BPF_STRUCT_OPS_TYPE(_name) \ +extern struct bpf_struct_ops bpf_##_name; +#include "bpf_struct_ops_types.h" +#undef BPF_STRUCT_OPS_TYPE + +enum { +#define BPF_STRUCT_OPS_TYPE(_name) BPF_STRUCT_OPS_TYPE_##_name, +#include "bpf_struct_ops_types.h" +#undef BPF_STRUCT_OPS_TYPE + __NR_BPF_STRUCT_OPS_TYPE, +}; + +static struct bpf_struct_ops * const bpf_struct_ops[] = { +#define BPF_STRUCT_OPS_TYPE(_name) \ + [BPF_STRUCT_OPS_TYPE_##_name] = &bpf_##_name, +#include "bpf_struct_ops_types.h" +#undef BPF_STRUCT_OPS_TYPE +}; + +const struct bpf_verifier_ops bpf_struct_ops_verifier_ops = { +}; + +const struct bpf_prog_ops 
bpf_struct_ops_prog_ops = { +}; + +void bpf_struct_ops_init(struct btf *btf) +{ + const struct btf_member *member; + struct bpf_struct_ops *st_ops; + struct bpf_verifier_log log = {}; + const struct btf_type *t; + const char *mname; + s32 type_id; + u32 i, j; + + for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) { + st_ops = bpf_struct_ops[i]; + + type_id = btf_find_by_name_kind(btf, st_ops->name, + BTF_KIND_STRUCT); + if (type_id < 0) { + pr_warn("Cannot find struct %s in btf_vmlinux\n", + st_ops->name); + continue; + } + t = btf_type_by_id(btf, type_id); + if (btf_type_vlen(t) > BPF_STRUCT_OPS_MAX_NR_MEMBERS) { + pr_warn("Cannot support #%u members in struct %s\n", + btf_type_vlen(t), st_ops->name); + continue; + } + + for_each_member(j, t, member) { + const struct btf_type *func_proto; + + mname = btf_name_by_offset(btf, member->name_off); + if (!*mname) { + pr_warn("anon member in struct %s is not supported\n", + st_ops->name); + break; + } + + if (btf_member_bitfield_size(t, member)) { + pr_warn("bit field member %s in struct %s is not supported\n", + mname, st_ops->name); + break; + } + + func_proto = btf_type_resolve_func_ptr(btf, + member->type, + NULL); + if (func_proto && + btf_distill_func_proto(&log, btf, + func_proto, mname, + &st_ops->func_models[j])) { + pr_warn("Error in parsing func ptr %s in struct %s\n", + mname, st_ops->name); + break; + } + } + + if (j == btf_type_vlen(t)) { + if (st_ops->init(btf)) { + pr_warn("Error in init bpf_struct_ops %s\n", + st_ops->name); + } else { + st_ops->type_id = type_id; + st_ops->type = t; + } + } + } +} + +extern struct btf *btf_vmlinux; + +const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id) +{ + unsigned int i; + + if (!type_id || !btf_vmlinux) + return NULL; + + for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) { + if (bpf_struct_ops[i]->type_id == type_id) + return bpf_struct_ops[i]; + } + + return NULL; +} diff --git a/kernel/bpf/bpf_struct_ops_types.h b/kernel/bpf/bpf_struct_ops_types.h new file mode 100644 index 000000000000..7bb13ff49ec2 --- /dev/null +++ b/kernel/bpf/bpf_struct_ops_types.h @@ -0,0 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* internal file - do not include directly */ + +/* To be filled in a later patch */ diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 48bbde2e1c1e..12af4a1bb1a4 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -180,11 +180,6 @@ */ #define BTF_MAX_SIZE (16 * 1024 * 1024) -#define for_each_member(i, struct_type, member) \ - for (i = 0, member = btf_type_member(struct_type); \ - i < btf_type_vlen(struct_type); \ - i++, member++) - #define for_each_member_from(i, from, struct_type, member) \ for (i = from, member = btf_type_member(struct_type) + from; \ i < btf_type_vlen(struct_type); \ @@ -382,6 +377,65 @@ static bool btf_type_is_datasec(const struct btf_type *t) return BTF_INFO_KIND(t->info) == BTF_KIND_DATASEC; } +s32 btf_find_by_name_kind(const struct btf *btf, const char *name, u8 kind) +{ + const struct btf_type *t; + const char *tname; + u32 i; + + for (i = 1; i <= btf->nr_types; i++) { + t = btf->types[i]; + if (BTF_INFO_KIND(t->info) != kind) + continue; + + tname = btf_name_by_offset(btf, t->name_off); + if (!strcmp(tname, name)) + return i; + } + + return -ENOENT; +} + +const struct btf_type *btf_type_skip_modifiers(const struct btf *btf, + u32 id, u32 *res_id) +{ + const struct btf_type *t = btf_type_by_id(btf, id); + + while (btf_type_is_modifier(t)) { + id = t->type; + t = btf_type_by_id(btf, t->type); + } + + if (res_id) + *res_id = id; + + return t; +} 
+ +const struct btf_type *btf_type_resolve_ptr(const struct btf *btf, + u32 id, u32 *res_id) +{ + const struct btf_type *t; + + t = btf_type_skip_modifiers(btf, id, NULL); + if (!btf_type_is_ptr(t)) + return NULL; + + return btf_type_skip_modifiers(btf, t->type, res_id); +} + +const struct btf_type *btf_type_resolve_func_ptr(const struct btf *btf, + u32 id, u32 *res_id) +{ + const struct btf_type *ptype; + + ptype = btf_type_resolve_ptr(btf, id, res_id); + if (ptype && btf_type_is_func_proto(ptype)) + return ptype; + + return NULL; +} + /* Types that act only as a source, not sink or intermediate * type when resolving. */ @@ -446,16 +500,6 @@ static const char *btf_int_encoding_str(u8 encoding) return "UNKN"; } -static u16 btf_type_vlen(const struct btf_type *t) -{ - return BTF_INFO_VLEN(t->info); -} - -static bool btf_type_kflag(const struct btf_type *t) -{ - return BTF_INFO_KFLAG(t->info); -} - static u32 btf_member_bit_offset(const struct btf_type *struct_type, const struct btf_member *member) { @@ -463,13 +507,6 @@ static u32 btf_member_bit_offset(const struct btf_type *struct_type, : member->offset; } -static u32 btf_member_bitfield_size(const struct btf_type *struct_type, - const struct btf_member *member) -{ - return btf_type_kflag(struct_type) ? BTF_MEMBER_BITFIELD_SIZE(member->offset) - : 0; -} - static u32 btf_type_int(const struct btf_type *t) { return *(u32 *)(t + 1); @@ -480,11 +517,6 @@ static const struct btf_array *btf_type_array(const struct btf_type *t) return (const struct btf_array *)(t + 1); } -static const struct btf_member *btf_type_member(const struct btf_type *t) -{ - return (const struct btf_member *)(t + 1); -} - static const struct btf_enum *btf_type_enum(const struct btf_type *t) { return (const struct btf_enum *)(t + 1); @@ -3605,6 +3637,8 @@ struct btf *btf_parse_vmlinux(void) goto errout; } + bpf_struct_ops_init(btf); + btf_verifier_env_free(env); refcount_set(&btf->refcnt, 1); return btf; diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 81ee8595dfee..03a02ef4c496 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -1672,17 +1672,22 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type, enum bpf_attach_type expected_attach_type, u32 btf_id, u32 prog_fd) { - switch (prog_type) { - case BPF_PROG_TYPE_TRACING: + if (btf_id) { if (btf_id > BTF_MAX_TYPE) return -EINVAL; - break; - default: - if (btf_id || prog_fd) + + switch (prog_type) { + case BPF_PROG_TYPE_TRACING: + case BPF_PROG_TYPE_STRUCT_OPS: + break; + default: return -EINVAL; - break; + } } + if (prog_fd && prog_type != BPF_PROG_TYPE_TRACING) + return -EINVAL; + switch (prog_type) { case BPF_PROG_TYPE_CGROUP_SOCK: switch (expected_attach_type) { diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index d433d70022fd..586ed3c94b80 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -2859,11 +2859,6 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env, u32 btf_id; int ret; - if (atype != BPF_READ) { - verbose(env, "only read is supported\n"); - return -EACCES; - } - if (off < 0) { verbose(env, "R%d is ptr_%s invalid negative access: off=%d\n", @@ -2880,17 +2875,32 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env, return -EACCES; } - ret = btf_struct_access(&env->log, t, off, size, atype, &btf_id); + if (env->ops->btf_struct_access) { + ret = env->ops->btf_struct_access(&env->log, t, off, size, + atype, &btf_id); + } else { + if (atype != BPF_READ) { + verbose(env, "only read is supported\n"); + return -EACCES; + } + + ret = 
btf_struct_access(&env->log, t, off, size, atype, + &btf_id); + } + if (ret < 0) return ret; - if (ret == SCALAR_VALUE) { - mark_reg_unknown(env, regs, value_regno); - return 0; + if (atype == BPF_READ) { + if (ret == SCALAR_VALUE) { + mark_reg_unknown(env, regs, value_regno); + return 0; + } + mark_reg_known_zero(env, regs, value_regno); + regs[value_regno].type = PTR_TO_BTF_ID; + regs[value_regno].btf_id = btf_id; } - mark_reg_known_zero(env, regs, value_regno); - regs[value_regno].type = PTR_TO_BTF_ID; - regs[value_regno].btf_id = btf_id; + return 0; } @@ -6349,8 +6359,30 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn) static int check_return_code(struct bpf_verifier_env *env) { struct tnum enforce_attach_type_range = tnum_unknown; + const struct bpf_prog *prog = env->prog; struct bpf_reg_state *reg; struct tnum range = tnum_range(0, 1); + int err; + + /* The struct_ops func-ptr's return type could be "void" */ + if (env->prog->type == BPF_PROG_TYPE_STRUCT_OPS && + !prog->aux->attach_func_proto->type) + return 0; + + /* eBPF calling convetion is such that R0 is used + * to return the value from eBPF program. + * Make sure that it's readable at this time + * of bpf_exit, which means that program wrote + * something into it earlier + */ + err = check_reg_arg(env, BPF_REG_0, SRC_OP); + if (err) + return err; + + if (is_pointer_value(env, BPF_REG_0)) { + verbose(env, "R0 leaks addr as return value\n"); + return -EACCES; + } switch (env->prog->type) { case BPF_PROG_TYPE_CGROUP_SOCK_ADDR: @@ -8016,21 +8048,6 @@ static int do_check(struct bpf_verifier_env *env) if (err) return err; - /* eBPF calling convetion is such that R0 is used - * to return the value from eBPF program. - * Make sure that it's readable at this time - * of bpf_exit, which means that program wrote - * something into it earlier - */ - err = check_reg_arg(env, BPF_REG_0, SRC_OP); - if (err) - return err; - - if (is_pointer_value(env, BPF_REG_0)) { - verbose(env, "R0 leaks addr as return value\n"); - return -EACCES; - } - err = check_return_code(env); if (err) return err; @@ -8829,12 +8846,14 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env) convert_ctx_access = bpf_xdp_sock_convert_ctx_access; break; case PTR_TO_BTF_ID: - if (type == BPF_WRITE) { + if (type == BPF_READ) { + insn->code = BPF_LDX | BPF_PROBE_MEM | + BPF_SIZE((insn)->code); + env->prog->aux->num_exentries++; + } else if (env->prog->type != BPF_PROG_TYPE_STRUCT_OPS) { verbose(env, "Writes through BTF pointers are not allowed\n"); return -EINVAL; } - insn->code = BPF_LDX | BPF_PROBE_MEM | BPF_SIZE((insn)->code); - env->prog->aux->num_exentries++; continue; default: continue; @@ -9502,6 +9521,58 @@ static void print_verification_stats(struct bpf_verifier_env *env) env->peak_states, env->longest_mark_read_walk); } +static int check_struct_ops_btf_id(struct bpf_verifier_env *env) +{ + const struct btf_type *t, *func_proto; + const struct bpf_struct_ops *st_ops; + const struct btf_member *member; + struct bpf_prog *prog = env->prog; + u32 btf_id, member_idx; + const char *mname; + + btf_id = prog->aux->attach_btf_id; + st_ops = bpf_struct_ops_find(btf_id); + if (!st_ops) { + verbose(env, "attach_btf_id %u is not a supported struct\n", + btf_id); + return -ENOTSUPP; + } + + t = st_ops->type; + member_idx = prog->expected_attach_type; + if (member_idx >= btf_type_vlen(t)) { + verbose(env, "attach to invalid member idx %u of struct %s\n", + member_idx, st_ops->name); + return -EINVAL; + } + + member = 
&btf_type_member(t)[member_idx]; + mname = btf_name_by_offset(btf_vmlinux, member->name_off); + func_proto = btf_type_resolve_func_ptr(btf_vmlinux, member->type, + NULL); + if (!func_proto) { + verbose(env, "attach to invalid member %s(@idx %u) of struct %s\n", + mname, member_idx, st_ops->name); + return -EINVAL; + } + + if (st_ops->check_member) { + int err = st_ops->check_member(t, member); + + if (err) { + verbose(env, "attach to unsupported member %s of struct %s\n", + mname, st_ops->name); + return err; + } + } + + prog->aux->attach_func_proto = func_proto; + prog->aux->attach_func_name = mname; + env->ops = st_ops->verifier_ops; + + return 0; +} + static int check_attach_btf_id(struct bpf_verifier_env *env) { struct bpf_prog *prog = env->prog;
@@ -9517,6 +9588,9 @@ static int check_attach_btf_id(struct bpf_verifier_env *env) long addr; u64 key; + if (prog->type == BPF_PROG_TYPE_STRUCT_OPS) + return check_struct_ops_btf_id(env); + if (prog->type != BPF_PROG_TYPE_TRACING) return 0;

From 85d33df357b634649ddbe0a20fd2d0fc5732c3cb Mon Sep 17 00:00:00 2001
From: Martin KaFai Lau
Date: Wed, 8 Jan 2020 16:35:05 -0800
Subject: bpf: Introduce BPF_MAP_TYPE_STRUCT_OPS

The patch introduces BPF_MAP_TYPE_STRUCT_OPS. The map value is a kernel struct with its func ptr implemented in bpf prog. This new map is the interface to register/unregister/introspect a bpf implemented kernel struct.

The kernel struct is actually embedded inside another new struct (or called the "value" struct in the code). For example, "struct tcp_congestion_ops" is embedded in:

struct bpf_struct_ops_tcp_congestion_ops {
	refcount_t refcnt;
	enum bpf_struct_ops_state state;
	struct tcp_congestion_ops data; /* <-- kernel subsystem struct here */
}

The map value is "struct bpf_struct_ops_tcp_congestion_ops". The "bpftool map dump" will then be able to show the state ("inuse"/"tobefree") and the number of subsystem's refcnt (e.g. number of tcp_sock in the tcp_congestion_ops case). This "value" struct is created automatically by a macro. Having a separate "value" struct will also make extending "struct bpf_struct_ops_XYZ" easier (e.g. adding "void (*init)(void)" to "struct bpf_struct_ops_XYZ" to do some initialization work before registering the struct_ops to the kernel subsystem). libbpf will take care of finding and populating the "struct bpf_struct_ops_XYZ" from "struct XYZ".

Register a struct_ops to a kernel subsystem:
1. Load all needed BPF_PROG_TYPE_STRUCT_OPS prog(s).
2. Create a BPF_MAP_TYPE_STRUCT_OPS with attr->btf_vmlinux_value_type_id set to the btf id of "struct bpf_struct_ops_tcp_congestion_ops" of the running kernel. Instead of reusing the attr->btf_value_type_id, btf_vmlinux_value_type_id is added such that attr->btf_fd can still be used as the "user" btf which could store other useful sysadmin/debug info that may be introduced in the future, e.g. creation-date/compiler-details/map-creator...etc.
3. Create a "struct bpf_struct_ops_tcp_congestion_ops" object as described in the running kernel btf. Populate the value of this object. The function ptr should be populated with the prog fds.
4. Call BPF_MAP_UPDATE with the object created in (3) as the map value. The key is always "0".

During BPF_MAP_UPDATE, the code that saves the kernel-func-ptr's args as an array of u64 is generated. BPF_MAP_UPDATE also allows the specific struct_ops to do some final checks in "st_ops->init_member()" (e.g. ensure all mandatory func ptrs are implemented).
If everything looks good, it will register this kernel struct to the kernel subsystem. The map will not allow further update from this point.

Unregister a struct_ops from the kernel subsystem: BPF_MAP_DELETE with key "0".

Introspect a struct_ops: BPF_MAP_LOOKUP_ELEM with key "0". The map value returned will have the prog _id_ populated as the func ptr.

The map value state (enum bpf_struct_ops_state) will transition from: INIT (map created) => INUSE (map updated, i.e. reg) => TOBEFREE (map value deleted, i.e. unreg)

The kernel subsystem needs to call bpf_struct_ops_get() and bpf_struct_ops_put() to manage the "refcnt" in the "struct bpf_struct_ops_XYZ". This patch uses a separate refcnt for the purpose of tracking the subsystem usage. Another approach is to reuse the map->refcnt and then "show" (i.e. during map_lookup) the subsystem's usage by doing map->refcnt - map->usercnt to filter out the map-fd/pinned-map usage. However, that will also tie down the future semantics of map->refcnt and map->usercnt.

The very first subsystem's refcnt (during reg()) holds one count to map->refcnt. When the very last subsystem's refcnt is gone, it will also release the map->refcnt. All bpf_prog will be freed when the map->refcnt reaches 0 (i.e. during map_free()).

Here is how the bpftool map command output will look:

[root@arch-fb-vm1 bpf]# bpftool map show
6: struct_ops name dctcp flags 0x0 key 4B value 256B max_entries 1 memlock 4096B btf_id 6

[root@arch-fb-vm1 bpf]# bpftool map dump id 6
[{
    "value": {
        "refcnt": { "refs": { "counter": 1 } },
        "state": 1,
        "data": {
            "list": { "next": 0, "prev": 0 },
            "key": 0,
            "flags": 2,
            "init": 24,
            "release": 0,
            "ssthresh": 25,
            "cong_avoid": 30,
            "set_state": 27,
            "cwnd_event": 28,
            "in_ack_event": 26,
            "undo_cwnd": 29,
            "pkts_acked": 0,
            "min_tso_segs": 0,
            "sndbuf_expand": 0,
            "cong_control": 0,
            "get_info": 0,
            "name": [98,112,102,95,100,99,116,99,112,0,0,0,0,0,0,0],
            "owner": 0
        }
    }
}]

Misc Notes:
* bpf_struct_ops_map_sys_lookup_elem() is added for syscall lookup. It does an inplace update on "*value" instead of returning a pointer to syscall.c. Otherwise, it needs a separate copy of "zero" value for the BPF_STRUCT_OPS_STATE_INIT to avoid races.
* The bpf_struct_ops_map_delete_elem() is also called without preempt_disable() from map_delete_elem(). This is because the "->unreg()" may require a sleepable context, e.g. the "tcp_unregister_congestion_control()".
* "const" is added to some of the existing "struct btf_func_model *" function args to avoid a compiler warning caused by this patch.
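To make the register/unregister flow above concrete, here is a rough userspace-side sketch using raw bpf(2) calls (libbpf hides all of this; the filled-in value buffer, its size and the btf id of the vmlinux "value" struct are assumed to be prepared by the caller):

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static int sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr)
{
	return syscall(__NR_bpf, cmd, attr, sizeof(*attr));
}

/* "value" is e.g. a struct bpf_struct_ops_tcp_congestion_ops whose
 * func ptr members hold BPF_PROG_TYPE_STRUCT_OPS prog fds.
 */
static int register_struct_ops(int btf_fd, __u32 vmlinux_value_type_id,
			       __u32 value_size, void *value)
{
	union bpf_attr attr = {};
	__u32 key = 0;
	int map_fd;

	attr.map_type = BPF_MAP_TYPE_STRUCT_OPS;
	attr.key_size = 4;		/* 4-byte key; the only key is "0" */
	attr.value_size = value_size;	/* must match the value struct's size */
	attr.max_entries = 1;
	attr.btf_fd = btf_fd;		/* the "user" btf; must be valid */
	attr.btf_vmlinux_value_type_id = vmlinux_value_type_id;
	map_fd = sys_bpf(BPF_MAP_CREATE, &attr);
	if (map_fd < 0)
		return map_fd;

	/* this update is what ends up calling st_ops->reg() */
	memset(&attr, 0, sizeof(attr));
	attr.map_fd = map_fd;
	attr.key = (__u64)(unsigned long)&key;
	attr.value = (__u64)(unsigned long)value;
	return sys_bpf(BPF_MAP_UPDATE_ELEM, &attr);
}

A later BPF_MAP_DELETE_ELEM on the same map with key "0" is then the unregister path described above.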
Signed-off-by: Martin KaFai Lau
Signed-off-by: Alexei Starovoitov
Acked-by: Andrii Nakryiko
Acked-by: Yonghong Song
Link: https://lore.kernel.org/bpf/20200109003505.3855919-1-kafai@fb.com
---
 arch/x86/net/bpf_jit_comp.c | 18 +- include/linux/bpf.h | 49 ++++- include/linux/bpf_types.h | 3 + include/linux/btf.h | 13 ++ include/uapi/linux/bpf.h | 7 +- kernel/bpf/bpf_struct_ops.c | 511 +++++++++++++++++++++++++++++++++++++++++++- kernel/bpf/btf.c | 20 +- kernel/bpf/map_in_map.c | 3 +- kernel/bpf/syscall.c | 52 +++-- kernel/bpf/trampoline.c | 8 +- kernel/bpf/verifier.c | 5 + 11 files changed, 642 insertions(+), 47 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 4c8a2d1f8470..9ba08e9abc09 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1328,7 +1328,7 @@ emit_jmp: return proglen; } -static void save_regs(struct btf_func_model *m, u8 **prog, int nr_args, +static void save_regs(const struct btf_func_model *m, u8 **prog, int nr_args, int stack_size) { int i;
@@ -1344,7 +1344,7 @@ static void save_regs(struct btf_func_model *m, u8 **prog, int nr_args, -(stack_size - i * 8)); } -static void restore_regs(struct btf_func_model *m, u8 **prog, int nr_args, +static void restore_regs(const struct btf_func_model *m, u8 **prog, int nr_args, int stack_size) { int i;
@@ -1361,7 +1361,7 @@ static void restore_regs(struct btf_func_model *m, u8 **prog, int nr_args, -(stack_size - i * 8)); } -static int invoke_bpf(struct btf_func_model *m, u8 **pprog, +static int invoke_bpf(const struct btf_func_model *m, u8 **pprog, struct bpf_prog **progs, int prog_cnt, int stack_size) { u8 *prog = *pprog;
@@ -1456,7 +1456,8 @@ static int invoke_bpf(struct btf_func_model *m, u8 **pprog, * add rsp, 8 // skip eth_type_trans's frame * ret // return to its caller */ -int arch_prepare_bpf_trampoline(void *image, struct btf_func_model *m, u32 flags, +int arch_prepare_bpf_trampoline(void *image, void *image_end, + const struct btf_func_model *m, u32 flags, struct bpf_prog **fentry_progs, int fentry_cnt, struct bpf_prog **fexit_progs, int fexit_cnt, void *orig_call)
@@ -1523,13 +1524,10 @@ int arch_prepare_bpf_trampoline(void *image, struct btf_func_model *m, u32 flags /* skip our return address and return to parent */ EMIT4(0x48, 0x83, 0xC4, 8); /* add rsp, 8 */ EMIT1(0xC3); /* ret */ - /* One half of the page has active running trampoline. * Another half is an area for next trampoline. * Make sure the trampoline generation logic doesn't overflow.
- */ - if (WARN_ON_ONCE(prog - (u8 *)image > PAGE_SIZE / 2 - BPF_INSN_SAFETY)) + /* Make sure the trampoline generation logic doesn't overflow */ + if (WARN_ON_ONCE(prog > (u8 *)image_end - BPF_INSN_SAFETY)) return -EFAULT; - return 0; + return prog - (u8 *)image; } static int emit_cond_near_jump(u8 **pprog, void *func, void *ip, u8 jmp_cond) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 50f3b20ae284..a7bfe8a388c6 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -17,6 +17,7 @@ #include #include #include +#include struct bpf_verifier_env; struct bpf_verifier_log; @@ -106,6 +107,7 @@ struct bpf_map { struct btf *btf; struct bpf_map_memory memory; char name[BPF_OBJ_NAME_LEN]; + u32 btf_vmlinux_value_type_id; bool unpriv_array; bool frozen; /* write-once; write-protected by freeze_mutex */ /* 22 bytes hole */ @@ -183,7 +185,8 @@ static inline bool bpf_map_offload_neutral(const struct bpf_map *map) static inline bool bpf_map_support_seq_show(const struct bpf_map *map) { - return map->btf && map->ops->map_seq_show_elem; + return (map->btf_value_type_id || map->btf_vmlinux_value_type_id) && + map->ops->map_seq_show_elem; } int map_check_no_btf(const struct bpf_map *map, @@ -441,7 +444,8 @@ struct btf_func_model { * fentry = a set of program to run before calling original function * fexit = a set of program to run after original function */ -int arch_prepare_bpf_trampoline(void *image, struct btf_func_model *m, u32 flags, +int arch_prepare_bpf_trampoline(void *image, void *image_end, + const struct btf_func_model *m, u32 flags, struct bpf_prog **fentry_progs, int fentry_cnt, struct bpf_prog **fexit_progs, int fexit_cnt, void *orig_call); @@ -672,6 +676,7 @@ struct bpf_array_aux { struct work_struct work; }; +struct bpf_struct_ops_value; struct btf_type; struct btf_member; @@ -681,21 +686,61 @@ struct bpf_struct_ops { int (*init)(struct btf *btf); int (*check_member)(const struct btf_type *t, const struct btf_member *member); + int (*init_member)(const struct btf_type *t, + const struct btf_member *member, + void *kdata, const void *udata); + int (*reg)(void *kdata); + void (*unreg)(void *kdata); const struct btf_type *type; + const struct btf_type *value_type; const char *name; struct btf_func_model func_models[BPF_STRUCT_OPS_MAX_NR_MEMBERS]; u32 type_id; + u32 value_id; }; #if defined(CONFIG_BPF_JIT) && defined(CONFIG_BPF_SYSCALL) +#define BPF_MODULE_OWNER ((void *)((0xeB9FUL << 2) + POISON_POINTER_DELTA)) const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id); void bpf_struct_ops_init(struct btf *btf); +bool bpf_struct_ops_get(const void *kdata); +void bpf_struct_ops_put(const void *kdata); +int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key, + void *value); +static inline bool bpf_try_module_get(const void *data, struct module *owner) +{ + if (owner == BPF_MODULE_OWNER) + return bpf_struct_ops_get(data); + else + return try_module_get(owner); +} +static inline void bpf_module_put(const void *data, struct module *owner) +{ + if (owner == BPF_MODULE_OWNER) + bpf_struct_ops_put(data); + else + module_put(owner); +} #else static inline const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id) { return NULL; } static inline void bpf_struct_ops_init(struct btf *btf) { } +static inline bool bpf_try_module_get(const void *data, struct module *owner) +{ + return try_module_get(owner); +} +static inline void bpf_module_put(const void *data, struct module *owner) +{ + module_put(owner); +} +static inline int 
bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, + void *key, + void *value) +{ + return -EINVAL; +} #endif struct bpf_array { diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h index fadd243ffa2d..9f326e6ef885 100644 --- a/include/linux/bpf_types.h +++ b/include/linux/bpf_types.h @@ -109,3 +109,6 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_REUSEPORT_SOCKARRAY, reuseport_array_ops) #endif BPF_MAP_TYPE(BPF_MAP_TYPE_QUEUE, queue_map_ops) BPF_MAP_TYPE(BPF_MAP_TYPE_STACK, stack_map_ops) +#if defined(CONFIG_BPF_JIT) +BPF_MAP_TYPE(BPF_MAP_TYPE_STRUCT_OPS, bpf_struct_ops_map_ops) +#endif diff --git a/include/linux/btf.h b/include/linux/btf.h index f74a09a7120b..881e9b76ef49 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -7,6 +7,8 @@ #include #include +#define BTF_TYPE_EMIT(type) ((void)(type *)0) + struct btf; struct btf_member; struct btf_type; @@ -60,6 +62,10 @@ const struct btf_type *btf_type_resolve_ptr(const struct btf *btf, u32 id, u32 *res_id); const struct btf_type *btf_type_resolve_func_ptr(const struct btf *btf, u32 id, u32 *res_id); +const struct btf_type * +btf_resolve_size(const struct btf *btf, const struct btf_type *type, + u32 *type_size, const struct btf_type **elem_type, + u32 *total_nelems); #define for_each_member(i, struct_type, member) \ for (i = 0, member = btf_type_member(struct_type); \ @@ -106,6 +112,13 @@ static inline bool btf_type_kflag(const struct btf_type *t) return BTF_INFO_KFLAG(t->info); } +static inline u32 btf_member_bit_offset(const struct btf_type *struct_type, + const struct btf_member *member) +{ + return btf_type_kflag(struct_type) ? BTF_MEMBER_BIT_OFFSET(member->offset) + : member->offset; +} + static inline u32 btf_member_bitfield_size(const struct btf_type *struct_type, const struct btf_member *member) { diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index c1eeb3e0e116..38059880963e 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -136,6 +136,7 @@ enum bpf_map_type { BPF_MAP_TYPE_STACK, BPF_MAP_TYPE_SK_STORAGE, BPF_MAP_TYPE_DEVMAP_HASH, + BPF_MAP_TYPE_STRUCT_OPS, }; /* Note that tracing related programs such as @@ -398,6 +399,10 @@ union bpf_attr { __u32 btf_fd; /* fd pointing to a BTF type data */ __u32 btf_key_type_id; /* BTF type_id of the key */ __u32 btf_value_type_id; /* BTF type_id of the value */ + __u32 btf_vmlinux_value_type_id;/* BTF type_id of a kernel- + * struct stored as the + * map value + */ }; struct { /* anonymous struct used by BPF_MAP_*_ELEM commands */ @@ -3350,7 +3355,7 @@ struct bpf_map_info { __u32 map_flags; char name[BPF_OBJ_NAME_LEN]; __u32 ifindex; - __u32 :32; + __u32 btf_vmlinux_value_type_id; __u64 netns_dev; __u64 netns_ino; __u32 btf_id; diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c index 2ea68fe34c33..ddf48f49914b 100644 --- a/kernel/bpf/bpf_struct_ops.c +++ b/kernel/bpf/bpf_struct_ops.c @@ -9,9 +9,68 @@ #include #include #include +#include +enum bpf_struct_ops_state { + BPF_STRUCT_OPS_STATE_INIT, + BPF_STRUCT_OPS_STATE_INUSE, + BPF_STRUCT_OPS_STATE_TOBEFREE, +}; + +#define BPF_STRUCT_OPS_COMMON_VALUE \ + refcount_t refcnt; \ + enum bpf_struct_ops_state state + +struct bpf_struct_ops_value { + BPF_STRUCT_OPS_COMMON_VALUE; + char data[0] ____cacheline_aligned_in_smp; +}; + +struct bpf_struct_ops_map { + struct bpf_map map; + const struct bpf_struct_ops *st_ops; + /* protect map_update */ + struct mutex lock; + /* progs has all the bpf_prog that is populated + * to the func ptr of the kernel's struct + * (in kvalue.data). 
+ */ + struct bpf_prog **progs; + /* image is a page that has all the trampolines + * that stores the func args before calling the bpf_prog. + * A PAGE_SIZE "image" is enough to store all trampoline for + * "progs[]". + */ + void *image; + /* uvalue->data stores the kernel struct + * (e.g. tcp_congestion_ops) that is more useful + * to userspace than the kvalue. For example, + * the bpf_prog's id is stored instead of the kernel + * address of a func ptr. + */ + struct bpf_struct_ops_value *uvalue; + /* kvalue.data stores the actual kernel's struct + * (e.g. tcp_congestion_ops) that will be + * registered to the kernel subsystem. + */ + struct bpf_struct_ops_value kvalue; +}; + +#define VALUE_PREFIX "bpf_struct_ops_" +#define VALUE_PREFIX_LEN (sizeof(VALUE_PREFIX) - 1) + +/* bpf_struct_ops_##_name (e.g. bpf_struct_ops_tcp_congestion_ops) is + * the map's value exposed to the userspace and its btf-type-id is + * stored at the map->btf_vmlinux_value_type_id. + * + */ #define BPF_STRUCT_OPS_TYPE(_name) \ -extern struct bpf_struct_ops bpf_##_name; +extern struct bpf_struct_ops bpf_##_name; \ + \ +struct bpf_struct_ops_##_name { \ + BPF_STRUCT_OPS_COMMON_VALUE; \ + struct _name data ____cacheline_aligned_in_smp; \ +}; #include "bpf_struct_ops_types.h" #undef BPF_STRUCT_OPS_TYPE @@ -35,19 +94,50 @@ const struct bpf_verifier_ops bpf_struct_ops_verifier_ops = { const struct bpf_prog_ops bpf_struct_ops_prog_ops = { }; +static const struct btf_type *module_type; + void bpf_struct_ops_init(struct btf *btf) { + s32 type_id, value_id, module_id; const struct btf_member *member; struct bpf_struct_ops *st_ops; struct bpf_verifier_log log = {}; const struct btf_type *t; + char value_name[128]; const char *mname; - s32 type_id; u32 i, j; + /* Ensure BTF type is emitted for "struct bpf_struct_ops_##_name" */ +#define BPF_STRUCT_OPS_TYPE(_name) BTF_TYPE_EMIT(struct bpf_struct_ops_##_name); +#include "bpf_struct_ops_types.h" +#undef BPF_STRUCT_OPS_TYPE + + module_id = btf_find_by_name_kind(btf, "module", BTF_KIND_STRUCT); + if (module_id < 0) { + pr_warn("Cannot find struct module in btf_vmlinux\n"); + return; + } + module_type = btf_type_by_id(btf, module_id); + for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) { st_ops = bpf_struct_ops[i]; + if (strlen(st_ops->name) + VALUE_PREFIX_LEN >= + sizeof(value_name)) { + pr_warn("struct_ops name %s is too long\n", + st_ops->name); + continue; + } + sprintf(value_name, "%s%s", VALUE_PREFIX, st_ops->name); + + value_id = btf_find_by_name_kind(btf, value_name, + BTF_KIND_STRUCT); + if (value_id < 0) { + pr_warn("Cannot find struct %s in btf_vmlinux\n", + value_name); + continue; + } + type_id = btf_find_by_name_kind(btf, st_ops->name, BTF_KIND_STRUCT); if (type_id < 0) { @@ -98,6 +188,9 @@ void bpf_struct_ops_init(struct btf *btf) } else { st_ops->type_id = type_id; st_ops->type = t; + st_ops->value_id = value_id; + st_ops->value_type = btf_type_by_id(btf, + value_id); } } } @@ -105,6 +198,22 @@ void bpf_struct_ops_init(struct btf *btf) extern struct btf *btf_vmlinux; +static const struct bpf_struct_ops * +bpf_struct_ops_find_value(u32 value_id) +{ + unsigned int i; + + if (!value_id || !btf_vmlinux) + return NULL; + + for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) { + if (bpf_struct_ops[i]->value_id == value_id) + return bpf_struct_ops[i]; + } + + return NULL; +} + const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id) { unsigned int i; @@ -119,3 +228,401 @@ const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id) return NULL; } + +static int 
bpf_struct_ops_map_get_next_key(struct bpf_map *map, void *key, + void *next_key) +{ + if (key && *(u32 *)key == 0) + return -ENOENT; + + *(u32 *)next_key = 0; + return 0; +} + +int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key, + void *value) +{ + struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map; + struct bpf_struct_ops_value *uvalue, *kvalue; + enum bpf_struct_ops_state state; + + if (unlikely(*(u32 *)key != 0)) + return -ENOENT; + + kvalue = &st_map->kvalue; + /* Pair with smp_store_release() during map_update */ + state = smp_load_acquire(&kvalue->state); + if (state == BPF_STRUCT_OPS_STATE_INIT) { + memset(value, 0, map->value_size); + return 0; + } + + /* No lock is needed. state and refcnt do not need + * to be updated together under atomic context. + */ + uvalue = (struct bpf_struct_ops_value *)value; + memcpy(uvalue, st_map->uvalue, map->value_size); + uvalue->state = state; + refcount_set(&uvalue->refcnt, refcount_read(&kvalue->refcnt)); + + return 0; +} + +static void *bpf_struct_ops_map_lookup_elem(struct bpf_map *map, void *key) +{ + return ERR_PTR(-EINVAL); +} + +static void bpf_struct_ops_map_put_progs(struct bpf_struct_ops_map *st_map) +{ + const struct btf_type *t = st_map->st_ops->type; + u32 i; + + for (i = 0; i < btf_type_vlen(t); i++) { + if (st_map->progs[i]) { + bpf_prog_put(st_map->progs[i]); + st_map->progs[i] = NULL; + } + } +} + +static int check_zero_holes(const struct btf_type *t, void *data) +{ + const struct btf_member *member; + u32 i, moff, msize, prev_mend = 0; + const struct btf_type *mtype; + + for_each_member(i, t, member) { + moff = btf_member_bit_offset(t, member) / 8; + if (moff > prev_mend && + memchr_inv(data + prev_mend, 0, moff - prev_mend)) + return -EINVAL; + + mtype = btf_type_by_id(btf_vmlinux, member->type); + mtype = btf_resolve_size(btf_vmlinux, mtype, &msize, + NULL, NULL); + if (IS_ERR(mtype)) + return PTR_ERR(mtype); + prev_mend = moff + msize; + } + + if (t->size > prev_mend && + memchr_inv(data + prev_mend, 0, t->size - prev_mend)) + return -EINVAL; + + return 0; +} + +static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key, + void *value, u64 flags) +{ + struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map; + const struct bpf_struct_ops *st_ops = st_map->st_ops; + struct bpf_struct_ops_value *uvalue, *kvalue; + const struct btf_member *member; + const struct btf_type *t = st_ops->type; + void *udata, *kdata; + int prog_fd, err = 0; + void *image; + u32 i; + + if (flags) + return -EINVAL; + + if (*(u32 *)key != 0) + return -E2BIG; + + err = check_zero_holes(st_ops->value_type, value); + if (err) + return err; + + uvalue = (struct bpf_struct_ops_value *)value; + err = check_zero_holes(t, uvalue->data); + if (err) + return err; + + if (uvalue->state || refcount_read(&uvalue->refcnt)) + return -EINVAL; + + uvalue = (struct bpf_struct_ops_value *)st_map->uvalue; + kvalue = (struct bpf_struct_ops_value *)&st_map->kvalue; + + mutex_lock(&st_map->lock); + + if (kvalue->state != BPF_STRUCT_OPS_STATE_INIT) { + err = -EBUSY; + goto unlock; + } + + memcpy(uvalue, value, map->value_size); + + udata = &uvalue->data; + kdata = &kvalue->data; + image = st_map->image; + + for_each_member(i, t, member) { + const struct btf_type *mtype, *ptype; + struct bpf_prog *prog; + u32 moff; + + moff = btf_member_bit_offset(t, member) / 8; + ptype = btf_type_resolve_ptr(btf_vmlinux, member->type, NULL); + if (ptype == module_type) { + if (*(void **)(udata + moff)) + goto reset_unlock; + 
*(void **)(kdata + moff) = BPF_MODULE_OWNER; + continue; + } + + err = st_ops->init_member(t, member, kdata, udata); + if (err < 0) + goto reset_unlock; + + /* The ->init_member() has handled this member */ + if (err > 0) + continue; + + /* If st_ops->init_member does not handle it, + * we will only handle func ptrs and zero-ed members + * here. Reject everything else. + */ + + /* All non func ptr member must be 0 */ + if (!ptype || !btf_type_is_func_proto(ptype)) { + u32 msize; + + mtype = btf_type_by_id(btf_vmlinux, member->type); + mtype = btf_resolve_size(btf_vmlinux, mtype, &msize, + NULL, NULL); + if (IS_ERR(mtype)) { + err = PTR_ERR(mtype); + goto reset_unlock; + } + + if (memchr_inv(udata + moff, 0, msize)) { + err = -EINVAL; + goto reset_unlock; + } + + continue; + } + + prog_fd = (int)(*(unsigned long *)(udata + moff)); + /* Similar check as the attr->attach_prog_fd */ + if (!prog_fd) + continue; + + prog = bpf_prog_get(prog_fd); + if (IS_ERR(prog)) { + err = PTR_ERR(prog); + goto reset_unlock; + } + st_map->progs[i] = prog; + + if (prog->type != BPF_PROG_TYPE_STRUCT_OPS || + prog->aux->attach_btf_id != st_ops->type_id || + prog->expected_attach_type != i) { + err = -EINVAL; + goto reset_unlock; + } + + err = arch_prepare_bpf_trampoline(image, + st_map->image + PAGE_SIZE, + &st_ops->func_models[i], 0, + &prog, 1, NULL, 0, NULL); + if (err < 0) + goto reset_unlock; + + *(void **)(kdata + moff) = image; + image += err; + + /* put prog_id to udata */ + *(unsigned long *)(udata + moff) = prog->aux->id; + } + + refcount_set(&kvalue->refcnt, 1); + bpf_map_inc(map); + + set_memory_ro((long)st_map->image, 1); + set_memory_x((long)st_map->image, 1); + err = st_ops->reg(kdata); + if (likely(!err)) { + /* Pair with smp_load_acquire() during lookup_elem(). + * It ensures the above udata updates (e.g. prog->aux->id) + * can be seen once BPF_STRUCT_OPS_STATE_INUSE is set. + */ + smp_store_release(&kvalue->state, BPF_STRUCT_OPS_STATE_INUSE); + goto unlock; + } + + /* Error during st_ops->reg(). It is very unlikely since + * the above init_member() should have caught it earlier + * before reg(). The only possibility is if there was a race + * in registering the struct_ops (under the same name) to + * a sub-system through different struct_ops's maps. 
+ */ + set_memory_nx((long)st_map->image, 1); + set_memory_rw((long)st_map->image, 1); + bpf_map_put(map); + +reset_unlock: + bpf_struct_ops_map_put_progs(st_map); + memset(uvalue, 0, map->value_size); + memset(kvalue, 0, map->value_size); +unlock: + mutex_unlock(&st_map->lock); + return err; +} + +static int bpf_struct_ops_map_delete_elem(struct bpf_map *map, void *key) +{ + enum bpf_struct_ops_state prev_state; + struct bpf_struct_ops_map *st_map; + + st_map = (struct bpf_struct_ops_map *)map; + prev_state = cmpxchg(&st_map->kvalue.state, + BPF_STRUCT_OPS_STATE_INUSE, + BPF_STRUCT_OPS_STATE_TOBEFREE); + if (prev_state == BPF_STRUCT_OPS_STATE_INUSE) { + st_map->st_ops->unreg(&st_map->kvalue.data); + if (refcount_dec_and_test(&st_map->kvalue.refcnt)) + bpf_map_put(map); + } + + return 0; +} + +static void bpf_struct_ops_map_seq_show_elem(struct bpf_map *map, void *key, + struct seq_file *m) +{ + void *value; + + value = bpf_struct_ops_map_lookup_elem(map, key); + if (!value) + return; + + btf_type_seq_show(btf_vmlinux, map->btf_vmlinux_value_type_id, + value, m); + seq_puts(m, "\n"); +} + +static void bpf_struct_ops_map_free(struct bpf_map *map) +{ + struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map; + + if (st_map->progs) + bpf_struct_ops_map_put_progs(st_map); + bpf_map_area_free(st_map->progs); + bpf_jit_free_exec(st_map->image); + bpf_map_area_free(st_map->uvalue); + bpf_map_area_free(st_map); +} + +static int bpf_struct_ops_map_alloc_check(union bpf_attr *attr) +{ + if (attr->key_size != sizeof(unsigned int) || attr->max_entries != 1 || + attr->map_flags || !attr->btf_vmlinux_value_type_id) + return -EINVAL; + return 0; +} + +static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr) +{ + const struct bpf_struct_ops *st_ops; + size_t map_total_size, st_map_size; + struct bpf_struct_ops_map *st_map; + const struct btf_type *t, *vt; + struct bpf_map_memory mem; + struct bpf_map *map; + int err; + + if (!capable(CAP_SYS_ADMIN)) + return ERR_PTR(-EPERM); + + st_ops = bpf_struct_ops_find_value(attr->btf_vmlinux_value_type_id); + if (!st_ops) + return ERR_PTR(-ENOTSUPP); + + vt = st_ops->value_type; + if (attr->value_size != vt->size) + return ERR_PTR(-EINVAL); + + t = st_ops->type; + + st_map_size = sizeof(*st_map) + + /* kvalue stores the + * struct bpf_struct_ops_tcp_congestions_ops + */ + (vt->size - sizeof(struct bpf_struct_ops_value)); + map_total_size = st_map_size + + /* uvalue */ + sizeof(vt->size) + + /* struct bpf_progs **progs */ + btf_type_vlen(t) * sizeof(struct bpf_prog *); + err = bpf_map_charge_init(&mem, map_total_size); + if (err < 0) + return ERR_PTR(err); + + st_map = bpf_map_area_alloc(st_map_size, NUMA_NO_NODE); + if (!st_map) { + bpf_map_charge_finish(&mem); + return ERR_PTR(-ENOMEM); + } + st_map->st_ops = st_ops; + map = &st_map->map; + + st_map->uvalue = bpf_map_area_alloc(vt->size, NUMA_NO_NODE); + st_map->progs = + bpf_map_area_alloc(btf_type_vlen(t) * sizeof(struct bpf_prog *), + NUMA_NO_NODE); + st_map->image = bpf_jit_alloc_exec(PAGE_SIZE); + if (!st_map->uvalue || !st_map->progs || !st_map->image) { + bpf_struct_ops_map_free(map); + bpf_map_charge_finish(&mem); + return ERR_PTR(-ENOMEM); + } + + mutex_init(&st_map->lock); + set_vm_flush_reset_perms(st_map->image); + bpf_map_init_from_attr(map, attr); + bpf_map_charge_move(&map->memory, &mem); + + return map; +} + +const struct bpf_map_ops bpf_struct_ops_map_ops = { + .map_alloc_check = bpf_struct_ops_map_alloc_check, + .map_alloc = bpf_struct_ops_map_alloc, + .map_free = 
bpf_struct_ops_map_free, + .map_get_next_key = bpf_struct_ops_map_get_next_key, + .map_lookup_elem = bpf_struct_ops_map_lookup_elem, + .map_delete_elem = bpf_struct_ops_map_delete_elem, + .map_update_elem = bpf_struct_ops_map_update_elem, + .map_seq_show_elem = bpf_struct_ops_map_seq_show_elem, +}; + +/* "const void *" because some subsystem is + * passing a const (e.g. const struct tcp_congestion_ops *) + */ +bool bpf_struct_ops_get(const void *kdata) +{ + struct bpf_struct_ops_value *kvalue; + + kvalue = container_of(kdata, struct bpf_struct_ops_value, data); + + return refcount_inc_not_zero(&kvalue->refcnt); +} + +void bpf_struct_ops_put(const void *kdata) +{ + struct bpf_struct_ops_value *kvalue; + + kvalue = container_of(kdata, struct bpf_struct_ops_value, data); + if (refcount_dec_and_test(&kvalue->refcnt)) { + struct bpf_struct_ops_map *st_map; + + st_map = container_of(kvalue, struct bpf_struct_ops_map, + kvalue); + bpf_map_put(&st_map->map); + } +} diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 12af4a1bb1a4..81d9cf75cacd 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -500,13 +500,6 @@ static const char *btf_int_encoding_str(u8 encoding) return "UNKN"; } -static u32 btf_member_bit_offset(const struct btf_type *struct_type, - const struct btf_member *member) -{ - return btf_type_kflag(struct_type) ? BTF_MEMBER_BIT_OFFSET(member->offset) - : member->offset; -} - static u32 btf_type_int(const struct btf_type *t) { return *(u32 *)(t + 1); @@ -1089,7 +1082,7 @@ static const struct resolve_vertex *env_stack_peak(struct btf_verifier_env *env) * *elem_type: same as return type ("struct X") * *total_nelems: 1 */ -static const struct btf_type * +const struct btf_type * btf_resolve_size(const struct btf *btf, const struct btf_type *type, u32 *type_size, const struct btf_type **elem_type, u32 *total_nelems) @@ -1143,8 +1136,10 @@ resolved: return ERR_PTR(-EINVAL); *type_size = nelems * size; - *total_nelems = nelems; - *elem_type = type; + if (total_nelems) + *total_nelems = nelems; + if (elem_type) + *elem_type = type; return array_type ? 
: type; } @@ -1858,7 +1853,10 @@ static void btf_modifier_seq_show(const struct btf *btf, u32 type_id, void *data, u8 bits_offset, struct seq_file *m) { - t = btf_type_id_resolve(btf, &type_id); + if (btf->resolved_ids) + t = btf_type_id_resolve(btf, &type_id); + else + t = btf_type_skip_modifiers(btf, type_id, NULL); btf_type_ops(t)->seq_show(btf, t, type_id, data, bits_offset, m); } diff --git a/kernel/bpf/map_in_map.c b/kernel/bpf/map_in_map.c index 5e9366b33f0f..b3c48d1533cb 100644 --- a/kernel/bpf/map_in_map.c +++ b/kernel/bpf/map_in_map.c @@ -22,7 +22,8 @@ struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd) */ if (inner_map->map_type == BPF_MAP_TYPE_PROG_ARRAY || inner_map->map_type == BPF_MAP_TYPE_CGROUP_STORAGE || - inner_map->map_type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE) { + inner_map->map_type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE || + inner_map->map_type == BPF_MAP_TYPE_STRUCT_OPS) { fdput(f); return ERR_PTR(-ENOTSUPP); } diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 03a02ef4c496..f9db72a96ec0 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -628,7 +628,7 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf, return ret; } -#define BPF_MAP_CREATE_LAST_FIELD btf_value_type_id +#define BPF_MAP_CREATE_LAST_FIELD btf_vmlinux_value_type_id /* called via syscall */ static int map_create(union bpf_attr *attr) { @@ -642,6 +642,14 @@ static int map_create(union bpf_attr *attr) if (err) return -EINVAL; + if (attr->btf_vmlinux_value_type_id) { + if (attr->map_type != BPF_MAP_TYPE_STRUCT_OPS || + attr->btf_key_type_id || attr->btf_value_type_id) + return -EINVAL; + } else if (attr->btf_key_type_id && !attr->btf_value_type_id) { + return -EINVAL; + } + f_flags = bpf_get_file_flag(attr->map_flags); if (f_flags < 0) return f_flags; @@ -664,32 +672,35 @@ static int map_create(union bpf_attr *attr) atomic64_set(&map->usercnt, 1); mutex_init(&map->freeze_mutex); - if (attr->btf_key_type_id || attr->btf_value_type_id) { + map->spin_lock_off = -EINVAL; + if (attr->btf_key_type_id || attr->btf_value_type_id || + /* Even the map's value is a kernel's struct, + * the bpf_prog.o must have BTF to begin with + * to figure out the corresponding kernel's + * counter part. Thus, attr->btf_fd has + * to be valid also. 
+ */ + attr->btf_vmlinux_value_type_id) { struct btf *btf; - if (!attr->btf_value_type_id) { - err = -EINVAL; - goto free_map; - } - btf = btf_get_by_fd(attr->btf_fd); if (IS_ERR(btf)) { err = PTR_ERR(btf); goto free_map; } + map->btf = btf; - err = map_check_btf(map, btf, attr->btf_key_type_id, - attr->btf_value_type_id); - if (err) { - btf_put(btf); - goto free_map; + if (attr->btf_value_type_id) { + err = map_check_btf(map, btf, attr->btf_key_type_id, + attr->btf_value_type_id); + if (err) + goto free_map; } - map->btf = btf; map->btf_key_type_id = attr->btf_key_type_id; map->btf_value_type_id = attr->btf_value_type_id; - } else { - map->spin_lock_off = -EINVAL; + map->btf_vmlinux_value_type_id = + attr->btf_vmlinux_value_type_id; } err = security_bpf_map_alloc(map); @@ -888,6 +899,9 @@ static int map_lookup_elem(union bpf_attr *attr) } else if (map->map_type == BPF_MAP_TYPE_QUEUE || map->map_type == BPF_MAP_TYPE_STACK) { err = map->ops->map_peek_elem(map, value); + } else if (map->map_type == BPF_MAP_TYPE_STRUCT_OPS) { + /* struct_ops map requires directly updating "value" */ + err = bpf_struct_ops_map_sys_lookup_elem(map, key, value); } else { rcu_read_lock(); if (map->ops->map_lookup_elem_sys_only) @@ -1003,7 +1017,8 @@ static int map_update_elem(union bpf_attr *attr) goto out; } else if (map->map_type == BPF_MAP_TYPE_CPUMAP || map->map_type == BPF_MAP_TYPE_SOCKHASH || - map->map_type == BPF_MAP_TYPE_SOCKMAP) { + map->map_type == BPF_MAP_TYPE_SOCKMAP || + map->map_type == BPF_MAP_TYPE_STRUCT_OPS) { err = map->ops->map_update_elem(map, key, value, attr->flags); goto out; } else if (IS_FD_PROG_ARRAY(map)) { @@ -1092,7 +1107,9 @@ static int map_delete_elem(union bpf_attr *attr) if (bpf_map_is_dev_bound(map)) { err = bpf_map_offload_delete_elem(map, key); goto out; - } else if (IS_FD_PROG_ARRAY(map)) { + } else if (IS_FD_PROG_ARRAY(map) || + map->map_type == BPF_MAP_TYPE_STRUCT_OPS) { + /* These maps require sleepable context */ err = map->ops->map_delete_elem(map, key); goto out; } @@ -2822,6 +2839,7 @@ static int bpf_map_get_info_by_fd(struct bpf_map *map, info.btf_key_type_id = map->btf_key_type_id; info.btf_value_type_id = map->btf_value_type_id; } + info.btf_vmlinux_value_type_id = map->btf_vmlinux_value_type_id; if (bpf_map_is_dev_bound(map)) { err = bpf_map_offload_info_fill(&info, map); diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index 505f4e4b31d2..79a04417050d 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -160,11 +160,12 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr) if (fexit_cnt) flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME; - err = arch_prepare_bpf_trampoline(new_image, &tr->func.model, flags, + err = arch_prepare_bpf_trampoline(new_image, new_image + PAGE_SIZE / 2, + &tr->func.model, flags, fentry, fentry_cnt, fexit, fexit_cnt, tr->func.addr); - if (err) + if (err < 0) goto out; if (tr->selector) @@ -296,7 +297,8 @@ void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start) } int __weak -arch_prepare_bpf_trampoline(void *image, struct btf_func_model *m, u32 flags, +arch_prepare_bpf_trampoline(void *image, void *image_end, + const struct btf_func_model *m, u32 flags, struct bpf_prog **fentry_progs, int fentry_cnt, struct bpf_prog **fexit_progs, int fexit_cnt, void *orig_call) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 586ed3c94b80..f5af759a8a5f 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -8155,6 +8155,11 @@ static int check_map_prog_compatibility(struct 
bpf_verifier_env *env, return -EINVAL; } + if (map->map_type == BPF_MAP_TYPE_STRUCT_OPS) { + verbose(env, "bpf_struct_ops map cannot be used in prog\n"); + return -EINVAL; + } + return 0; } -- cgit v1.2.3-71-gd317 From 0baf26b0fcd74bbfcef53c5d5e8bad2b99c8d0d2 Mon Sep 17 00:00:00 2001 From: Martin KaFai Lau Date: Wed, 8 Jan 2020 16:35:08 -0800 Subject: bpf: tcp: Support tcp_congestion_ops in bpf This patch makes "struct tcp_congestion_ops" the first user of BPF STRUCT_OPS. It allows implementing a tcp_congestion_ops in bpf. The BPF-implemented tcp_congestion_ops can be used like a regular kernel tcp-cc through sysctl and setsockopt. e.g. [root@arch-fb-vm1 bpf]# sysctl -a | egrep congestion net.ipv4.tcp_allowed_congestion_control = reno cubic bpf_cubic net.ipv4.tcp_available_congestion_control = reno bic cubic bpf_cubic net.ipv4.tcp_congestion_control = bpf_cubic There have been attempts to move TCP CC to user space (e.g. CCP in TCP). The common arguments are a faster turnaround, getting away from long-tail kernel versions in production, etc., which are legitimate points. BPF has been a continuous effort to join the upsides of both kernel and userspace (e.g. XDP gains the performance advantage without bypassing the kernel). The recent BPF advancements (in particular the BTF-aware verifier, BPF trampoline, BPF CO-RE...) have made implementing kernel struct ops (e.g. tcp cc) possible in BPF. This allows a faster turnaround for testing algorithms in production while leveraging the existing (and continually growing) BPF features/framework instead of building one specifically for userspace TCP CC. This patch allows write access to a few fields in tcp_sock (in bpf_tcp_ca_btf_struct_access()). The optional "get_info" is unsupported for now. It can be added later. One possible way is to output the info with a btf-id to describe the content.
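To give a feel for the programming model this enables, here is a rough sketch of a minimal reno-like congestion control on the BPF side. It is not part of this patch; it assumes libbpf's struct_ops support and a bpf_tcp_helpers.h-style header (as later used by the kernel selftests) providing mirrored struct sock/tcp_sock definitions and tcp_sk(), and all names are illustrative:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_tcp_helpers.h"	/* assumed: tcp_sk(), mirrored tcp structs */

char _license[] SEC("license") = "GPL";

/* ssthresh/cong_avoid/undo_cwnd are compulsory callbacks; the members
 * listed in optional_ops[] in the diff below may be left NULL
 */
SEC("struct_ops/sketch_ssthresh")
__u32 BPF_PROG(sketch_ssthresh, struct sock *sk)
{
	const struct tcp_sock *tp = tcp_sk(sk);

	/* halve cwnd on loss, as tcp_reno_ssthresh() does */
	return tp->snd_cwnd > 2 ? tp->snd_cwnd >> 1 : 2;
}

SEC("struct_ops/sketch_cong_avoid")
void BPF_PROG(sketch_cong_avoid, struct sock *sk, __u32 ack, __u32 acked)
{
	struct tcp_sock *tp = tcp_sk(sk);

	/* these stores are permitted by bpf_tcp_ca_btf_struct_access() */
	if (tp->snd_cwnd < tp->snd_ssthresh) {
		tp->snd_cwnd += acked;			/* slow start */
	} else if (++tp->snd_cwnd_cnt >= tp->snd_cwnd) {
		tp->snd_cwnd_cnt = 0;			/* +1 MSS per RTT */
		tp->snd_cwnd++;
	}
}

SEC("struct_ops/sketch_undo_cwnd")
__u32 BPF_PROG(sketch_undo_cwnd, struct sock *sk)
{
	return tcp_sk(sk)->snd_cwnd;
}

SEC(".struct_ops")
struct tcp_congestion_ops sketch = {
	.ssthresh	= (void *)sketch_ssthresh,
	.cong_avoid	= (void *)sketch_cong_avoid,
	.undo_cwnd	= (void *)sketch_undo_cwnd,
	.name		= "bpf_sketch",
};

Once the struct_ops map is loaded and registered, "bpf_sketch" shows up in net.ipv4.tcp_available_congestion_control like any module-provided CC.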
Signed-off-by: Martin KaFai Lau Signed-off-by: Alexei Starovoitov Acked-by: Andrii Nakryiko Acked-by: Yonghong Song Link: https://lore.kernel.org/bpf/20200109003508.3856115-1-kafai@fb.com --- include/linux/filter.h | 2 + include/net/tcp.h | 2 + kernel/bpf/bpf_struct_ops_types.h | 7 +- net/core/filter.c | 2 +- net/ipv4/Makefile | 4 + net/ipv4/bpf_tcp_ca.c | 230 ++++++++++++++++++++++++++++++++++++++ net/ipv4/tcp_cong.c | 16 +-- net/ipv4/tcp_ipv4.c | 6 +- net/ipv4/tcp_minisocks.c | 4 +- net/ipv4/tcp_output.c | 4 +- 10 files changed, 261 insertions(+), 16 deletions(-) create mode 100644 net/ipv4/bpf_tcp_ca.c (limited to 'include') diff --git a/include/linux/filter.h b/include/linux/filter.h index 70e6dd960bca..a366a0b64a57 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -843,6 +843,8 @@ int bpf_prog_create(struct bpf_prog **pfp, struct sock_fprog_kern *fprog); int bpf_prog_create_from_user(struct bpf_prog **pfp, struct sock_fprog *fprog, bpf_aux_classic_check_t trans, bool save_orig); void bpf_prog_destroy(struct bpf_prog *fp); +const struct bpf_func_proto * +bpf_base_func_proto(enum bpf_func_id func_id); int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk); int sk_attach_bpf(u32 ufd, struct sock *sk); diff --git a/include/net/tcp.h b/include/net/tcp.h index 7df37e2fddca..9dd975be7fdf 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1007,6 +1007,7 @@ enum tcp_ca_ack_event_flags { #define TCP_CONG_NON_RESTRICTED 0x1 /* Requires ECN/ECT set on all packets */ #define TCP_CONG_NEEDS_ECN 0x2 +#define TCP_CONG_MASK (TCP_CONG_NON_RESTRICTED | TCP_CONG_NEEDS_ECN) union tcp_cc_info; @@ -1101,6 +1102,7 @@ u32 tcp_reno_undo_cwnd(struct sock *sk); void tcp_reno_cong_avoid(struct sock *sk, u32 ack, u32 acked); extern struct tcp_congestion_ops tcp_reno; +struct tcp_congestion_ops *tcp_ca_find(const char *name); struct tcp_congestion_ops *tcp_ca_find_key(u32 key); u32 tcp_ca_get_key_by_name(struct net *net, const char *name, bool *ecn_ca); #ifdef CONFIG_INET diff --git a/kernel/bpf/bpf_struct_ops_types.h b/kernel/bpf/bpf_struct_ops_types.h index 7bb13ff49ec2..066d83ea1c99 100644 --- a/kernel/bpf/bpf_struct_ops_types.h +++ b/kernel/bpf/bpf_struct_ops_types.h @@ -1,4 +1,9 @@ /* SPDX-License-Identifier: GPL-2.0 */ /* internal file - do not include directly */ -/* To be filled in a later patch */ +#ifdef CONFIG_BPF_JIT +#ifdef CONFIG_INET +#include +BPF_STRUCT_OPS_TYPE(tcp_congestion_ops) +#endif +#endif diff --git a/net/core/filter.c b/net/core/filter.c index 42fd17c48c5f..a702761ef369 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -5935,7 +5935,7 @@ bool bpf_helper_changes_pkt_data(void *func) return false; } -static const struct bpf_func_proto * +const struct bpf_func_proto * bpf_base_func_proto(enum bpf_func_id func_id) { switch (func_id) { diff --git a/net/ipv4/Makefile b/net/ipv4/Makefile index d57ecfaf89d4..9d97bace13c8 100644 --- a/net/ipv4/Makefile +++ b/net/ipv4/Makefile @@ -65,3 +65,7 @@ obj-$(CONFIG_NETLABEL) += cipso_ipv4.o obj-$(CONFIG_XFRM) += xfrm4_policy.o xfrm4_state.o xfrm4_input.o \ xfrm4_output.o xfrm4_protocol.o + +ifeq ($(CONFIG_BPF_JIT),y) +obj-$(CONFIG_BPF_SYSCALL) += bpf_tcp_ca.o +endif diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c new file mode 100644 index 000000000000..9c7745b958d7 --- /dev/null +++ b/net/ipv4/bpf_tcp_ca.c @@ -0,0 +1,230 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2019 Facebook */ + +#include +#include +#include +#include +#include +#include + +static u32 optional_ops[] = { + 
offsetof(struct tcp_congestion_ops, init), + offsetof(struct tcp_congestion_ops, release), + offsetof(struct tcp_congestion_ops, set_state), + offsetof(struct tcp_congestion_ops, cwnd_event), + offsetof(struct tcp_congestion_ops, in_ack_event), + offsetof(struct tcp_congestion_ops, pkts_acked), + offsetof(struct tcp_congestion_ops, min_tso_segs), + offsetof(struct tcp_congestion_ops, sndbuf_expand), + offsetof(struct tcp_congestion_ops, cong_control), +}; + +static u32 unsupported_ops[] = { + offsetof(struct tcp_congestion_ops, get_info), +}; + +static const struct btf_type *tcp_sock_type; +static u32 tcp_sock_id, sock_id; + +static int bpf_tcp_ca_init(struct btf *btf) +{ + s32 type_id; + + type_id = btf_find_by_name_kind(btf, "sock", BTF_KIND_STRUCT); + if (type_id < 0) + return -EINVAL; + sock_id = type_id; + + type_id = btf_find_by_name_kind(btf, "tcp_sock", BTF_KIND_STRUCT); + if (type_id < 0) + return -EINVAL; + tcp_sock_id = type_id; + tcp_sock_type = btf_type_by_id(btf, tcp_sock_id); + + return 0; +} + +static bool is_optional(u32 member_offset) +{ + unsigned int i; + + for (i = 0; i < ARRAY_SIZE(optional_ops); i++) { + if (member_offset == optional_ops[i]) + return true; + } + + return false; +} + +static bool is_unsupported(u32 member_offset) +{ + unsigned int i; + + for (i = 0; i < ARRAY_SIZE(unsupported_ops); i++) { + if (member_offset == unsupported_ops[i]) + return true; + } + + return false; +} + +extern struct btf *btf_vmlinux; + +static bool bpf_tcp_ca_is_valid_access(int off, int size, + enum bpf_access_type type, + const struct bpf_prog *prog, + struct bpf_insn_access_aux *info) +{ + if (off < 0 || off >= sizeof(__u64) * MAX_BPF_FUNC_ARGS) + return false; + if (type != BPF_READ) + return false; + if (off % size != 0) + return false; + + if (!btf_ctx_access(off, size, type, prog, info)) + return false; + + if (info->reg_type == PTR_TO_BTF_ID && info->btf_id == sock_id) + /* promote it to tcp_sock */ + info->btf_id = tcp_sock_id; + + return true; +} + +static int bpf_tcp_ca_btf_struct_access(struct bpf_verifier_log *log, + const struct btf_type *t, int off, + int size, enum bpf_access_type atype, + u32 *next_btf_id) +{ + size_t end; + + if (atype == BPF_READ) + return btf_struct_access(log, t, off, size, atype, next_btf_id); + + if (t != tcp_sock_type) { + bpf_log(log, "only read is supported\n"); + return -EACCES; + } + + switch (off) { + case bpf_ctx_range(struct inet_connection_sock, icsk_ca_priv): + end = offsetofend(struct inet_connection_sock, icsk_ca_priv); + break; + case offsetof(struct inet_connection_sock, icsk_ack.pending): + end = offsetofend(struct inet_connection_sock, + icsk_ack.pending); + break; + case offsetof(struct tcp_sock, snd_cwnd): + end = offsetofend(struct tcp_sock, snd_cwnd); + break; + case offsetof(struct tcp_sock, snd_cwnd_cnt): + end = offsetofend(struct tcp_sock, snd_cwnd_cnt); + break; + case offsetof(struct tcp_sock, snd_ssthresh): + end = offsetofend(struct tcp_sock, snd_ssthresh); + break; + case offsetof(struct tcp_sock, ecn_flags): + end = offsetofend(struct tcp_sock, ecn_flags); + break; + default: + bpf_log(log, "no write support to tcp_sock at off %d\n", off); + return -EACCES; + } + + if (off + size > end) { + bpf_log(log, + "write access at off %d with size %d beyond the member of tcp_sock ended at %zu\n", + off, size, end); + return -EACCES; + } + + return NOT_INIT; +} + +static const struct bpf_func_proto * +bpf_tcp_ca_get_func_proto(enum bpf_func_id func_id, + const struct bpf_prog *prog) +{ + return 
bpf_base_func_proto(func_id); +} + +static const struct bpf_verifier_ops bpf_tcp_ca_verifier_ops = { + .get_func_proto = bpf_tcp_ca_get_func_proto, + .is_valid_access = bpf_tcp_ca_is_valid_access, + .btf_struct_access = bpf_tcp_ca_btf_struct_access, +}; + +static int bpf_tcp_ca_init_member(const struct btf_type *t, + const struct btf_member *member, + void *kdata, const void *udata) +{ + const struct tcp_congestion_ops *utcp_ca; + struct tcp_congestion_ops *tcp_ca; + size_t tcp_ca_name_len; + int prog_fd; + u32 moff; + + utcp_ca = (const struct tcp_congestion_ops *)udata; + tcp_ca = (struct tcp_congestion_ops *)kdata; + + moff = btf_member_bit_offset(t, member) / 8; + switch (moff) { + case offsetof(struct tcp_congestion_ops, flags): + if (utcp_ca->flags & ~TCP_CONG_MASK) + return -EINVAL; + tcp_ca->flags = utcp_ca->flags; + return 1; + case offsetof(struct tcp_congestion_ops, name): + tcp_ca_name_len = strnlen(utcp_ca->name, sizeof(utcp_ca->name)); + if (!tcp_ca_name_len || + tcp_ca_name_len == sizeof(utcp_ca->name)) + return -EINVAL; + if (tcp_ca_find(utcp_ca->name)) + return -EEXIST; + memcpy(tcp_ca->name, utcp_ca->name, sizeof(tcp_ca->name)); + return 1; + } + + if (!btf_type_resolve_func_ptr(btf_vmlinux, member->type, NULL)) + return 0; + + /* Ensure bpf_prog is provided for compulsory func ptr */ + prog_fd = (int)(*(unsigned long *)(udata + moff)); + if (!prog_fd && !is_optional(moff) && !is_unsupported(moff)) + return -EINVAL; + + return 0; +} + +static int bpf_tcp_ca_check_member(const struct btf_type *t, + const struct btf_member *member) +{ + if (is_unsupported(btf_member_bit_offset(t, member) / 8)) + return -ENOTSUPP; + return 0; +} + +static int bpf_tcp_ca_reg(void *kdata) +{ + return tcp_register_congestion_control(kdata); +} + +static void bpf_tcp_ca_unreg(void *kdata) +{ + tcp_unregister_congestion_control(kdata); +} + +/* Avoid sparse warning. It is only used in bpf_struct_ops.c. */ +extern struct bpf_struct_ops bpf_tcp_congestion_ops; + +struct bpf_struct_ops bpf_tcp_congestion_ops = { + .verifier_ops = &bpf_tcp_ca_verifier_ops, + .reg = bpf_tcp_ca_reg, + .unreg = bpf_tcp_ca_unreg, + .check_member = bpf_tcp_ca_check_member, + .init_member = bpf_tcp_ca_init_member, + .init = bpf_tcp_ca_init, + .name = "tcp_congestion_ops", +}; diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c index 3737ec096650..3172e31987be 100644 --- a/net/ipv4/tcp_cong.c +++ b/net/ipv4/tcp_cong.c @@ -21,7 +21,7 @@ static DEFINE_SPINLOCK(tcp_cong_list_lock); static LIST_HEAD(tcp_cong_list); /* Simple linear search, don't expect many entries! 
*/ -static struct tcp_congestion_ops *tcp_ca_find(const char *name) +struct tcp_congestion_ops *tcp_ca_find(const char *name) { struct tcp_congestion_ops *e; @@ -162,7 +162,7 @@ void tcp_assign_congestion_control(struct sock *sk) rcu_read_lock(); ca = rcu_dereference(net->ipv4.tcp_congestion_control); - if (unlikely(!try_module_get(ca->owner))) + if (unlikely(!bpf_try_module_get(ca, ca->owner))) ca = &tcp_reno; icsk->icsk_ca_ops = ca; rcu_read_unlock(); @@ -208,7 +208,7 @@ void tcp_cleanup_congestion_control(struct sock *sk) if (icsk->icsk_ca_ops->release) icsk->icsk_ca_ops->release(sk); - module_put(icsk->icsk_ca_ops->owner); + bpf_module_put(icsk->icsk_ca_ops, icsk->icsk_ca_ops->owner); } /* Used by sysctl to change default congestion control */ @@ -222,12 +222,12 @@ int tcp_set_default_congestion_control(struct net *net, const char *name) ca = tcp_ca_find_autoload(net, name); if (!ca) { ret = -ENOENT; - } else if (!try_module_get(ca->owner)) { + } else if (!bpf_try_module_get(ca, ca->owner)) { ret = -EBUSY; } else { prev = xchg(&net->ipv4.tcp_congestion_control, ca); if (prev) - module_put(prev->owner); + bpf_module_put(prev, prev->owner); ca->flags |= TCP_CONG_NON_RESTRICTED; ret = 0; @@ -366,19 +366,19 @@ int tcp_set_congestion_control(struct sock *sk, const char *name, bool load, } else if (!load) { const struct tcp_congestion_ops *old_ca = icsk->icsk_ca_ops; - if (try_module_get(ca->owner)) { + if (bpf_try_module_get(ca, ca->owner)) { if (reinit) { tcp_reinit_congestion_control(sk, ca); } else { icsk->icsk_ca_ops = ca; - module_put(old_ca->owner); + bpf_module_put(old_ca, old_ca->owner); } } else { err = -EBUSY; } } else if (!((ca->flags & TCP_CONG_NON_RESTRICTED) || cap_net_admin)) { err = -EPERM; - } else if (!try_module_get(ca->owner)) { + } else if (!bpf_try_module_get(ca, ca->owner)) { err = -EBUSY; } else { tcp_reinit_congestion_control(sk, ca); diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 4adac9c75343..317ccca548a2 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -2678,7 +2678,8 @@ static void __net_exit tcp_sk_exit(struct net *net) int cpu; if (net->ipv4.tcp_congestion_control) - module_put(net->ipv4.tcp_congestion_control->owner); + bpf_module_put(net->ipv4.tcp_congestion_control, + net->ipv4.tcp_congestion_control->owner); for_each_possible_cpu(cpu) inet_ctl_sock_destroy(*per_cpu_ptr(net->ipv4.tcp_sk, cpu)); @@ -2785,7 +2786,8 @@ static int __net_init tcp_sk_init(struct net *net) /* Reno is always built in */ if (!net_eq(net, &init_net) && - try_module_get(init_net.ipv4.tcp_congestion_control->owner)) + bpf_try_module_get(init_net.ipv4.tcp_congestion_control, + init_net.ipv4.tcp_congestion_control->owner)) net->ipv4.tcp_congestion_control = init_net.ipv4.tcp_congestion_control; else net->ipv4.tcp_congestion_control = &tcp_reno; diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c index c802bc80c400..ad3b56d9fa71 100644 --- a/net/ipv4/tcp_minisocks.c +++ b/net/ipv4/tcp_minisocks.c @@ -414,7 +414,7 @@ void tcp_ca_openreq_child(struct sock *sk, const struct dst_entry *dst) rcu_read_lock(); ca = tcp_ca_find_key(ca_key); - if (likely(ca && try_module_get(ca->owner))) { + if (likely(ca && bpf_try_module_get(ca, ca->owner))) { icsk->icsk_ca_dst_locked = tcp_ca_dst_locked(dst); icsk->icsk_ca_ops = ca; ca_got_dst = true; @@ -425,7 +425,7 @@ void tcp_ca_openreq_child(struct sock *sk, const struct dst_entry *dst) /* If no valid choice made yet, assign current system default ca. 
*/ if (!ca_got_dst && (!icsk->icsk_ca_setsockopt || - !try_module_get(icsk->icsk_ca_ops->owner))) + !bpf_try_module_get(icsk->icsk_ca_ops, icsk->icsk_ca_ops->owner))) tcp_assign_congestion_control(sk); tcp_set_ca_state(sk, TCP_CA_Open); diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 58c92a7d671c..377cfab422df 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -3368,8 +3368,8 @@ static void tcp_ca_dst_init(struct sock *sk, const struct dst_entry *dst) rcu_read_lock(); ca = tcp_ca_find_key(ca_key); - if (likely(ca && try_module_get(ca->owner))) { - module_put(icsk->icsk_ca_ops->owner); + if (likely(ca && bpf_try_module_get(ca, ca->owner))) { + bpf_module_put(icsk->icsk_ca_ops, icsk->icsk_ca_ops->owner); icsk->icsk_ca_dst_locked = tcp_ca_dst_locked(dst); icsk->icsk_ca_ops = ca; } -- cgit v1.2.3-71-gd317 From 206057fe020ac5c037d5e2dd6562a9bd216ec765 Mon Sep 17 00:00:00 2001 From: Martin KaFai Lau Date: Wed, 8 Jan 2020 16:45:51 -0800 Subject: bpf: Add BPF_FUNC_tcp_send_ack helper Add a helper to send out a tcp-ack. It will be used in the later bpf_dctcp implementation, which needs to send out an ack when the CE state changes.
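For illustration, a dctcp-style user of this helper reacts to a CE state flip roughly like this (a hedged sketch built on the struct_ops program type from the previous patch; struct sketch_ca and the call site are illustrative):

/* pre-vmlinux.h style helper declaration; BPF_FUNC_tcp_send_ack is the
 * uapi id added by this patch
 */
static int (*bpf_tcp_send_ack)(void *tp, __u32 rcv_nxt) =
	(void *)BPF_FUNC_tcp_send_ack;

struct sketch_ca {
	__u32 prior_rcv_nxt;
	__u32 ce_state;
};

/* ack the data received so far the moment the CE state flips, so the
 * sender sees the ECN signal without delayed-ack latency
 */
static void ce_state_flip(struct tcp_sock *tp, struct sketch_ca *ca,
			  __u32 new_ce_state)
{
	if (ca->ce_state != new_ce_state &&
	    ca->prior_rcv_nxt != tp->rcv_nxt)
		bpf_tcp_send_ack(tp, ca->prior_rcv_nxt);

	ca->prior_rcv_nxt = tp->rcv_nxt;
	ca->ce_state = new_ce_state;
}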
Signed-off-by: Martin KaFai Lau Signed-off-by: Alexei Starovoitov Acked-by: Yonghong Song Link: https://lore.kernel.org/bpf/20200109004551.3900448-1-kafai@fb.com --- include/uapi/linux/bpf.h | 11 ++++++++++- net/ipv4/bpf_tcp_ca.c | 24 +++++++++++++++++++++++- 2 files changed, 33 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 38059880963e..2d6a2e572f56 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -2837,6 +2837,14 @@ union bpf_attr { * Return * On success, the strictly positive length of the string, including * the trailing NUL character. On error, a negative value. + * + * int bpf_tcp_send_ack(void *tp, u32 rcv_nxt) + * Description + * Send out a tcp-ack. *tp* is the in-kernel struct tcp_sock. + * *rcv_nxt* is the ack_seq to be sent out. + * Return + * 0 on success, or a negative error in case of failure. + * */ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -2954,7 +2962,8 @@ union bpf_attr { FN(probe_read_user), \ FN(probe_read_kernel), \ FN(probe_read_user_str), \ - FN(probe_read_kernel_str), + FN(probe_read_kernel_str), \ + FN(tcp_send_ack), /* integer value in 'imm' field of BPF_CALL instruction selects which helper * function eBPF program intends to call diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c index 9c7745b958d7..574972bc7299 100644 --- a/net/ipv4/bpf_tcp_ca.c +++ b/net/ipv4/bpf_tcp_ca.c @@ -143,11 +143,33 @@ static int bpf_tcp_ca_btf_struct_access(struct bpf_verifier_log *log, return NOT_INIT; } +BPF_CALL_2(bpf_tcp_send_ack, struct tcp_sock *, tp, u32, rcv_nxt) +{ + /* bpf_tcp_ca prog cannot have NULL tp */ + __tcp_send_ack((struct sock *)tp, rcv_nxt); + return 0; +} + +static const struct bpf_func_proto bpf_tcp_send_ack_proto = { + .func = bpf_tcp_send_ack, + .gpl_only = false, + /* In case we want to report error later */ + .ret_type = RET_INTEGER, + .arg1_type = ARG_PTR_TO_BTF_ID, + .arg2_type = ARG_ANYTHING, + .btf_id = &tcp_sock_id, +}; + static const struct bpf_func_proto * bpf_tcp_ca_get_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) { - return bpf_base_func_proto(func_id); + switch (func_id) { + case BPF_FUNC_tcp_send_ack: + return &bpf_tcp_send_ack_proto; + default: + return bpf_base_func_proto(func_id); + } } static const struct bpf_verifier_ops bpf_tcp_ca_verifier_ops = { -- cgit v1.2.3-71-gd317 From f5bfcd953d811dbb8913de36b96b38da6bb62135 Mon Sep 17 00:00:00 2001 From: Andrey Ignatov Date: Tue, 7 Jan 2020 17:40:06 -0800 Subject: bpf: Document BPF_F_QUERY_EFFECTIVE flag Document the BPF_F_QUERY_EFFECTIVE flag, mostly to clarify how it affects attach_flags, which may not be obvious and may lead to confusion. Specifically, attach_flags is returned only for target_fd, but if programs are inherited from an ancestor cgroup then the returned attach_flags for the current cgroup may be confusing. For example, two effective programs of the same attach_type can be returned but w/o BPF_F_ALLOW_MULTI in attach_flags. Simple repro: # bpftool c s /sys/fs/cgroup/path/to/task ID AttachType AttachFlags Name # bpftool c s /sys/fs/cgroup/path/to/task effective ID AttachType AttachFlags Name 95043 ingress tw_ipt_ingress 95048 ingress tw_ingress
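The same query can be issued programmatically; a hedged sketch using libbpf's bpf_prog_query() wrapper (error handling elided, names illustrative):

#include <bpf/bpf.h>	/* libbpf */

__u32 prog_ids[64], prog_cnt = 64, attach_flags = 0;

/* effective set = directly attached + inherited from ancestor cgroups */
int err = bpf_prog_query(cgroup_fd, BPF_CGROUP_INET_INGRESS,
			 BPF_F_QUERY_EFFECTIVE, &attach_flags,
			 prog_ids, &prog_cnt);

/* per the comment added below: attach_flags is only meaningful for
 * programs attached directly to cgroup_fd, not for inherited ones
 */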
Signed-off-by: Andrey Ignatov Signed-off-by: Alexei Starovoitov Acked-by: Song Liu Link: https://lore.kernel.org/bpf/20200108014006.938363-1-rdna@fb.com --- include/uapi/linux/bpf.h | 7 ++++++- tools/include/uapi/linux/bpf.h | 7 ++++++- 2 files changed, 12 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 2d6a2e572f56..52966e758fe5 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -359,7 +359,12 @@ enum bpf_attach_type { /* Enable memory-mapping BPF map */ #define BPF_F_MMAPABLE (1U << 10) -/* flags for BPF_PROG_QUERY */ +/* Flags for BPF_PROG_QUERY. */ + +/* Query effective (directly attached + inherited from ancestor cgroups) + * programs that will be executed for events within a cgroup. + * attach_flags with this flag are returned only for directly attached programs. + */ #define BPF_F_QUERY_EFFECTIVE (1U << 0) enum bpf_stack_build_id_status { diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 2d6a2e572f56..52966e758fe5 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -359,7 +359,12 @@ enum bpf_attach_type { /* Enable memory-mapping BPF map */ #define BPF_F_MMAPABLE (1U << 10) -/* flags for BPF_PROG_QUERY */ +/* Flags for BPF_PROG_QUERY. */ + +/* Query effective (directly attached + inherited from ancestor cgroups) + * programs that will be executed for events within a cgroup. + * attach_flags with this flag are returned only for directly attached programs. + */ #define BPF_F_QUERY_EFFECTIVE (1U << 0) enum bpf_stack_build_id_status { -- cgit v1.2.3-71-gd317 From e9cdced78dc20c1592c1fb98ed064943007a46c5 Mon Sep 17 00:00:00 2001 From: Mat Martineau Date: Thu, 9 Jan 2020 07:59:14 -0800 Subject: net: Make sock protocol value checks more specific SK_PROTOCOL_MAX is only used in two places, for DECNet and AX.25. The limits have more to do with those protocol definitions than they do with the data type of sk_protocol, so remove SK_PROTOCOL_MAX and use U8_MAX directly. Reviewed-by: Eric Dumazet Signed-off-by: Mat Martineau Signed-off-by: David S. Miller --- include/net/sock.h | 1 - net/ax25/af_ax25.c | 2 +- net/decnet/af_decnet.c | 2 +- 3 files changed, 2 insertions(+), 3 deletions(-) (limited to 'include') diff --git a/include/net/sock.h b/include/net/sock.h index 8dff68b4c316..091e55428415 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -458,7 +458,6 @@ struct sock { sk_userlocks : 4, sk_protocol : 8, sk_type : 16; -#define SK_PROTOCOL_MAX U8_MAX u16 sk_gso_max_segs; u8 sk_pacing_shift; unsigned long sk_lingertime; diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c index 324306d6fde0..ff57ea89c27e 100644 --- a/net/ax25/af_ax25.c +++ b/net/ax25/af_ax25.c @@ -808,7 +808,7 @@ static int ax25_create(struct net *net, struct socket *sock, int protocol, struct sock *sk; ax25_cb *ax25; - if (protocol < 0 || protocol > SK_PROTOCOL_MAX) + if (protocol < 0 || protocol > U8_MAX) return -EINVAL; if (!net_eq(net, &init_net)) diff --git a/net/decnet/af_decnet.c b/net/decnet/af_decnet.c index e19a92a62e14..0a46ea3bddd5 100644 --- a/net/decnet/af_decnet.c +++ b/net/decnet/af_decnet.c @@ -670,7 +670,7 @@ static int dn_create(struct net *net, struct socket *sock, int protocol, { struct sock *sk; - if (protocol < 0 || protocol > SK_PROTOCOL_MAX) + if (protocol < 0 || protocol > U8_MAX) return -EINVAL; if (!net_eq(net, &init_net)) -- cgit v1.2.3-71-gd317 From bf9765145b856fa2e238a5b8a54453795ba30ad6 Mon Sep 17 00:00:00 2001 From: Mat Martineau Date: Thu, 9 Jan 2020 07:59:15 -0800 Subject: sock: Make sk_protocol a 16-bit value Match the 16-bit width of skbuff->protocol. Fills an 8-bit hole so sizeof(struct sock) does not change. Also take care of BPF field access for sk_type/sk_protocol. Both of them are now outside the bitfield, so we can use load instructions without further shifting/masking. v5 -> v6: - update eBPF accessors, too (Intel's kbuild test robot) v2 -> v3: - keep 'sk_type' 2 bytes aligned (Eric) v1 -> v2: - preserve sk_pacing_shift as bit field (Eric) Cc: Alexei Starovoitov Cc: Daniel Borkmann Cc: bpf@vger.kernel.org Co-developed-by: Paolo Abeni Signed-off-by: Paolo Abeni Co-developed-by: Matthieu Baerts Signed-off-by: Matthieu Baerts Signed-off-by: Mat Martineau Signed-off-by: David S.
Miller --- include/net/sock.h | 25 ++++--------------- include/trace/events/sock.h | 2 +- net/core/filter.c | 60 +++++++++++++++++---------------------------- 3 files changed, 28 insertions(+), 59 deletions(-) (limited to 'include') diff --git a/include/net/sock.h b/include/net/sock.h index 091e55428415..8766f9bc3e70 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -436,30 +436,15 @@ struct sock { * Because of non atomicity rules, all * changes are protected by socket lock. */ - unsigned int __sk_flags_offset[0]; -#ifdef __BIG_ENDIAN_BITFIELD -#define SK_FL_PROTO_SHIFT 16 -#define SK_FL_PROTO_MASK 0x00ff0000 - -#define SK_FL_TYPE_SHIFT 0 -#define SK_FL_TYPE_MASK 0x0000ffff -#else -#define SK_FL_PROTO_SHIFT 8 -#define SK_FL_PROTO_MASK 0x0000ff00 - -#define SK_FL_TYPE_SHIFT 16 -#define SK_FL_TYPE_MASK 0xffff0000 -#endif - - unsigned int sk_padding : 1, + u8 sk_padding : 1, sk_kern_sock : 1, sk_no_check_tx : 1, sk_no_check_rx : 1, - sk_userlocks : 4, - sk_protocol : 8, - sk_type : 16; - u16 sk_gso_max_segs; + sk_userlocks : 4; u8 sk_pacing_shift; + u16 sk_type; + u16 sk_protocol; + u16 sk_gso_max_segs; unsigned long sk_lingertime; struct proto *sk_prot_creator; rwlock_t sk_callback_lock; diff --git a/include/trace/events/sock.h b/include/trace/events/sock.h index 51fe9f6719eb..3ff12b90048d 100644 --- a/include/trace/events/sock.h +++ b/include/trace/events/sock.h @@ -147,7 +147,7 @@ TRACE_EVENT(inet_sock_set_state, __field(__u16, sport) __field(__u16, dport) __field(__u16, family) - __field(__u8, protocol) + __field(__u16, protocol) __array(__u8, saddr, 4) __array(__u8, daddr, 4) __array(__u8, saddr_v6, 16) diff --git a/net/core/filter.c b/net/core/filter.c index 42fd17c48c5f..ef01c5599501 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -7607,21 +7607,21 @@ u32 bpf_sock_convert_ctx_access(enum bpf_access_type type, break; case offsetof(struct bpf_sock, type): - BUILD_BUG_ON(HWEIGHT32(SK_FL_TYPE_MASK) != BITS_PER_BYTE * 2); - *insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->src_reg, - offsetof(struct sock, __sk_flags_offset)); - *insn++ = BPF_ALU32_IMM(BPF_AND, si->dst_reg, SK_FL_TYPE_MASK); - *insn++ = BPF_ALU32_IMM(BPF_RSH, si->dst_reg, SK_FL_TYPE_SHIFT); - *target_size = 2; + *insn++ = BPF_LDX_MEM( + BPF_FIELD_SIZEOF(struct sock, sk_type), + si->dst_reg, si->src_reg, + bpf_target_off(struct sock, sk_type, + sizeof_field(struct sock, sk_type), + target_size)); break; case offsetof(struct bpf_sock, protocol): - BUILD_BUG_ON(HWEIGHT32(SK_FL_PROTO_MASK) != BITS_PER_BYTE); - *insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->src_reg, - offsetof(struct sock, __sk_flags_offset)); - *insn++ = BPF_ALU32_IMM(BPF_AND, si->dst_reg, SK_FL_PROTO_MASK); - *insn++ = BPF_ALU32_IMM(BPF_RSH, si->dst_reg, SK_FL_PROTO_SHIFT); - *target_size = 1; + *insn++ = BPF_LDX_MEM( + BPF_FIELD_SIZEOF(struct sock, sk_protocol), + si->dst_reg, si->src_reg, + bpf_target_off(struct sock, sk_protocol, + sizeof_field(struct sock, sk_protocol), + target_size)); break; case offsetof(struct bpf_sock, src_ip4): @@ -7903,20 +7903,13 @@ static u32 sock_addr_convert_ctx_access(enum bpf_access_type type, break; case offsetof(struct bpf_sock_addr, type): - SOCK_ADDR_LOAD_NESTED_FIELD_SIZE_OFF( - struct bpf_sock_addr_kern, struct sock, sk, - __sk_flags_offset, BPF_W, 0); - *insn++ = BPF_ALU32_IMM(BPF_AND, si->dst_reg, SK_FL_TYPE_MASK); - *insn++ = BPF_ALU32_IMM(BPF_RSH, si->dst_reg, SK_FL_TYPE_SHIFT); + SOCK_ADDR_LOAD_NESTED_FIELD(struct bpf_sock_addr_kern, + struct sock, sk, sk_type); break; case offsetof(struct 
bpf_sock_addr, protocol): - SOCK_ADDR_LOAD_NESTED_FIELD_SIZE_OFF( - struct bpf_sock_addr_kern, struct sock, sk, - __sk_flags_offset, BPF_W, 0); - *insn++ = BPF_ALU32_IMM(BPF_AND, si->dst_reg, SK_FL_PROTO_MASK); - *insn++ = BPF_ALU32_IMM(BPF_RSH, si->dst_reg, - SK_FL_PROTO_SHIFT); + SOCK_ADDR_LOAD_NESTED_FIELD(struct bpf_sock_addr_kern, + struct sock, sk, sk_protocol); break; case offsetof(struct bpf_sock_addr, msg_src_ip4): @@ -8835,11 +8828,11 @@ sk_reuseport_is_valid_access(int off, int size, skb, \ SKB_FIELD) -#define SK_REUSEPORT_LOAD_SK_FIELD_SIZE_OFF(SK_FIELD, BPF_SIZE, EXTRA_OFF) \ - SOCK_ADDR_LOAD_NESTED_FIELD_SIZE_OFF(struct sk_reuseport_kern, \ - struct sock, \ - sk, \ - SK_FIELD, BPF_SIZE, EXTRA_OFF) +#define SK_REUSEPORT_LOAD_SK_FIELD(SK_FIELD) \ + SOCK_ADDR_LOAD_NESTED_FIELD(struct sk_reuseport_kern, \ - struct sock, \ + struct sock, \ + sk, \ + SK_FIELD) static u32 sk_reuseport_convert_ctx_access(enum bpf_access_type type, const struct bpf_insn *si, @@ -8863,16 +8856,7 @@ static u32 sk_reuseport_convert_ctx_access(enum bpf_access_type type, break; case offsetof(struct sk_reuseport_md, ip_protocol): - BUILD_BUG_ON(HWEIGHT32(SK_FL_PROTO_MASK) != BITS_PER_BYTE); - SK_REUSEPORT_LOAD_SK_FIELD_SIZE_OFF(__sk_flags_offset, - BPF_W, 0); - *insn++ = BPF_ALU32_IMM(BPF_AND, si->dst_reg, SK_FL_PROTO_MASK); - *insn++ = BPF_ALU32_IMM(BPF_RSH, si->dst_reg, - SK_FL_PROTO_SHIFT); - /* SK_FL_PROTO_MASK and SK_FL_PROTO_SHIFT are endian - * aware. No further narrowing or masking is needed. - */ - *target_size = 1; + SK_REUSEPORT_LOAD_SK_FIELD(sk_protocol); break; case offsetof(struct sk_reuseport_md, data_end): -- cgit v1.2.3-71-gd317 From faf391c3826cd29feae02078ca2022d2f912f7cc Mon Sep 17 00:00:00 2001 From: Mat Martineau Date: Thu, 9 Jan 2020 07:59:16 -0800 Subject: tcp: Define IPPROTO_MPTCP To open an MPTCP socket with socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP), IPPROTO_MPTCP needs a value that differs from IPPROTO_TCP. The existing IPPROTO numbers mostly map directly to IANA-specified protocol numbers. MPTCP does not have a protocol number allocated because MPTCP packets use the TCP protocol number. Use a private number that is not used OTA.
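From userspace the new constant is used directly in socket(); a hedged sketch of the call pattern (the TCP fallback policy is the application's choice, not something this patch mandates):

#include <errno.h>
#include <sys/socket.h>
#include <netinet/in.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262	/* the uapi value defined below */
#endif

int open_stream_socket(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);

	/* kernels without MPTCP support reject the protocol number */
	if (fd < 0 && errno == EPROTONOSUPPORT)
		fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
	return fd;
}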
Reviewed-by: Eric Dumazet Signed-off-by: Mat Martineau Signed-off-by: David S. Miller --- include/trace/events/sock.h | 3 ++- include/uapi/linux/in.h | 2 ++ tools/include/uapi/linux/in.h | 2 ++ 3 files changed, 6 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/include/trace/events/sock.h b/include/trace/events/sock.h index 3ff12b90048d..a966d4b5ab37 100644 --- a/include/trace/events/sock.h +++ b/include/trace/events/sock.h @@ -19,7 +19,8 @@ #define inet_protocol_names \ EM(IPPROTO_TCP) \ EM(IPPROTO_DCCP) \ - EMe(IPPROTO_SCTP) + EM(IPPROTO_SCTP) \ + EMe(IPPROTO_MPTCP) #define tcp_state_names \ EM(TCP_ESTABLISHED) \ diff --git a/include/uapi/linux/in.h b/include/uapi/linux/in.h index e7ad9d350a28..1521073b6348 100644 --- a/include/uapi/linux/in.h +++ b/include/uapi/linux/in.h @@ -76,6 +76,8 @@ enum { #define IPPROTO_MPLS IPPROTO_MPLS IPPROTO_RAW = 255, /* Raw IP packets */ #define IPPROTO_RAW IPPROTO_RAW + IPPROTO_MPTCP = 262, /* Multipath TCP connection */ +#define IPPROTO_MPTCP IPPROTO_MPTCP IPPROTO_MAX }; #endif diff --git a/tools/include/uapi/linux/in.h b/tools/include/uapi/linux/in.h index e7ad9d350a28..1521073b6348 100644 --- a/tools/include/uapi/linux/in.h +++ b/tools/include/uapi/linux/in.h @@ -76,6 +76,8 @@ enum { #define IPPROTO_MPLS IPPROTO_MPLS IPPROTO_RAW = 255, /* Raw IP packets */ #define IPPROTO_RAW IPPROTO_RAW + IPPROTO_MPTCP = 262, /* Multipath TCP connection */ +#define IPPROTO_MPTCP IPPROTO_MPTCP IPPROTO_MAX }; #endif -- cgit v1.2.3-71-gd317 From c74a39c861aeaf6b789b7abdbb3256f54c9fb365 Mon Sep 17 00:00:00 2001 From: Mat Martineau Date: Thu, 9 Jan 2020 07:59:17 -0800 Subject: tcp: Add MPTCP option number TCP option 30 is allocated for MPTCP by the IANA. Reviewed-by: Eric Dumazet Signed-off-by: Mat Martineau Signed-off-by: David S. Miller --- include/net/tcp.h | 1 + 1 file changed, 1 insertion(+) (limited to 'include') diff --git a/include/net/tcp.h b/include/net/tcp.h index 7df37e2fddca..85f1d7ff6e8b 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -182,6 +182,7 @@ void tcp_time_wait(struct sock *sk, int state, int timeo); #define TCPOPT_SACK 5 /* SACK Block */ #define TCPOPT_TIMESTAMP 8 /* Better RTT estimations/PAWS */ #define TCPOPT_MD5SIG 19 /* MD5 Signature (RFC2385) */ +#define TCPOPT_MPTCP 30 /* Multipath TCP (RFC6824) */ #define TCPOPT_FASTOPEN 34 /* Fast open (RFC7413) */ #define TCPOPT_EXP 254 /* Experimental */ /* Magic number to be after the option value for sharing TCP -- cgit v1.2.3-71-gd317 From 1323059301c8f36d933876233516245d882346a6 Mon Sep 17 00:00:00 2001 From: Mat Martineau Date: Thu, 9 Jan 2020 07:59:18 -0800 Subject: tcp, ulp: Add clone operation to tcp_ulp_ops If ULP is used on a listening socket, icsk_ulp_ops and icsk_ulp_data are copied when the listener is cloned. Sometimes the clone is immediately deleted, which will invoke the release op on the clone and likely corrupt the listening socket's icsk_ulp_data. The clone operation is invoked immediately after the clone is copied and gives the ULP type an opportunity to set up the clone socket and its icsk_ulp_data. The MPTCP ULP clone will silently fall back to plain TCP on allocation failure, so 'clone()' does not need to return an error code.
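The kind of implementation this hook enables looks roughly like the following (a hedged sketch, not the MPTCP code; my_ulp_ctx and the duplication strategy are illustrative):

/* give the clone its own copy of the per-socket ULP state, so a later
 * release() on the clone can never touch the listener's icsk_ulp_data
 */
static void my_ulp_clone(const struct request_sock *req, struct sock *newsk,
			 const gfp_t priority)
{
	struct inet_connection_sock *icsk = inet_csk(newsk);
	struct my_ulp_ctx *old = icsk->icsk_ulp_data;

	/* the clone inherited the listener's pointer; replace it */
	icsk->icsk_ulp_data = kmemdup(old, sizeof(*old), priority);

	/* on allocation failure, silently fall back to plain TCP */
	if (!icsk->icsk_ulp_data)
		icsk->icsk_ulp_ops = NULL;
}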
v6 -> v7: - move and rename ulp clone helper to make it inline-friendly v5 -> v6: - clarified MPTCP clone usage in commit message Signed-off-by: Mat Martineau Signed-off-by: David S. Miller --- include/net/tcp.h | 3 +++ net/ipv4/inet_connection_sock.c | 14 ++++++++++++++ 2 files changed, 17 insertions(+) (limited to 'include') diff --git a/include/net/tcp.h b/include/net/tcp.h index 85f1d7ff6e8b..ac52633e7061 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -2154,6 +2154,9 @@ struct tcp_ulp_ops { /* diagnostic */ int (*get_info)(const struct sock *sk, struct sk_buff *skb); size_t (*get_info_size)(const struct sock *sk); + /* clone ulp */ + void (*clone)(const struct request_sock *req, struct sock *newsk, + const gfp_t priority); char name[TCP_ULP_NAME_MAX]; struct module *owner; diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c index 18c0d5bffe12..6a691fd04398 100644 --- a/net/ipv4/inet_connection_sock.c +++ b/net/ipv4/inet_connection_sock.c @@ -770,6 +770,18 @@ void inet_csk_reqsk_queue_hash_add(struct sock *sk, struct request_sock *req, } EXPORT_SYMBOL_GPL(inet_csk_reqsk_queue_hash_add); +static void inet_clone_ulp(const struct request_sock *req, struct sock *newsk, + const gfp_t priority) +{ + struct inet_connection_sock *icsk = inet_csk(newsk); + + if (!icsk->icsk_ulp_ops) + return; + + if (icsk->icsk_ulp_ops->clone) + icsk->icsk_ulp_ops->clone(req, newsk, priority); +} + /** * inet_csk_clone_lock - clone an inet socket, and lock its clone * @sk: the socket to clone @@ -810,6 +822,8 @@ struct sock *inet_csk_clone_lock(const struct sock *sk, /* Deinitialize accept_queue to trap illegal accesses. */ memset(&newicsk->icsk_accept_queue, 0, sizeof(newicsk->icsk_accept_queue)); + inet_clone_ulp(req, newsk, priority); + security_inet_csk_clone(newsk, req); } return newsk; -- cgit v1.2.3-71-gd317 From 3ee17bc78e0f3fdeff9890993e8f3a9f5145163b Mon Sep 17 00:00:00 2001 From: Mat Martineau Date: Thu, 9 Jan 2020 07:59:19 -0800 Subject: mptcp: Add MPTCP to skb extensions Add an enum value for MPTCP and update config dependencies. v5 -> v6: - fixed '__unused' field size Co-developed-by: Matthieu Baerts Signed-off-by: Matthieu Baerts Co-developed-by: Paolo Abeni Signed-off-by: Paolo Abeni Signed-off-by: Mat Martineau Signed-off-by: David S. Miller --- MAINTAINERS | 10 ++++++++++ include/linux/skbuff.h | 3 +++ include/net/mptcp.h | 28 ++++++++++++++++++++++++++++ net/core/skbuff.c | 7 +++++++ 4 files changed, 48 insertions(+) create mode 100644 include/net/mptcp.h (limited to 'include') diff --git a/MAINTAINERS b/MAINTAINERS index 218089f38e11..9dcb9cab5705 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -11573,6 +11573,16 @@ F: net/ipv6/calipso.c F: net/netfilter/xt_CONNSECMARK.c F: net/netfilter/xt_SECMARK.c +NETWORKING [MPTCP] +M: Mat Martineau +M: Matthieu Baerts +L: netdev@vger.kernel.org +L: mptcp@lists.01.org +W: https://github.com/multipath-tcp/mptcp_net-next/wiki +B: https://github.com/multipath-tcp/mptcp_net-next/issues +S: Maintained +F: include/net/mptcp.h + NETWORKING [TCP] M: Eric Dumazet L: netdev@vger.kernel.org diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 64e5b1be9ff5..f5c27600b410 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -4096,6 +4096,9 @@ enum skb_ext_id { #endif #if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) TC_SKB_EXT, +#endif +#if IS_ENABLED(CONFIG_MPTCP) + SKB_EXT_MPTCP, #endif SKB_EXT_NUM, /* must be last */ }; diff --git a/include/net/mptcp.h b/include/net/mptcp.h new file mode 100644 index 000000000000..326043c29c0a --- /dev/null +++ b/include/net/mptcp.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Multipath TCP + * + * Copyright (c) 2017 - 2019, Intel Corporation. 
+ */ + +#ifndef __NET_MPTCP_H +#define __NET_MPTCP_H + +#include + +/* MPTCP sk_buff extension data */ +struct mptcp_ext { + u64 data_ack; + u64 data_seq; + u32 subflow_seq; + u16 data_len; + u8 use_map:1, + dsn64:1, + data_fin:1, + use_ack:1, + ack64:1, + __unused:3; + /* one byte hole */ +}; + +#endif /* __NET_MPTCP_H */ diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 44b0894d8ae1..a4106da23c34 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -68,6 +68,7 @@ #include #include #include +#include #include #include @@ -4109,6 +4110,9 @@ static const u8 skb_ext_type_len[] = { #if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) [TC_SKB_EXT] = SKB_EXT_CHUNKSIZEOF(struct tc_skb_ext), #endif +#if IS_ENABLED(CONFIG_MPTCP) + [SKB_EXT_MPTCP] = SKB_EXT_CHUNKSIZEOF(struct mptcp_ext), +#endif }; static __always_inline unsigned int skb_ext_total_length(void) @@ -4122,6 +4126,9 @@ static __always_inline unsigned int skb_ext_total_length(void) #endif #if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) skb_ext_type_len[TC_SKB_EXT] + +#endif +#if IS_ENABLED(CONFIG_MPTCP) + skb_ext_type_len[SKB_EXT_MPTCP] + #endif 0; } -- cgit v1.2.3-71-gd317 From 85712484110df308215077be6ee21c4e57d7dec2 Mon Sep 17 00:00:00 2001 From: Mat Martineau Date: Thu, 9 Jan 2020 07:59:20 -0800 Subject: tcp: coalesce/collapse must respect MPTCP extensions Coalesce and collapse of packets carrying MPTCP extensions is allowed when the newer packet has no extension or the extensions carried by both packets are equal. This allows merging of TSO packet trains and even cross-TSO packets, and does not require any additional action when moving data into existing SKBs. v3 -> v4: - allow collapsing, under mptcp_skb_can_collapse() constraint v5 -> v6: - clarify MPTCP skb extensions must always be cleared at allocation time Co-developed-by: Paolo Abeni Signed-off-by: Paolo Abeni Signed-off-by: Mat Martineau Signed-off-by: David S. Miller --- include/net/mptcp.h | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++ include/net/tcp.h | 8 ++++++++ net/ipv4/tcp_input.c | 11 +++++++--- net/ipv4/tcp_output.c | 2 +- 4 files changed, 74 insertions(+), 4 deletions(-) (limited to 'include') diff --git a/include/net/mptcp.h b/include/net/mptcp.h index 326043c29c0a..0573ae75c3db 100644 --- a/include/net/mptcp.h +++ b/include/net/mptcp.h @@ -8,6 +8,7 @@ #ifndef __NET_MPTCP_H #define __NET_MPTCP_H +#include #include /* MPTCP sk_buff extension data */ @@ -25,4 +26,60 @@ struct mptcp_ext { /* one byte hole */ }; +#ifdef CONFIG_MPTCP + +/* move the skb extension owership, with the assumption that 'to' is + * newly allocated + */ +static inline void mptcp_skb_ext_move(struct sk_buff *to, + struct sk_buff *from) +{ + if (!skb_ext_exist(from, SKB_EXT_MPTCP)) + return; + + if (WARN_ON_ONCE(to->active_extensions)) + skb_ext_put(to); + + to->active_extensions = from->active_extensions; + to->extensions = from->extensions; + from->active_extensions = 0; +} + +static inline bool mptcp_ext_matches(const struct mptcp_ext *to_ext, + const struct mptcp_ext *from_ext) +{ + /* MPTCP always clears the ext when adding it to the skb, so + * holes do not bother us here + */ + return !from_ext || + (to_ext && from_ext && + !memcmp(from_ext, to_ext, sizeof(struct mptcp_ext))); +} + +/* check if skbs can be collapsed. + * MPTCP collapse is allowed if neither @to or @from carry an mptcp data + * mapping, or if the extension of @to is the same as @from. + * Collapsing is not possible if @to lacks an extension, but @from carries one. 
+ */ +static inline bool mptcp_skb_can_collapse(const struct sk_buff *to, + const struct sk_buff *from) +{ + return mptcp_ext_matches(skb_ext_find(to, SKB_EXT_MPTCP), + skb_ext_find(from, SKB_EXT_MPTCP)); +} + +#else + +static inline void mptcp_skb_ext_move(struct sk_buff *to, + const struct sk_buff *from) +{ +} + +static inline bool mptcp_skb_can_collapse(const struct sk_buff *to, + const struct sk_buff *from) +{ + return true; +} + +#endif /* CONFIG_MPTCP */ #endif /* __NET_MPTCP_H */ diff --git a/include/net/tcp.h b/include/net/tcp.h index ac52633e7061..13bc83fab454 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -39,6 +39,7 @@ #include #include #include +#include #include #include @@ -978,6 +979,13 @@ static inline bool tcp_skb_can_collapse_to(const struct sk_buff *skb) return likely(!TCP_SKB_CB(skb)->eor); } +static inline bool tcp_skb_can_collapse(const struct sk_buff *to, + const struct sk_buff *from) +{ + return likely(tcp_skb_can_collapse_to(to) && + mptcp_skb_can_collapse(to, from)); +} + /* Events passed to congestion control interface */ enum tcp_ca_event { CA_EVENT_TX_START, /* first transmit when no packets in flight */ diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 062b696a5fa1..2914fdf1d543 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -1422,7 +1422,7 @@ static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb, if ((TCP_SKB_CB(prev)->sacked & TCPCB_TAGBITS) != TCPCB_SACKED_ACKED) goto fallback; - if (!tcp_skb_can_collapse_to(prev)) + if (!tcp_skb_can_collapse(prev, skb)) goto fallback; in_sack = !after(start_seq, TCP_SKB_CB(skb)->seq) && @@ -4423,6 +4423,9 @@ static bool tcp_try_coalesce(struct sock *sk, if (TCP_SKB_CB(from)->seq != TCP_SKB_CB(to)->end_seq) return false; + if (!mptcp_skb_can_collapse(to, from)) + return false; + #ifdef CONFIG_TLS_DEVICE if (from->decrypted != to->decrypted) return false; @@ -4932,7 +4935,7 @@ restart: /* The first skb to collapse is: * - not SYN/FIN and * - bloated or contains data before "start" or - * overlaps to the next one. + * overlaps to the next one and mptcp allow collapsing. */ if (!(TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)) && (tcp_win_from_space(sk, skb->truesize) > skb->len || @@ -4941,7 +4944,7 @@ restart: break; } - if (n && n != tail && + if (n && n != tail && mptcp_skb_can_collapse(skb, n) && TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(n)->seq) { end_of_skbs = false; break; @@ -4974,6 +4977,7 @@ restart: else __skb_queue_tail(&tmp, nskb); /* defer rbtree insertion */ skb_set_owner_r(nskb, sk); + mptcp_skb_ext_move(nskb, skb); /* Copy data, releasing collapsed skbs. 
*/ while (copy > 0) { @@ -4993,6 +4997,7 @@ restart: skb = tcp_collapse_one(sk, skb, list, root); if (!skb || skb == tail || + !mptcp_skb_can_collapse(nskb, skb) || (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN))) goto end; #ifdef CONFIG_TLS_DEVICE diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 58c92a7d671c..3ce7fe1c4076 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -2865,7 +2865,7 @@ static void tcp_retrans_try_collapse(struct sock *sk, struct sk_buff *to, if (!tcp_can_collapse(sk, skb)) break; - if (!tcp_skb_can_collapse_to(to)) + if (!tcp_skb_can_collapse(to, skb)) break; space -= skb->len; -- cgit v1.2.3-71-gd317 From 35b2c32116091ef87a15c604cac363da8322a288 Mon Sep 17 00:00:00 2001 From: Mat Martineau Date: Thu, 9 Jan 2020 07:59:21 -0800 Subject: tcp: Export TCP functions and ops struct MPTCP will make use of tcp_send_mss() and tcp_push() when sending data to specific TCP subflows. tcp_request_sock_ipvX_ops and ipvX_specific will be referenced during TCP subflow creation. Co-developed-by: Peter Krystad Signed-off-by: Peter Krystad Reviewed-by: Eric Dumazet Signed-off-by: Mat Martineau Signed-off-by: David S. Miller --- include/net/tcp.h | 8 ++++++++ net/ipv4/tcp.c | 6 +++--- net/ipv4/tcp_ipv4.c | 2 +- net/ipv6/tcp_ipv6.c | 6 +++--- 4 files changed, 15 insertions(+), 7 deletions(-) (limited to 'include') diff --git a/include/net/tcp.h b/include/net/tcp.h index 13bc83fab454..5e4133d09b9d 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -330,6 +330,9 @@ int tcp_sendpage_locked(struct sock *sk, struct page *page, int offset, size_t size, int flags); ssize_t do_tcp_sendpages(struct sock *sk, struct page *page, int offset, size_t size, int flags); +int tcp_send_mss(struct sock *sk, int *size_goal, int flags); +void tcp_push(struct sock *sk, int flags, int mss_now, int nonagle, + int size_goal); void tcp_release_cb(struct sock *sk); void tcp_wfree(struct sk_buff *skb); void tcp_write_timer_handler(struct sock *sk); @@ -2011,6 +2014,11 @@ struct tcp_request_sock_ops { enum tcp_synack_type synack_type); }; +extern const struct tcp_request_sock_ops tcp_request_sock_ipv4_ops; +#if IS_ENABLED(CONFIG_IPV6) +extern const struct tcp_request_sock_ops tcp_request_sock_ipv6_ops; +#endif + #ifdef CONFIG_SYN_COOKIES static inline __u32 cookie_init_sequence(const struct tcp_request_sock_ops *ops, const struct sock *sk, struct sk_buff *skb, diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index f09fbc85b108..6711a97de3ce 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -690,8 +690,8 @@ static bool tcp_should_autocork(struct sock *sk, struct sk_buff *skb, refcount_read(&sk->sk_wmem_alloc) > skb->truesize; } -static void tcp_push(struct sock *sk, int flags, int mss_now, - int nonagle, int size_goal) +void tcp_push(struct sock *sk, int flags, int mss_now, + int nonagle, int size_goal) { struct tcp_sock *tp = tcp_sk(sk); struct sk_buff *skb; @@ -925,7 +925,7 @@ static unsigned int tcp_xmit_size_goal(struct sock *sk, u32 mss_now, return max(size_goal, mss_now); } -static int tcp_send_mss(struct sock *sk, int *size_goal, int flags) +int tcp_send_mss(struct sock *sk, int *size_goal, int flags) { int mss_now; diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 4adac9c75343..fedb537839ec 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -1426,7 +1426,7 @@ struct request_sock_ops tcp_request_sock_ops __read_mostly = { .syn_ack_timeout = tcp_syn_ack_timeout, }; -static const struct tcp_request_sock_ops tcp_request_sock_ipv4_ops = { +const 
struct tcp_request_sock_ops tcp_request_sock_ipv4_ops = { .mss_clamp = TCP_MSS_DEFAULT, #ifdef CONFIG_TCP_MD5SIG .req_md5_lookup = tcp_v4_md5_lookup, diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index 95e4e1e95db2..5b5260103b65 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -75,7 +75,7 @@ static void tcp_v6_reqsk_send_ack(const struct sock *sk, struct sk_buff *skb, static int tcp_v6_do_rcv(struct sock *sk, struct sk_buff *skb); static const struct inet_connection_sock_af_ops ipv6_mapped; -static const struct inet_connection_sock_af_ops ipv6_specific; +const struct inet_connection_sock_af_ops ipv6_specific; #ifdef CONFIG_TCP_MD5SIG static const struct tcp_sock_af_ops tcp_sock_ipv6_specific; static const struct tcp_sock_af_ops tcp_sock_ipv6_mapped_specific; @@ -819,7 +819,7 @@ struct request_sock_ops tcp6_request_sock_ops __read_mostly = { .syn_ack_timeout = tcp_syn_ack_timeout, }; -static const struct tcp_request_sock_ops tcp_request_sock_ipv6_ops = { +const struct tcp_request_sock_ops tcp_request_sock_ipv6_ops = { .mss_clamp = IPV6_MIN_MTU - sizeof(struct tcphdr) - sizeof(struct ipv6hdr), #ifdef CONFIG_TCP_MD5SIG @@ -1794,7 +1794,7 @@ static struct timewait_sock_ops tcp6_timewait_sock_ops = { .twsk_destructor = tcp_twsk_destructor, }; -static const struct inet_connection_sock_af_ops ipv6_specific = { +const struct inet_connection_sock_af_ops ipv6_specific = { .queue_xmit = inet6_csk_xmit, .send_check = tcp_v6_send_check, .rebuild_header = inet6_sk_rebuild_header, -- cgit v1.2.3-71-gd317 From e66b2f31a068dd67172008459678821a79e4ea24 Mon Sep 17 00:00:00 2001 From: Paolo Abeni Date: Thu, 9 Jan 2020 07:59:23 -0800 Subject: tcp: clean ext on tx recycle Otherwise we will find stray/unexpected/old extension values on the next iteration. In tcp_write_xmit() we can end up splitting an already queued skb in two parts, via tso_fragment(). The newly created skb can be allocated via the tx cache, and an upper layer will not be aware of it, so that upper layer cannot set the ext properly. Resetting the ext on recycle ensures that stale data is not propagated into packet headers or elsewhere. An alternative would be to add an additional hook in tso_fragment() or in sk_stream_alloc_skb() to init the ext for upper layers that need it. Co-developed-by: Florian Westphal Signed-off-by: Florian Westphal Signed-off-by: Paolo Abeni Reviewed-by: Eric Dumazet Signed-off-by: Mat Martineau Signed-off-by: David S. Miller --- include/net/sock.h | 1 + 1 file changed, 1 insertion(+) (limited to 'include') diff --git a/include/net/sock.h b/include/net/sock.h index 8766f9bc3e70..432ff73d20f3 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -1464,6 +1464,7 @@ static inline void sk_wmem_free_skb(struct sock *sk, struct sk_buff *skb) sk_mem_uncharge(sk, skb->truesize); if (static_branch_unlikely(&tcp_tx_skb_cache_key) && !sk->sk_tx_skb_cache && !skb_cloned(skb)) { + skb_ext_reset(skb); skb_zcopy_clear(skb, true); sk->sk_tx_skb_cache = skb; return; -- cgit v1.2.3-71-gd317 From 8b69a803814bb8b14155ea60df83f6d57527e69e Mon Sep 17 00:00:00 2001 From: Paolo Abeni Date: Thu, 9 Jan 2020 07:59:24 -0800 Subject: skb: add helpers to allocate ext independently from sk_buff Currently we can allocate the extension only after the skb; this change allows the user to do the opposite, which will simplify allocation failure handling in MPTCP.
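The intended call pattern looks roughly like this (a hedged sketch of an MPTCP-style user; the transmit-path framing around it is illustrative):

/* allocate the extension storage first, while failure is still easy
 * to handle, then attach it once the skb exists
 */
static struct sk_buff *alloc_skb_with_mptcp_ext(struct sock *sk)
{
	struct skb_ext *ext = __skb_ext_alloc();
	struct mptcp_ext *mpext;
	struct sk_buff *skb;

	if (!ext)
		return NULL;	/* no skb yet, nothing to unwind */

	skb = alloc_skb(MAX_TCP_HEADER, sk->sk_allocation);
	if (!skb) {
		__skb_ext_put(ext);	/* drop the unused storage */
		return NULL;
	}

	/* __skb_ext_set() takes ownership of 'ext' */
	mpext = __skb_ext_set(skb, SKB_EXT_MPTCP, ext);
	memset(mpext, 0, sizeof(*mpext));	/* MPTCP clears the ext on add */
	return skb;
}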
Signed-off-by: Paolo Abeni Signed-off-by: Mat Martineau Signed-off-by: David S. Miller --- include/linux/skbuff.h | 3 +++ net/core/skbuff.c | 35 +++++++++++++++++++++++++++++++++-- 2 files changed, 36 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index f5c27600b410..016b3c4ab99a 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -4120,6 +4120,9 @@ struct skb_ext { char data[0] __aligned(8); }; +struct skb_ext *__skb_ext_alloc(void); +void *__skb_ext_set(struct sk_buff *skb, enum skb_ext_id id, + struct skb_ext *ext); void *skb_ext_add(struct sk_buff *skb, enum skb_ext_id id); void __skb_ext_del(struct sk_buff *skb, enum skb_ext_id id); void __skb_ext_put(struct skb_ext *ext); diff --git a/net/core/skbuff.c b/net/core/skbuff.c index a4106da23c34..48a7029529c9 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -5987,7 +5987,14 @@ static void *skb_ext_get_ptr(struct skb_ext *ext, enum skb_ext_id id) return (void *)ext + (ext->offset[id] * SKB_EXT_ALIGN_VALUE); } -static struct skb_ext *skb_ext_alloc(void) +/** + * __skb_ext_alloc - allocate a new skb extensions storage + * + * Returns the newly allocated pointer. The pointer can later attached to a + * skb via __skb_ext_set(). + * Note: caller must handle the skb_ext as an opaque data. + */ +struct skb_ext *__skb_ext_alloc(void) { struct skb_ext *new = kmem_cache_alloc(skbuff_ext_cache, GFP_ATOMIC); @@ -6027,6 +6034,30 @@ static struct skb_ext *skb_ext_maybe_cow(struct skb_ext *old, return new; } +/** + * __skb_ext_set - attach the specified extension storage to this skb + * @skb: buffer + * @id: extension id + * @ext: extension storage previously allocated via __skb_ext_alloc() + * + * Existing extensions, if any, are cleared. + * + * Returns the pointer to the extension. + */ +void *__skb_ext_set(struct sk_buff *skb, enum skb_ext_id id, + struct skb_ext *ext) +{ + unsigned int newlen, newoff = SKB_EXT_CHUNKSIZEOF(*ext); + + skb_ext_put(skb); + newlen = newoff + skb_ext_type_len[id]; + ext->chunks = newlen; + ext->offset[id] = newoff; + skb->extensions = ext; + skb->active_extensions = 1 << id; + return skb_ext_get_ptr(ext, id); +} + /** * skb_ext_add - allocate space for given extension, COW if needed * @skb: buffer @@ -6060,7 +6091,7 @@ void *skb_ext_add(struct sk_buff *skb, enum skb_ext_id id) } else { newoff = SKB_EXT_CHUNKSIZEOF(*new); - new = skb_ext_alloc(); + new = __skb_ext_alloc(); if (!new) return NULL; } -- cgit v1.2.3-71-gd317 From 51c39bb1d5d105a02e29aa7960f0a395086e6342 Mon Sep 17 00:00:00 2001 From: Alexei Starovoitov Date: Thu, 9 Jan 2020 22:41:20 -0800 Subject: bpf: Introduce function-by-function verification New llvm and old llvm with libbpf help produce BTF that distinguishes global and static functions. Unlike the arguments of a static function, the arguments of global functions cannot be removed or optimized away by llvm. The compiler has to use exactly the arguments specified in a function prototype. The argument type information allows the verifier to validate each global function independently. For now the only supported argument types are pointer to context and scalars. In the future, pointers to structures, sizes, and pointers to packet data can be supported as well. Consider the following example: static int f1(int ...) { ... } int f3(int b); int f2(int a) { f1(a) + f3(a); } int f3(int b) { ... } int main(...) { f1(...) + f2(...) + f3(...); } The verifier will start its safety checks from the first global function f2(). It will recursively descend into f1() because it's static. 
Such function-by-function verification can drastically improve the speed of verification and reduce its complexity. Note that the stack limit of 512 bytes still applies to the call chain regardless of whether the functions are static or global. The nesting limit of 8 levels also still applies. The same recursion prevention checks are in place as well.

The type information and static/global kind are preserved after verification; hence, in the above example, global functions f2() and f3() can later be replaced by equivalent functions with the same types that are loaded and verified later, without affecting the safety of this main() program. Such replacement (re-linking) of global functions is a subject of future patches.

Signed-off-by: Alexei Starovoitov Signed-off-by: Daniel Borkmann Acked-by: Song Liu Link: https://lore.kernel.org/bpf/20200110064124.1760511-3-ast@kernel.org --- include/linux/bpf.h | 7 +- include/linux/bpf_verifier.h | 10 +- include/uapi/linux/btf.h | 6 ++ kernel/bpf/btf.c | 175 ++++++++++++++++++++++++------ kernel/bpf/verifier.c | 252 ++++++++++++++++++++++++++++++++++--------- 5 files changed, 366 insertions(+), 84 deletions(-) (limited to 'include')

diff --git a/include/linux/bpf.h b/include/linux/bpf.h index a7bfe8a388c6..aed2bc39d72b 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h
@@ -566,6 +566,7 @@ static inline void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, #endif struct bpf_func_info_aux { + u16 linkage; bool unreliable; };
@@ -1081,7 +1082,11 @@ int btf_distill_func_proto(struct bpf_verifier_log *log, const char *func_name, struct btf_func_model *m); -int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog); +struct bpf_reg_state; +int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog, + struct bpf_reg_state *regs); +int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog, + struct bpf_reg_state *reg); struct bpf_prog *bpf_prog_by_id(u32 id);

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 26e40de9ef55..5406e6e96585 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h
@@ -304,11 +304,13 @@ struct bpf_insn_aux_data { u64 map_key_state; /* constant (32 bit) key tracking for maps */ int ctx_field_size; /* the ctx field size for load insn, maybe 0 */ int sanitize_stack_off; /* stack slot to be cleared */ - bool seen; /* this insn was processed by the verifier */ + u32 seen; /* this insn was processed by the verifier at env->pass_cnt */ bool zext_dst; /* this insn zero extends dst reg */ u8 alu_state; /* used in combination with alu_limit */ - bool prune_point; + + /* below fields are initialized once */ unsigned int orig_idx; /* original instruction index */ + bool prune_point; }; #define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */
@@ -379,6 +381,7 @@ struct bpf_verifier_env { int *insn_stack; int cur_stack; } cfg; + u32 pass_cnt; /* number of times do_check() was
called */ u32 subprog_cnt; /* number of instructions analyzed by the verifier */ u32 prev_insn_processed, insn_processed; @@ -428,4 +431,7 @@ bpf_prog_offload_replace_insn(struct bpf_verifier_env *env, u32 off, void bpf_prog_offload_remove_insns(struct bpf_verifier_env *env, u32 off, u32 cnt); +int check_ctx_reg(struct bpf_verifier_env *env, + const struct bpf_reg_state *reg, int regno); + #endif /* _LINUX_BPF_VERIFIER_H */ diff --git a/include/uapi/linux/btf.h b/include/uapi/linux/btf.h index 1a2898c482ee..5a667107ad2c 100644 --- a/include/uapi/linux/btf.h +++ b/include/uapi/linux/btf.h @@ -146,6 +146,12 @@ enum { BTF_VAR_GLOBAL_EXTERN = 2, }; +enum btf_func_linkage { + BTF_FUNC_STATIC = 0, + BTF_FUNC_GLOBAL = 1, + BTF_FUNC_EXTERN = 2, +}; + /* BTF_KIND_VAR is followed by a single "struct btf_var" to describe * additional information related to the variable such as its linkage. */ diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 81d9cf75cacd..832b5d7fd892 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -2651,8 +2651,8 @@ static s32 btf_func_check_meta(struct btf_verifier_env *env, return -EINVAL; } - if (btf_type_vlen(t)) { - btf_verifier_log_type(env, t, "vlen != 0"); + if (btf_type_vlen(t) > BTF_FUNC_GLOBAL) { + btf_verifier_log_type(env, t, "Invalid func linkage"); return -EINVAL; } @@ -3506,7 +3506,8 @@ static u8 bpf_ctx_convert_map[] = { static const struct btf_member * btf_get_prog_ctx_type(struct bpf_verifier_log *log, struct btf *btf, - const struct btf_type *t, enum bpf_prog_type prog_type) + const struct btf_type *t, enum bpf_prog_type prog_type, + int arg) { const struct btf_type *conv_struct; const struct btf_type *ctx_struct; @@ -3527,12 +3528,13 @@ btf_get_prog_ctx_type(struct bpf_verifier_log *log, struct btf *btf, * is not supported yet. * BPF_PROG_TYPE_RAW_TRACEPOINT is fine. */ - bpf_log(log, "BPF program ctx type is not a struct\n"); + if (log->level & BPF_LOG_LEVEL) + bpf_log(log, "arg#%d type is not a struct\n", arg); return NULL; } tname = btf_name_by_offset(btf, t->name_off); if (!tname) { - bpf_log(log, "BPF program ctx struct doesn't have a name\n"); + bpf_log(log, "arg#%d struct doesn't have a name\n", arg); return NULL; } /* prog_type is valid bpf program type. No need for bounds check. */ @@ -3565,11 +3567,12 @@ btf_get_prog_ctx_type(struct bpf_verifier_log *log, struct btf *btf, static int btf_translate_to_vmlinux(struct bpf_verifier_log *log, struct btf *btf, const struct btf_type *t, - enum bpf_prog_type prog_type) + enum bpf_prog_type prog_type, + int arg) { const struct btf_member *prog_ctx_type, *kern_ctx_type; - prog_ctx_type = btf_get_prog_ctx_type(log, btf, t, prog_type); + prog_ctx_type = btf_get_prog_ctx_type(log, btf, t, prog_type, arg); if (!prog_ctx_type) return -ENOENT; kern_ctx_type = prog_ctx_type + 1; @@ -3731,7 +3734,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type, info->reg_type = PTR_TO_BTF_ID; if (tgt_prog) { - ret = btf_translate_to_vmlinux(log, btf, t, tgt_prog->type); + ret = btf_translate_to_vmlinux(log, btf, t, tgt_prog->type, arg); if (ret > 0) { info->btf_id = ret; return true; @@ -4112,11 +4115,16 @@ int btf_distill_func_proto(struct bpf_verifier_log *log, return 0; } -int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog) +/* Compare BTF of a function with given bpf_reg_state. + * Returns: + * EFAULT - there is a verifier bug. Abort verification. + * EINVAL - there is a type mismatch or BTF is not available. + * 0 - BTF matches with what bpf_reg_state expects. 
+ * Only PTR_TO_CTX and SCALAR_VALUE states are recognized. + */ +int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog, + struct bpf_reg_state *reg) { - struct bpf_verifier_state *st = env->cur_state; - struct bpf_func_state *func = st->frame[st->curframe]; - struct bpf_reg_state *reg = func->regs; struct bpf_verifier_log *log = &env->log; struct bpf_prog *prog = env->prog; struct btf *btf = prog->aux->btf; @@ -4126,27 +4134,30 @@ int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog) const char *tname; if (!prog->aux->func_info) - return 0; + return -EINVAL; btf_id = prog->aux->func_info[subprog].type_id; if (!btf_id) - return 0; + return -EFAULT; if (prog->aux->func_info_aux[subprog].unreliable) - return 0; + return -EINVAL; t = btf_type_by_id(btf, btf_id); if (!t || !btf_type_is_func(t)) { - bpf_log(log, "BTF of subprog %d doesn't point to KIND_FUNC\n", + /* These checks were already done by the verifier while loading + * struct bpf_func_info + */ + bpf_log(log, "BTF of func#%d doesn't point to KIND_FUNC\n", subprog); - return -EINVAL; + return -EFAULT; } tname = btf_name_by_offset(btf, t->name_off); t = btf_type_by_id(btf, t->type); if (!t || !btf_type_is_func_proto(t)) { - bpf_log(log, "Invalid type of func %s\n", tname); - return -EINVAL; + bpf_log(log, "Invalid BTF of func %s\n", tname); + return -EFAULT; } args = (const struct btf_param *)(t + 1); nargs = btf_type_vlen(t); @@ -4172,25 +4183,127 @@ int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog) bpf_log(log, "R%d is not a pointer\n", i + 1); goto out; } - /* If program is passing PTR_TO_CTX into subprogram - * check that BTF type matches. + /* If function expects ctx type in BTF check that caller + * is passing PTR_TO_CTX. */ - if (reg[i + 1].type == PTR_TO_CTX && - !btf_get_prog_ctx_type(log, btf, t, prog->type)) - goto out; - /* All other pointers are ok */ - continue; + if (btf_get_prog_ctx_type(log, btf, t, prog->type, i)) { + if (reg[i + 1].type != PTR_TO_CTX) { + bpf_log(log, + "arg#%d expected pointer to ctx, but got %s\n", + i, btf_kind_str[BTF_INFO_KIND(t->info)]); + goto out; + } + if (check_ctx_reg(env, ®[i + 1], i + 1)) + goto out; + continue; + } } - bpf_log(log, "Unrecognized argument type %s\n", - btf_kind_str[BTF_INFO_KIND(t->info)]); + bpf_log(log, "Unrecognized arg#%d type %s\n", + i, btf_kind_str[BTF_INFO_KIND(t->info)]); goto out; } return 0; out: - /* LLVM optimizations can remove arguments from static functions. */ - bpf_log(log, - "Type info disagrees with actual arguments due to compiler optimizations\n"); + /* Compiler optimizations can remove arguments from static functions + * or mismatched type can be passed into a global function. + * In such cases mark the function as unreliable from BTF point of view. + */ prog->aux->func_info_aux[subprog].unreliable = true; + return -EINVAL; +} + +/* Convert BTF of a function into bpf_reg_state if possible + * Returns: + * EFAULT - there is a verifier bug. Abort verification. + * EINVAL - cannot convert BTF. + * 0 - Successfully converted BTF into bpf_reg_state + * (either PTR_TO_CTX or SCALAR_VALUE). 
+ */ +int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog, + struct bpf_reg_state *reg) +{ + struct bpf_verifier_log *log = &env->log; + struct bpf_prog *prog = env->prog; + struct btf *btf = prog->aux->btf; + const struct btf_param *args; + const struct btf_type *t; + u32 i, nargs, btf_id; + const char *tname; + + if (!prog->aux->func_info || + prog->aux->func_info_aux[subprog].linkage != BTF_FUNC_GLOBAL) { + bpf_log(log, "Verifier bug\n"); + return -EFAULT; + } + + btf_id = prog->aux->func_info[subprog].type_id; + if (!btf_id) { + bpf_log(log, "Global functions need valid BTF\n"); + return -EFAULT; + } + + t = btf_type_by_id(btf, btf_id); + if (!t || !btf_type_is_func(t)) { + /* These checks were already done by the verifier while loading + * struct bpf_func_info + */ + bpf_log(log, "BTF of func#%d doesn't point to KIND_FUNC\n", + subprog); + return -EFAULT; + } + tname = btf_name_by_offset(btf, t->name_off); + + if (log->level & BPF_LOG_LEVEL) + bpf_log(log, "Validating %s() func#%d...\n", + tname, subprog); + + if (prog->aux->func_info_aux[subprog].unreliable) { + bpf_log(log, "Verifier bug in function %s()\n", tname); + return -EFAULT; + } + + t = btf_type_by_id(btf, t->type); + if (!t || !btf_type_is_func_proto(t)) { + bpf_log(log, "Invalid type of function %s()\n", tname); + return -EFAULT; + } + args = (const struct btf_param *)(t + 1); + nargs = btf_type_vlen(t); + if (nargs > 5) { + bpf_log(log, "Global function %s() with %d > 5 args. Buggy compiler.\n", + tname, nargs); + return -EINVAL; + } + /* check that function returns int */ + t = btf_type_by_id(btf, t->type); + while (btf_type_is_modifier(t)) + t = btf_type_by_id(btf, t->type); + if (!btf_type_is_int(t) && !btf_type_is_enum(t)) { + bpf_log(log, + "Global function %s() doesn't return scalar. Only those are supported.\n", + tname); + return -EINVAL; + } + /* Convert BTF function arguments into verifier types. + * Only PTR_TO_CTX and SCALAR are supported atm. + */ + for (i = 0; i < nargs; i++) { + t = btf_type_by_id(btf, args[i].type); + while (btf_type_is_modifier(t)) + t = btf_type_by_id(btf, t->type); + if (btf_type_is_int(t) || btf_type_is_enum(t)) { + reg[i + 1].type = SCALAR_VALUE; + continue; + } + if (btf_type_is_ptr(t) && + btf_get_prog_ctx_type(log, btf, t, prog->type, i)) { + reg[i + 1].type = PTR_TO_CTX; + continue; + } + bpf_log(log, "Arg#%d type %s in %s() is not supported yet.\n", + i, btf_kind_str[BTF_INFO_KIND(t->info)], tname); + return -EINVAL; + } return 0; } diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index f5af759a8a5f..ca17dccc17ba 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -1122,10 +1122,6 @@ static void init_reg_state(struct bpf_verifier_env *env, regs[BPF_REG_FP].type = PTR_TO_STACK; mark_reg_known_zero(env, regs, BPF_REG_FP); regs[BPF_REG_FP].frameno = state->frameno; - - /* 1st arg to a function */ - regs[BPF_REG_1].type = PTR_TO_CTX; - mark_reg_known_zero(env, regs, BPF_REG_1); } #define BPF_MAIN_FUNC (-1) @@ -2739,8 +2735,8 @@ static int get_callee_stack_depth(struct bpf_verifier_env *env, } #endif -static int check_ctx_reg(struct bpf_verifier_env *env, - const struct bpf_reg_state *reg, int regno) +int check_ctx_reg(struct bpf_verifier_env *env, + const struct bpf_reg_state *reg, int regno) { /* Access to ctx or passing it to a helper is only allowed in * its original, unmodified form. 
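As a hypothetical illustration of the rule check_ctx_reg() enforces: given 'r6 = r1; r6 += 8;', passing r6 where a context pointer is expected is rejected, since only the unmodified ctx pointer with zero offset may be dereferenced or passed on.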
@@ -3956,12 +3952,26 @@ static int release_reference(struct bpf_verifier_env *env, return 0; } +static void clear_caller_saved_regs(struct bpf_verifier_env *env, + struct bpf_reg_state *regs) +{ + int i; + + /* after the call registers r0 - r5 were scratched */ + for (i = 0; i < CALLER_SAVED_REGS; i++) { + mark_reg_not_init(env, regs, caller_saved[i]); + check_reg_arg(env, caller_saved[i], DST_OP_NO_MARK); + } +} + static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn, int *insn_idx) { struct bpf_verifier_state *state = env->cur_state; + struct bpf_func_info_aux *func_info_aux; struct bpf_func_state *caller, *callee; int i, err, subprog, target_insn; + bool is_global = false; if (state->curframe + 1 >= MAX_CALL_FRAMES) { verbose(env, "the call stack of %d frames is too deep\n", @@ -3984,6 +3994,32 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn, return -EFAULT; } + func_info_aux = env->prog->aux->func_info_aux; + if (func_info_aux) + is_global = func_info_aux[subprog].linkage == BTF_FUNC_GLOBAL; + err = btf_check_func_arg_match(env, subprog, caller->regs); + if (err == -EFAULT) + return err; + if (is_global) { + if (err) { + verbose(env, "Caller passes invalid args into func#%d\n", + subprog); + return err; + } else { + if (env->log.level & BPF_LOG_LEVEL) + verbose(env, + "Func#%d is global and valid. Skipping.\n", + subprog); + clear_caller_saved_regs(env, caller->regs); + + /* All global functions return SCALAR_VALUE */ + mark_reg_unknown(env, caller->regs, BPF_REG_0); + + /* continue with next insn after call */ + return 0; + } + } + callee = kzalloc(sizeof(*callee), GFP_KERNEL); if (!callee) return -ENOMEM; @@ -4010,18 +4046,11 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn, for (i = BPF_REG_1; i <= BPF_REG_5; i++) callee->regs[i] = caller->regs[i]; - /* after the call registers r0 - r5 were scratched */ - for (i = 0; i < CALLER_SAVED_REGS; i++) { - mark_reg_not_init(env, caller->regs, caller_saved[i]); - check_reg_arg(env, caller_saved[i], DST_OP_NO_MARK); - } + clear_caller_saved_regs(env, caller->regs); /* only increment it after check_reg_arg() finished */ state->curframe++; - if (btf_check_func_arg_match(env, subprog)) - return -EINVAL; - /* and go analyze first insn of the callee */ *insn_idx = target_insn; @@ -6771,12 +6800,13 @@ static int check_btf_func(struct bpf_verifier_env *env, /* check type_id */ type = btf_type_by_id(btf, krecord[i].type_id); - if (!type || BTF_INFO_KIND(type->info) != BTF_KIND_FUNC) { + if (!type || !btf_type_is_func(type)) { verbose(env, "invalid type id %d in func info", krecord[i].type_id); ret = -EINVAL; goto err_free; } + info_aux[i].linkage = BTF_INFO_VLEN(type->info); prev_offset = krecord[i].insn_off; urecord += urec_size; } @@ -7756,35 +7786,13 @@ static bool reg_type_mismatch(enum bpf_reg_type src, enum bpf_reg_type prev) static int do_check(struct bpf_verifier_env *env) { - struct bpf_verifier_state *state; + struct bpf_verifier_state *state = env->cur_state; struct bpf_insn *insns = env->prog->insnsi; struct bpf_reg_state *regs; int insn_cnt = env->prog->len; bool do_print_state = false; int prev_insn_idx = -1; - env->prev_linfo = NULL; - - state = kzalloc(sizeof(struct bpf_verifier_state), GFP_KERNEL); - if (!state) - return -ENOMEM; - state->curframe = 0; - state->speculative = false; - state->branches = 1; - state->frame[0] = kzalloc(sizeof(struct bpf_func_state), GFP_KERNEL); - if (!state->frame[0]) { - kfree(state); - return -ENOMEM; - } - 
env->cur_state = state; - init_func_state(env, state->frame[0], - BPF_MAIN_FUNC /* callsite */, - 0 /* frameno */, - 0 /* subprogno, zero == main subprog */); - - if (btf_check_func_arg_match(env, 0)) - return -EINVAL; - for (;;) { struct bpf_insn *insn; u8 class; @@ -7862,7 +7870,7 @@ static int do_check(struct bpf_verifier_env *env) } regs = cur_regs(env); - env->insn_aux_data[env->insn_idx].seen = true; + env->insn_aux_data[env->insn_idx].seen = env->pass_cnt; prev_insn_idx = env->insn_idx; if (class == BPF_ALU || class == BPF_ALU64) { @@ -8082,7 +8090,7 @@ process_bpf_exit: return err; env->insn_idx++; - env->insn_aux_data[env->insn_idx].seen = true; + env->insn_aux_data[env->insn_idx].seen = env->pass_cnt; } else { verbose(env, "invalid BPF_LD mode\n"); return -EINVAL; @@ -8095,7 +8103,6 @@ process_bpf_exit: env->insn_idx++; } - env->prog->aux->stack_depth = env->subprog_info[0].stack_depth; return 0; } @@ -8372,7 +8379,7 @@ static int adjust_insn_aux_data(struct bpf_verifier_env *env, memcpy(new_data + off + cnt - 1, old_data + off, sizeof(struct bpf_insn_aux_data) * (prog_len - off - cnt + 1)); for (i = off; i < off + cnt - 1; i++) { - new_data[i].seen = true; + new_data[i].seen = env->pass_cnt; new_data[i].zext_dst = insn_has_def32(env, insn + i); } env->insn_aux_data = new_data; @@ -9484,6 +9491,7 @@ static void free_states(struct bpf_verifier_env *env) kfree(sl); sl = sln; } + env->free_list = NULL; if (!env->explored_states) return; @@ -9497,11 +9505,159 @@ static void free_states(struct bpf_verifier_env *env) kfree(sl); sl = sln; } + env->explored_states[i] = NULL; } +} - kvfree(env->explored_states); +/* The verifier is using insn_aux_data[] to store temporary data during + * verification and to store information for passes that run after the + * verification like dead code sanitization. do_check_common() for subprogram N + * may analyze many other subprograms. sanitize_insn_aux_data() clears all + * temporary data after do_check_common() finds that subprogram N cannot be + * verified independently. pass_cnt counts the number of times + * do_check_common() was run and insn->aux->seen tells the pass number + * insn_aux_data was touched. These variables are compared to clear temporary + * data from failed pass. For testing and experiments do_check_common() can be + * run multiple times even when prior attempt to verify is unsuccessful. 
+ */ +static void sanitize_insn_aux_data(struct bpf_verifier_env *env) +{ + struct bpf_insn *insn = env->prog->insnsi; + struct bpf_insn_aux_data *aux; + int i, class; + + for (i = 0; i < env->prog->len; i++) { + class = BPF_CLASS(insn[i].code); + if (class != BPF_LDX && class != BPF_STX) + continue; + aux = &env->insn_aux_data[i]; + if (aux->seen != env->pass_cnt) + continue; + memset(aux, 0, offsetof(typeof(*aux), orig_idx)); + } } +static int do_check_common(struct bpf_verifier_env *env, int subprog) +{ + struct bpf_verifier_state *state; + struct bpf_reg_state *regs; + int ret, i; + + env->prev_linfo = NULL; + env->pass_cnt++; + + state = kzalloc(sizeof(struct bpf_verifier_state), GFP_KERNEL); + if (!state) + return -ENOMEM; + state->curframe = 0; + state->speculative = false; + state->branches = 1; + state->frame[0] = kzalloc(sizeof(struct bpf_func_state), GFP_KERNEL); + if (!state->frame[0]) { + kfree(state); + return -ENOMEM; + } + env->cur_state = state; + init_func_state(env, state->frame[0], + BPF_MAIN_FUNC /* callsite */, + 0 /* frameno */, + subprog); + + regs = state->frame[state->curframe]->regs; + if (subprog) { + ret = btf_prepare_func_args(env, subprog, regs); + if (ret) + goto out; + for (i = BPF_REG_1; i <= BPF_REG_5; i++) { + if (regs[i].type == PTR_TO_CTX) + mark_reg_known_zero(env, regs, i); + else if (regs[i].type == SCALAR_VALUE) + mark_reg_unknown(env, regs, i); + } + } else { + /* 1st arg to a function */ + regs[BPF_REG_1].type = PTR_TO_CTX; + mark_reg_known_zero(env, regs, BPF_REG_1); + ret = btf_check_func_arg_match(env, subprog, regs); + if (ret == -EFAULT) + /* unlikely verifier bug. abort. + * ret == 0 and ret < 0 are sadly acceptable for + * main() function due to backward compatibility. + * Like socket filter program may be written as: + * int bpf_prog(struct pt_regs *ctx) + * and never dereference that ctx in the program. + * 'struct pt_regs' is a type mismatch for socket + * filter that should be using 'struct __sk_buff'. + */ + goto out; + } + + ret = do_check(env); +out: + free_verifier_state(env->cur_state, true); + env->cur_state = NULL; + while (!pop_stack(env, NULL, NULL)); + free_states(env); + if (ret) + /* clean aux data in case subprog was rejected */ + sanitize_insn_aux_data(env); + return ret; +} + +/* Verify all global functions in a BPF program one by one based on their BTF. + * All global functions must pass verification. Otherwise the whole program is rejected. + * Consider: + * int bar(int); + * int foo(int f) + * { + * return bar(f); + * } + * int bar(int b) + * { + * ... + * } + * foo() will be verified first for R1=any_scalar_value. During verification it + * will be assumed that bar() already verified successfully and call to bar() + * from foo() will be checked for type match only. Later bar() will be verified + * independently to check that it's safe for R1=any_scalar_value. 
+ */ +static int do_check_subprogs(struct bpf_verifier_env *env) +{ + struct bpf_prog_aux *aux = env->prog->aux; + int i, ret; + + if (!aux->func_info) + return 0; + + for (i = 1; i < env->subprog_cnt; i++) { + if (aux->func_info_aux[i].linkage != BTF_FUNC_GLOBAL) + continue; + env->insn_idx = env->subprog_info[i].start; + WARN_ON_ONCE(env->insn_idx == 0); + ret = do_check_common(env, i); + if (ret) { + return ret; + } else if (env->log.level & BPF_LOG_LEVEL) { + verbose(env, + "Func#%d is safe for any args that match its prototype\n", + i); + } + } + return 0; +} + +static int do_check_main(struct bpf_verifier_env *env) +{ + int ret; + + env->insn_idx = 0; + ret = do_check_common(env, 0); + if (!ret) + env->prog->aux->stack_depth = env->subprog_info[0].stack_depth; + return ret; +} + + static void print_verification_stats(struct bpf_verifier_env *env) { int i; @@ -9849,18 +10005,14 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, if (ret < 0) goto skip_full_check; - ret = do_check(env); - if (env->cur_state) { - free_verifier_state(env->cur_state, true); - env->cur_state = NULL; - } + ret = do_check_subprogs(env); + ret = ret ?: do_check_main(env); if (ret == 0 && bpf_prog_is_dev_bound(env->prog->aux)) ret = bpf_prog_offload_finalize(env); skip_full_check: - while (!pop_stack(env, NULL, NULL)); - free_states(env); + kvfree(env->explored_states); if (ret == 0) ret = check_max_stack_depth(env); -- cgit v1.2.3-71-gd317 From 90fbca5952436e7817910b33eb4464ddd77a8964 Mon Sep 17 00:00:00 2001 From: Yishai Hadas Date: Thu, 12 Dec 2019 13:09:24 +0200 Subject: net/mlx5: Add Virtio Emulation related device capabilities Add Virtio Emulation related fields to the device capabilities. It includes a general bit to indicate whether Virtio Emulation is supported and the capabilities structure itself. Signed-off-by: Yishai Hadas Reviewed-by: Shahaf Shuler Signed-off-by: Leon Romanovsky --- include/linux/mlx5/mlx5_ifc.h | 15 +++++++++++++++ 1 file changed, 15 insertions(+) (limited to 'include') diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index 5d54fccf87fc..c6abaf4f1c55 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -87,6 +87,7 @@ enum { enum { MLX5_GENERAL_OBJ_TYPES_CAP_SW_ICM = (1ULL << MLX5_OBJ_TYPE_SW_ICM), MLX5_GENERAL_OBJ_TYPES_CAP_GENEVE_TLV_OPT = (1ULL << 11), + MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q = (1ULL << 13), }; enum { @@ -953,6 +954,19 @@ struct mlx5_ifc_device_event_cap_bits { u8 user_unaffiliated_events[4][0x40]; }; +struct mlx5_ifc_device_virtio_emulation_cap_bits { + u8 reserved_at_0[0x20]; + + u8 reserved_at_20[0x13]; + u8 log_doorbell_stride[0x5]; + u8 reserved_at_38[0x3]; + u8 log_doorbell_bar_size[0x5]; + + u8 doorbell_bar_offset[0x40]; + + u8 reserved_at_80[0x780]; +}; + enum { MLX5_ATOMIC_CAPS_ATOMIC_SIZE_QP_1_BYTE = 0x0, MLX5_ATOMIC_CAPS_ATOMIC_SIZE_QP_2_BYTES = 0x2, @@ -2751,6 +2765,7 @@ union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_fpga_cap_bits fpga_cap; struct mlx5_ifc_tls_cap_bits tls_cap; struct mlx5_ifc_device_mem_cap_bits device_mem_cap; + struct mlx5_ifc_device_virtio_emulation_cap_bits virtio_emulation_cap; u8 reserved_at_0[0x8000]; }; -- cgit v1.2.3-71-gd317 From ca1992c62cadb6c8e1e1b47e197b550f3cd89b76 Mon Sep 17 00:00:00 2001 From: Yishai Hadas Date: Thu, 12 Dec 2019 13:09:25 +0200 Subject: net/mlx5: Expose vDPA emulation device capabilities Expose vDPA emulation device capabilities from the core layer. 
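A minimal sketch of how a consumer might test for and read these capabilities, assuming the helper macros added below and an illustrative caller:

	if (MLX5_CAP_GEN_64(mdev, general_obj_types) &
	    MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q) {
		u8 log_stride;
		u64 db_offset;

		/* fields from mlx5_ifc_device_virtio_emulation_cap_bits */
		log_stride = MLX5_CAP_DEV_VDPA_EMULATION(mdev, log_doorbell_stride);
		db_offset = MLX5_CAP64_DEV_VDPA_EMULATION(mdev, doorbell_bar_offset);
	}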
It includes reading the capabilities from the firmware and exposing helper functions to access the data. Signed-off-by: Yishai Hadas Reviewed-by: Shahaf Shuler Signed-off-by: Leon Romanovsky --- drivers/net/ethernet/mellanox/mlx5/core/fw.c | 7 +++++++ include/linux/mlx5/device.h | 9 +++++++++ 2 files changed, 16 insertions(+) (limited to 'include') diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw.c b/drivers/net/ethernet/mellanox/mlx5/core/fw.c index a19790dee7b2..c375edfe528c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fw.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fw.c @@ -245,6 +245,13 @@ int mlx5_query_hca_caps(struct mlx5_core_dev *dev) return err; } + if (MLX5_CAP_GEN_64(dev, general_obj_types) & + MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q) { + err = mlx5_core_get_caps(dev, MLX5_CAP_VDPA_EMULATION); + if (err) + return err; + } + return 0; } diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h index cc1c230f10ee..1a1c53f0262d 100644 --- a/include/linux/mlx5/device.h +++ b/include/linux/mlx5/device.h @@ -1105,6 +1105,7 @@ enum mlx5_cap_type { MLX5_CAP_DEV_MEM, MLX5_CAP_RESERVED_16, MLX5_CAP_TLS, + MLX5_CAP_VDPA_EMULATION = 0x13, MLX5_CAP_DEV_EVENT = 0x14, /* NUM OF CAP Types */ MLX5_CAP_NUM @@ -1297,6 +1298,14 @@ enum mlx5_qcam_feature_groups { #define MLX5_CAP_DEV_EVENT(mdev, cap)\ MLX5_ADDR_OF(device_event_cap, (mdev)->caps.hca_cur[MLX5_CAP_DEV_EVENT], cap) +#define MLX5_CAP_DEV_VDPA_EMULATION(mdev, cap)\ + MLX5_GET(device_virtio_emulation_cap, \ + (mdev)->caps.hca_cur[MLX5_CAP_VDPA_EMULATION], cap) + +#define MLX5_CAP64_DEV_VDPA_EMULATION(mdev, cap)\ + MLX5_GET64(device_virtio_emulation_cap, \ + (mdev)->caps.hca_cur[MLX5_CAP_VDPA_EMULATION], cap) + enum { MLX5_CMD_STAT_OK = 0x0, MLX5_CMD_STAT_INT_ERR = 0x1, -- cgit v1.2.3-71-gd317 From 468672b24fbc1c018e192dcc90e887bc9a9b2595 Mon Sep 17 00:00:00 2001 From: Jacob Keller Date: Thu, 9 Jan 2020 14:46:09 -0800 Subject: devlink: add macro for "fw.psid" The "fw.psid" devlink info version is documented in devlink-info.rst, and used by one driver. However, there is no associated macro for this firmware version like there is for others. Add one now. Signed-off-by: Jacob Keller Signed-off-by: David S. Miller --- include/net/devlink.h | 2 ++ 1 file changed, 2 insertions(+) (limited to 'include') diff --git a/include/net/devlink.h b/include/net/devlink.h index 453f45cc1519..4e80d9acdb86 100644 --- a/include/net/devlink.h +++ b/include/net/devlink.h @@ -485,6 +485,8 @@ enum devlink_param_generic_id { #define DEVLINK_INFO_VERSION_GENERIC_FW_UNDI "fw.undi" /* NCSI support/handler version */ #define DEVLINK_INFO_VERSION_GENERIC_FW_NCSI "fw.ncsi" +/* FW parameter set id */ +#define DEVLINK_INFO_VERSION_GENERIC_FW_PSID "fw.psid" struct devlink_region; struct devlink_info_req; -- cgit v1.2.3-71-gd317 From f4bdd7103652fab5ac8b0ed75fa5cbc515b50b8b Mon Sep 17 00:00:00 2001 From: Jacob Keller Date: Thu, 9 Jan 2020 14:46:10 -0800 Subject: devlink: move devlink documentation to subfolder Combine the documentation for devlink into a subfolder, and provide an index.rst file that can be used to generally describe devlink. Signed-off-by: Jacob Keller Signed-off-by: David S. 
Miller --- Documentation/networking/devlink-health.txt | 86 ------- Documentation/networking/devlink-info-versions.rst | 64 ----- Documentation/networking/devlink-params-bnxt.txt | 18 -- Documentation/networking/devlink-params-mlx5.txt | 17 -- Documentation/networking/devlink-params-mlxsw.txt | 10 - .../networking/devlink-params-mv88e6xxx.txt | 7 - Documentation/networking/devlink-params-nfp.txt | 5 - .../networking/devlink-params-ti-cpsw-switch.txt | 10 - Documentation/networking/devlink-params.txt | 71 ------ .../networking/devlink-trap-netdevsim.rst | 20 -- Documentation/networking/devlink-trap.rst | 270 --------------------- .../networking/devlink/devlink-health.txt | 86 +++++++ .../networking/devlink/devlink-info-versions.rst | 64 +++++ .../networking/devlink/devlink-params-bnxt.txt | 18 ++ .../networking/devlink/devlink-params-mlx5.txt | 17 ++ .../networking/devlink/devlink-params-mlxsw.txt | 10 + .../devlink/devlink-params-mv88e6xxx.txt | 7 + .../networking/devlink/devlink-params-nfp.txt | 5 + .../devlink/devlink-params-ti-cpsw-switch.txt | 10 + .../networking/devlink/devlink-params.txt | 71 ++++++ .../networking/devlink/devlink-trap-netdevsim.rst | 20 ++ Documentation/networking/devlink/devlink-trap.rst | 270 +++++++++++++++++++++ Documentation/networking/devlink/index.rst | 14 ++ Documentation/networking/index.rst | 4 +- MAINTAINERS | 1 + drivers/net/netdevsim/dev.c | 2 +- include/net/devlink.h | 4 +- 27 files changed, 597 insertions(+), 584 deletions(-) delete mode 100644 Documentation/networking/devlink-health.txt delete mode 100644 Documentation/networking/devlink-info-versions.rst delete mode 100644 Documentation/networking/devlink-params-bnxt.txt delete mode 100644 Documentation/networking/devlink-params-mlx5.txt delete mode 100644 Documentation/networking/devlink-params-mlxsw.txt delete mode 100644 Documentation/networking/devlink-params-mv88e6xxx.txt delete mode 100644 Documentation/networking/devlink-params-nfp.txt delete mode 100644 Documentation/networking/devlink-params-ti-cpsw-switch.txt delete mode 100644 Documentation/networking/devlink-params.txt delete mode 100644 Documentation/networking/devlink-trap-netdevsim.rst delete mode 100644 Documentation/networking/devlink-trap.rst create mode 100644 Documentation/networking/devlink/devlink-health.txt create mode 100644 Documentation/networking/devlink/devlink-info-versions.rst create mode 100644 Documentation/networking/devlink/devlink-params-bnxt.txt create mode 100644 Documentation/networking/devlink/devlink-params-mlx5.txt create mode 100644 Documentation/networking/devlink/devlink-params-mlxsw.txt create mode 100644 Documentation/networking/devlink/devlink-params-mv88e6xxx.txt create mode 100644 Documentation/networking/devlink/devlink-params-nfp.txt create mode 100644 Documentation/networking/devlink/devlink-params-ti-cpsw-switch.txt create mode 100644 Documentation/networking/devlink/devlink-params.txt create mode 100644 Documentation/networking/devlink/devlink-trap-netdevsim.rst create mode 100644 Documentation/networking/devlink/devlink-trap.rst create mode 100644 Documentation/networking/devlink/index.rst (limited to 'include') diff --git a/Documentation/networking/devlink-health.txt b/Documentation/networking/devlink-health.txt deleted file mode 100644 index 1db3fbea0831..000000000000 --- a/Documentation/networking/devlink-health.txt +++ /dev/null @@ -1,86 +0,0 @@ -The health mechanism is targeted for Real Time Alerting, in order to know when -something bad had happened to a PCI device -- Provide alert debug 
information -- Self healing -- If problem needs vendor support, provide a way to gather all needed debugging - information. - -The main idea is to unify and centralize driver health reports in the -generic devlink instance and allow the user to set different -attributes of the health reporting and recovery procedures. - -The devlink health reporter: -Device driver creates a "health reporter" per each error/health type. -Error/Health type can be a known/generic (eg pci error, fw error, rx/tx error) -or unknown (driver specific). -For each registered health reporter a driver can issue error/health reports -asynchronously. All health reports handling is done by devlink. -Device driver can provide specific callbacks for each "health reporter", e.g. - - Recovery procedures - - Diagnostics and object dump procedures - - OOB initial parameters -Different parts of the driver can register different types of health reporters -with different handlers. - -Once an error is reported, devlink health will do the following actions: - * A log is being send to the kernel trace events buffer - * Health status and statistics are being updated for the reporter instance - * Object dump is being taken and saved at the reporter instance (as long as - there is no other dump which is already stored) - * Auto recovery attempt is being done. Depends on: - - Auto-recovery configuration - - Grace period vs. time passed since last recover - -The user interface: -User can access/change each reporter's parameters and driver specific callbacks -via devlink, e.g per error type (per health reporter) - - Configure reporter's generic parameters (like: disable/enable auto recovery) - - Invoke recovery procedure - - Run diagnostics - - Object dump - -The devlink health interface (via netlink): -DEVLINK_CMD_HEALTH_REPORTER_GET - Retrieves status and configuration info per DEV and reporter. -DEVLINK_CMD_HEALTH_REPORTER_SET - Allows reporter-related configuration setting. -DEVLINK_CMD_HEALTH_REPORTER_RECOVER - Triggers a reporter's recovery procedure. -DEVLINK_CMD_HEALTH_REPORTER_DIAGNOSE - Retrieves diagnostics data from a reporter on a device. -DEVLINK_CMD_HEALTH_REPORTER_DUMP_GET - Retrieves the last stored dump. Devlink health - saves a single dump. If an dump is not already stored by the devlink - for this reporter, devlink generates a new dump. - dump output is defined by the reporter. -DEVLINK_CMD_HEALTH_REPORTER_DUMP_CLEAR - Clears the last saved dump file for the specified reporter. - - - netlink - +--------------------------+ - | | - | + | - | | | - +--------------------------+ - |request for ops - |(diagnose, - mlx5_core devlink |recover, - |dump) -+--------+ +--------------------------+ -| | | reporter| | -| | | +---------v----------+ | -| | ops execution | | | | -| <----------------------------------+ | | -| | | | | | -| | | + ^------------------+ | -| | | | request for ops | -| | | | (recover, dump) | -| | | | | -| | | +-+------------------+ | -| | health report | | health handler | | -| +-------------------------------> | | -| | | +--------------------+ | -| | health reporter create | | -| +----------------------------> | -+--------+ +--------------------------+ diff --git a/Documentation/networking/devlink-info-versions.rst b/Documentation/networking/devlink-info-versions.rst deleted file mode 100644 index 4914f581b1fd..000000000000 --- a/Documentation/networking/devlink-info-versions.rst +++ /dev/null @@ -1,64 +0,0 @@ -.. 
SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) - -===================== -Devlink info versions -===================== - -board.id -======== - -Unique identifier of the board design. - -board.rev -========= - -Board design revision. - -asic.id -======= - -ASIC design identifier. - -asic.rev -======== - -ASIC design revision. - -board.manufacture -================= - -An identifier of the company or the facility which produced the part. - -fw -== - -Overall firmware version, often representing the collection of -fw.mgmt, fw.app, etc. - -fw.mgmt -======= - -Control unit firmware version. This firmware is responsible for house -keeping tasks, PHY control etc. but not the packet-by-packet data path -operation. - -fw.app -====== - -Data path microcode controlling high-speed packet processing. - -fw.undi -======= - -UNDI software, may include the UEFI driver, firmware or both. - -fw.ncsi -======= - -Version of the software responsible for supporting/handling the -Network Controller Sideband Interface. - -fw.psid -======= - -Unique identifier of the firmware parameter set. diff --git a/Documentation/networking/devlink-params-bnxt.txt b/Documentation/networking/devlink-params-bnxt.txt deleted file mode 100644 index 481aa303d5b4..000000000000 --- a/Documentation/networking/devlink-params-bnxt.txt +++ /dev/null @@ -1,18 +0,0 @@ -enable_sriov [DEVICE, GENERIC] - Configuration mode: Permanent - -ignore_ari [DEVICE, GENERIC] - Configuration mode: Permanent - -msix_vec_per_pf_max [DEVICE, GENERIC] - Configuration mode: Permanent - -msix_vec_per_pf_min [DEVICE, GENERIC] - Configuration mode: Permanent - -gre_ver_check [DEVICE, DRIVER-SPECIFIC] - Generic Routing Encapsulation (GRE) version check will - be enabled in the device. If disabled, device skips - version checking for incoming packets. - Type: Boolean - Configuration mode: Permanent diff --git a/Documentation/networking/devlink-params-mlx5.txt b/Documentation/networking/devlink-params-mlx5.txt deleted file mode 100644 index 5071467118bd..000000000000 --- a/Documentation/networking/devlink-params-mlx5.txt +++ /dev/null @@ -1,17 +0,0 @@ -flow_steering_mode [DEVICE, DRIVER-SPECIFIC] - Controls the flow steering mode of the driver. - Two modes are supported: - 1. 'dmfs' - Device managed flow steering. - 2. 'smfs - Software/Driver managed flow steering. - In DMFS mode, the HW steering entities are created and - managed through the Firmware. - In SMFS mode, the HW steering entities are created and - managed though by the driver directly into Hardware - without firmware intervention. - Type: String - Configuration mode: runtime - -enable_roce [DEVICE, GENERIC] - Enable handling of RoCE traffic in the device. - Defaultly enabled. - Configuration mode: driverinit diff --git a/Documentation/networking/devlink-params-mlxsw.txt b/Documentation/networking/devlink-params-mlxsw.txt deleted file mode 100644 index c63ea9fc7009..000000000000 --- a/Documentation/networking/devlink-params-mlxsw.txt +++ /dev/null @@ -1,10 +0,0 @@ -fw_load_policy [DEVICE, GENERIC] - Configuration mode: driverinit - -acl_region_rehash_interval [DEVICE, DRIVER-SPECIFIC] - Sets an interval for periodic ACL region rehashes. - The value is in milliseconds, minimal value is "3000". - Value "0" disables the periodic work. - The first rehash will be run right after value is set. 
- Type: u32 - Configuration mode: runtime diff --git a/Documentation/networking/devlink-params-mv88e6xxx.txt b/Documentation/networking/devlink-params-mv88e6xxx.txt deleted file mode 100644 index 21c4b3556ef2..000000000000 --- a/Documentation/networking/devlink-params-mv88e6xxx.txt +++ /dev/null @@ -1,7 +0,0 @@ -ATU_hash [DEVICE, DRIVER-SPECIFIC] - Select one of four possible hashing algorithms for - MAC addresses in the Address Translation Unit. - A value of 3 seems to work better than the default of - 1 when many MAC addresses have the same OUI. - Configuration mode: runtime - Type: u8. 0-3 valid. diff --git a/Documentation/networking/devlink-params-nfp.txt b/Documentation/networking/devlink-params-nfp.txt deleted file mode 100644 index 43e4d4034865..000000000000 --- a/Documentation/networking/devlink-params-nfp.txt +++ /dev/null @@ -1,5 +0,0 @@ -fw_load_policy [DEVICE, GENERIC] - Configuration mode: permanent - -reset_dev_on_drv_probe [DEVICE, GENERIC] - Configuration mode: permanent diff --git a/Documentation/networking/devlink-params-ti-cpsw-switch.txt b/Documentation/networking/devlink-params-ti-cpsw-switch.txt deleted file mode 100644 index 4037458499f7..000000000000 --- a/Documentation/networking/devlink-params-ti-cpsw-switch.txt +++ /dev/null @@ -1,10 +0,0 @@ -ale_bypass [DEVICE, DRIVER-SPECIFIC] - Allows to enable ALE_CONTROL(4).BYPASS mode for debug purposes. - All packets will be sent to the Host port only if enabled. - Type: bool - Configuration mode: runtime - -switch_mode [DEVICE, DRIVER-SPECIFIC] - Enable switch mode - Type: bool - Configuration mode: runtime diff --git a/Documentation/networking/devlink-params.txt b/Documentation/networking/devlink-params.txt deleted file mode 100644 index 04e234e9acc9..000000000000 --- a/Documentation/networking/devlink-params.txt +++ /dev/null @@ -1,71 +0,0 @@ -Devlink configuration parameters -================================ -Following is the list of configuration parameters via devlink interface. -Each parameter can be generic or driver specific and are device level -parameters. - -Note that the driver-specific files should contain the generic params -they support to, with supported config modes. - -Each parameter can be set in different configuration modes: - runtime - set while driver is running, no reset required. - driverinit - applied while driver initializes, requires restart - driver by devlink reload command. - permanent - written to device's non-volatile memory, hard reset - required. - -Following is the list of parameters: -==================================== -enable_sriov [DEVICE, GENERIC] - Enable Single Root I/O Virtualisation (SRIOV) in - the device. - Type: Boolean - -ignore_ari [DEVICE, GENERIC] - Ignore Alternative Routing-ID Interpretation (ARI) - capability. If enabled, adapter will ignore ARI - capability even when platforms has the support - enabled and creates same number of partitions when - platform does not support ARI. - Type: Boolean - -msix_vec_per_pf_max [DEVICE, GENERIC] - Provides the maximum number of MSIX interrupts that - a device can create. Value is same across all - physical functions (PFs) in the device. - Type: u32 - -msix_vec_per_pf_min [DEVICE, GENERIC] - Provides the minimum number of MSIX interrupts required - for the device initialization. Value is same across all - physical functions (PFs) in the device. - Type: u32 - -fw_load_policy [DEVICE, GENERIC] - Controls the device's firmware loading policy. 
- Valid values: - * DEVLINK_PARAM_FW_LOAD_POLICY_VALUE_DRIVER (0) - Load firmware version preferred by the driver. - * DEVLINK_PARAM_FW_LOAD_POLICY_VALUE_FLASH (1) - Load firmware currently stored in flash. - * DEVLINK_PARAM_FW_LOAD_POLICY_VALUE_DISK (2) - Load firmware currently available on host's disk. - Type: u8 - -reset_dev_on_drv_probe [DEVICE, GENERIC] - Controls the device's reset policy on driver probe. - Valid values: - * DEVLINK_PARAM_RESET_DEV_ON_DRV_PROBE_VALUE_UNKNOWN (0) - Unknown or invalid value. - * DEVLINK_PARAM_RESET_DEV_ON_DRV_PROBE_VALUE_ALWAYS (1) - Always reset device on driver probe. - * DEVLINK_PARAM_RESET_DEV_ON_DRV_PROBE_VALUE_NEVER (2) - Never reset device on driver probe. - * DEVLINK_PARAM_RESET_DEV_ON_DRV_PROBE_VALUE_DISK (3) - Reset only if device firmware can be found in the - filesystem. - Type: u8 - -enable_roce [DEVICE, GENERIC] - Enable handling of RoCE traffic in the device. - Type: Boolean diff --git a/Documentation/networking/devlink-trap-netdevsim.rst b/Documentation/networking/devlink-trap-netdevsim.rst deleted file mode 100644 index b721c9415473..000000000000 --- a/Documentation/networking/devlink-trap-netdevsim.rst +++ /dev/null @@ -1,20 +0,0 @@ -.. SPDX-License-Identifier: GPL-2.0 - -====================== -Devlink Trap netdevsim -====================== - -Driver-specific Traps -===================== - -.. list-table:: List of Driver-specific Traps Registered by ``netdevsim`` - :widths: 5 5 90 - - * - Name - - Type - - Description - * - ``fid_miss`` - - ``exception`` - - When a packet enters the device it is classified to a filtering - indentifier (FID) based on the ingress port and VLAN. This trap is used - to trap packets for which a FID could not be found diff --git a/Documentation/networking/devlink-trap.rst b/Documentation/networking/devlink-trap.rst deleted file mode 100644 index 03311849bfb1..000000000000 --- a/Documentation/networking/devlink-trap.rst +++ /dev/null @@ -1,270 +0,0 @@ -.. SPDX-License-Identifier: GPL-2.0 - -============ -Devlink Trap -============ - -Background -========== - -Devices capable of offloading the kernel's datapath and perform functions such -as bridging and routing must also be able to send specific packets to the -kernel (i.e., the CPU) for processing. - -For example, a device acting as a multicast-aware bridge must be able to send -IGMP membership reports to the kernel for processing by the bridge module. -Without processing such packets, the bridge module could never populate its -MDB. - -As another example, consider a device acting as router which has received an IP -packet with a TTL of 1. Upon routing the packet the device must send it to the -kernel so that it will route it as well and generate an ICMP Time Exceeded -error datagram. Without letting the kernel route such packets itself, utilities -such as ``traceroute`` could never work. - -The fundamental ability of sending certain packets to the kernel for processing -is called "packet trapping". - -Overview -======== - -The ``devlink-trap`` mechanism allows capable device drivers to register their -supported packet traps with ``devlink`` and report trapped packets to -``devlink`` for further analysis. - -Upon receiving trapped packets, ``devlink`` will perform a per-trap packets and -bytes accounting and potentially report the packet to user space via a netlink -event along with all the provided metadata (e.g., trap reason, timestamp, input -port). 
This is especially useful for drop traps (see :ref:`Trap-Types`) -as it allows users to obtain further visibility into packet drops that would -otherwise be invisible. - -The following diagram provides a general overview of ``devlink-trap``:: - - Netlink event: Packet w/ metadata - Or a summary of recent drops - ^ - | - Userspace | - +---------------------------------------------------+ - Kernel | - | - +-------+--------+ - | | - | drop_monitor | - | | - +-------^--------+ - | - | - | - +----+----+ - | | Kernel's Rx path - | devlink | (non-drop traps) - | | - +----^----+ ^ - | | - +-----------+ - | - +-------+-------+ - | | - | Device driver | - | | - +-------^-------+ - Kernel | - +---------------------------------------------------+ - Hardware | - | Trapped packet - | - +--+---+ - | | - | ASIC | - | | - +------+ - -.. _Trap-Types: - -Trap Types -========== - -The ``devlink-trap`` mechanism supports the following packet trap types: - - * ``drop``: Trapped packets were dropped by the underlying device. Packets - are only processed by ``devlink`` and not injected to the kernel's Rx path. - The trap action (see :ref:`Trap-Actions`) can be changed. - * ``exception``: Trapped packets were not forwarded as intended by the - underlying device due to an exception (e.g., TTL error, missing neighbour - entry) and trapped to the control plane for resolution. Packets are - processed by ``devlink`` and injected to the kernel's Rx path. Changing the - action of such traps is not allowed, as it can easily break the control - plane. - -.. _Trap-Actions: - -Trap Actions -============ - -The ``devlink-trap`` mechanism supports the following packet trap actions: - - * ``trap``: The sole copy of the packet is sent to the CPU. - * ``drop``: The packet is dropped by the underlying device and a copy is not - sent to the CPU. - -Generic Packet Traps -==================== - -Generic packet traps are used to describe traps that trap well-defined packets -or packets that are trapped due to well-defined conditions (e.g., TTL error). -Such traps can be shared by multiple device drivers and their description must -be added to the following table: - -.. 
list-table:: List of Generic Packet Traps - :widths: 5 5 90 - - * - Name - - Type - - Description - * - ``source_mac_is_multicast`` - - ``drop`` - - Traps incoming packets that the device decided to drop because of a - multicast source MAC - * - ``vlan_tag_mismatch`` - - ``drop`` - - Traps incoming packets that the device decided to drop in case of VLAN - tag mismatch: The ingress bridge port is not configured with a PVID and - the packet is untagged or prio-tagged - * - ``ingress_vlan_filter`` - - ``drop`` - - Traps incoming packets that the device decided to drop in case they are - tagged with a VLAN that is not configured on the ingress bridge port - * - ``ingress_spanning_tree_filter`` - - ``drop`` - - Traps incoming packets that the device decided to drop in case the STP - state of the ingress bridge port is not "forwarding" - * - ``port_list_is_empty`` - - ``drop`` - - Traps packets that the device decided to drop in case they need to be - flooded (e.g., unknown unicast, unregistered multicast) and there are - no ports the packets should be flooded to - * - ``port_loopback_filter`` - - ``drop`` - - Traps packets that the device decided to drop in case after layer 2 - forwarding the only port from which they should be transmitted through - is the port from which they were received - * - ``blackhole_route`` - - ``drop`` - - Traps packets that the device decided to drop in case they hit a - blackhole route - * - ``ttl_value_is_too_small`` - - ``exception`` - - Traps unicast packets that should be forwarded by the device whose TTL - was decremented to 0 or less - * - ``tail_drop`` - - ``drop`` - - Traps packets that the device decided to drop because they could not be - enqueued to a transmission queue which is full - * - ``non_ip`` - - ``drop`` - - Traps packets that the device decided to drop because they need to - undergo a layer 3 lookup, but are not IP or MPLS packets - * - ``uc_dip_over_mc_dmac`` - - ``drop`` - - Traps packets that the device decided to drop because they need to be - routed and they have a unicast destination IP and a multicast destination - MAC - * - ``dip_is_loopback_address`` - - ``drop`` - - Traps packets that the device decided to drop because they need to be - routed and their destination IP is the loopback address (i.e., 127.0.0.0/8 - and ::1/128) - * - ``sip_is_mc`` - - ``drop`` - - Traps packets that the device decided to drop because they need to be - routed and their source IP is multicast (i.e., 224.0.0.0/8 and ff::/8) - * - ``sip_is_loopback_address`` - - ``drop`` - - Traps packets that the device decided to drop because they need to be - routed and their source IP is the loopback address (i.e., 127.0.0.0/8 and ::1/128) - * - ``ip_header_corrupted`` - - ``drop`` - - Traps packets that the device decided to drop because they need to be - routed and their IP header is corrupted: wrong checksum, wrong IP version - or too short Internet Header Length (IHL) - * - ``ipv4_sip_is_limited_bc`` - - ``drop`` - - Traps packets that the device decided to drop because they need to be - routed and their source IP is limited broadcast (i.e., 255.255.255.255/32) - * - ``ipv6_mc_dip_reserved_scope`` - - ``drop`` - - Traps IPv6 packets that the device decided to drop because they need to - be routed and their IPv6 multicast destination IP has a reserved scope - (i.e., ffx0::/16) - * - ``ipv6_mc_dip_interface_local_scope`` - - ``drop`` - - Traps IPv6 packets that the device decided to drop because they need to - be routed and their IPv6 multicast destination IP has an 
interface-local scope - (i.e., ffx1::/16) - * - ``mtu_value_is_too_small`` - - ``exception`` - - Traps packets that should have been routed by the device, but were bigger - than the MTU of the egress interface - * - ``unresolved_neigh`` - - ``exception`` - - Traps packets that did not have a matching IP neighbour after routing - * - ``mc_reverse_path_forwarding`` - - ``exception`` - - Traps multicast IP packets that failed reverse-path forwarding (RPF) - check during multicast routing - * - ``reject_route`` - - ``exception`` - - Traps packets that hit reject routes (i.e., "unreachable", "prohibit") - * - ``ipv4_lpm_miss`` - - ``exception`` - - Traps unicast IPv4 packets that did not match any route - * - ``ipv6_lpm_miss`` - - ``exception`` - - Traps unicast IPv6 packets that did not match any route - -Driver-specific Packet Traps -============================ - -Device drivers can register driver-specific packet traps, but these must be -clearly documented. Such traps can correspond to device-specific exceptions and -help debug packet drops caused by these exceptions. The following list includes -links to the description of driver-specific traps registered by various device -drivers: - - * :doc:`devlink-trap-netdevsim` - -Generic Packet Trap Groups -========================== - -Generic packet trap groups are used to aggregate logically related packet -traps. These groups allow the user to batch operations such as setting the trap -action of all member traps. In addition, ``devlink-trap`` can report aggregated -per-group packets and bytes statistics, in case per-trap statistics are too -narrow. The description of these groups must be added to the following table: - -.. list-table:: List of Generic Packet Trap Groups - :widths: 10 90 - - * - Name - - Description - * - ``l2_drops`` - - Contains packet traps for packets that were dropped by the device during - layer 2 forwarding (i.e., bridge) - * - ``l3_drops`` - - Contains packet traps for packets that were dropped by the device or hit - an exception (e.g., TTL error) during layer 3 forwarding - * - ``buffer_drops`` - - Contains packet traps for packets that were dropped by the device due to - an enqueue decision - -Testing -======= - -See ``tools/testing/selftests/drivers/net/netdevsim/devlink_trap.sh`` for a -test covering the core infrastructure. Test cases should be added for any new -functionality. - -Device drivers should focus their tests on device-specific functionality, such -as the triggering of supported packet traps. diff --git a/Documentation/networking/devlink/devlink-health.txt b/Documentation/networking/devlink/devlink-health.txt new file mode 100644 index 000000000000..1db3fbea0831 --- /dev/null +++ b/Documentation/networking/devlink/devlink-health.txt @@ -0,0 +1,86 @@ +The health mechanism is targeted for Real Time Alerting, in order to know when +something bad had happened to a PCI device +- Provide alert debug information +- Self healing +- If problem needs vendor support, provide a way to gather all needed debugging + information. + +The main idea is to unify and centralize driver health reports in the +generic devlink instance and allow the user to set different +attributes of the health reporting and recovery procedures. + +The devlink health reporter: +Device driver creates a "health reporter" per each error/health type. +Error/Health type can be a known/generic (eg pci error, fw error, rx/tx error) +or unknown (driver specific). 
+For each registered health reporter a driver can issue error/health reports +asynchronously. All health reports handling is done by devlink. +Device driver can provide specific callbacks for each "health reporter", e.g. + - Recovery procedures + - Diagnostics and object dump procedures + - OOB initial parameters +Different parts of the driver can register different types of health reporters +with different handlers. + +Once an error is reported, devlink health will do the following actions: + * A log is being send to the kernel trace events buffer + * Health status and statistics are being updated for the reporter instance + * Object dump is being taken and saved at the reporter instance (as long as + there is no other dump which is already stored) + * Auto recovery attempt is being done. Depends on: + - Auto-recovery configuration + - Grace period vs. time passed since last recover + +The user interface: +User can access/change each reporter's parameters and driver specific callbacks +via devlink, e.g per error type (per health reporter) + - Configure reporter's generic parameters (like: disable/enable auto recovery) + - Invoke recovery procedure + - Run diagnostics + - Object dump + +The devlink health interface (via netlink): +DEVLINK_CMD_HEALTH_REPORTER_GET + Retrieves status and configuration info per DEV and reporter. +DEVLINK_CMD_HEALTH_REPORTER_SET + Allows reporter-related configuration setting. +DEVLINK_CMD_HEALTH_REPORTER_RECOVER + Triggers a reporter's recovery procedure. +DEVLINK_CMD_HEALTH_REPORTER_DIAGNOSE + Retrieves diagnostics data from a reporter on a device. +DEVLINK_CMD_HEALTH_REPORTER_DUMP_GET + Retrieves the last stored dump. Devlink health + saves a single dump. If an dump is not already stored by the devlink + for this reporter, devlink generates a new dump. + dump output is defined by the reporter. +DEVLINK_CMD_HEALTH_REPORTER_DUMP_CLEAR + Clears the last saved dump file for the specified reporter. + + + netlink + +--------------------------+ + | | + | + | + | | | + +--------------------------+ + |request for ops + |(diagnose, + mlx5_core devlink |recover, + |dump) ++--------+ +--------------------------+ +| | | reporter| | +| | | +---------v----------+ | +| | ops execution | | | | +| <----------------------------------+ | | +| | | | | | +| | | + ^------------------+ | +| | | | request for ops | +| | | | (recover, dump) | +| | | | | +| | | +-+------------------+ | +| | health report | | health handler | | +| +-------------------------------> | | +| | | +--------------------+ | +| | health reporter create | | +| +----------------------------> | ++--------+ +--------------------------+ diff --git a/Documentation/networking/devlink/devlink-info-versions.rst b/Documentation/networking/devlink/devlink-info-versions.rst new file mode 100644 index 000000000000..4914f581b1fd --- /dev/null +++ b/Documentation/networking/devlink/devlink-info-versions.rst @@ -0,0 +1,64 @@ +.. SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) + +===================== +Devlink info versions +===================== + +board.id +======== + +Unique identifier of the board design. + +board.rev +========= + +Board design revision. + +asic.id +======= + +ASIC design identifier. + +asic.rev +======== + +ASIC design revision. + +board.manufacture +================= + +An identifier of the company or the facility which produced the part. + +fw +== + +Overall firmware version, often representing the collection of +fw.mgmt, fw.app, etc. + +fw.mgmt +======= + +Control unit firmware version. 
This firmware is responsible for +housekeeping tasks, PHY control, etc., but not the packet-by-packet data path +operation. + +fw.app +====== + +Data path microcode controlling high-speed packet processing. + +fw.undi +======= + +UNDI software, which may include the UEFI driver, firmware or both. + +fw.ncsi +======= + +Version of the software responsible for supporting/handling the +Network Controller Sideband Interface. + +fw.psid +======= + +Unique identifier of the firmware parameter set. diff --git a/Documentation/networking/devlink/devlink-params-bnxt.txt b/Documentation/networking/devlink/devlink-params-bnxt.txt new file mode 100644 index 000000000000..481aa303d5b4 --- /dev/null +++ b/Documentation/networking/devlink/devlink-params-bnxt.txt @@ -0,0 +1,18 @@ +enable_sriov [DEVICE, GENERIC] + Configuration mode: Permanent + +ignore_ari [DEVICE, GENERIC] + Configuration mode: Permanent + +msix_vec_per_pf_max [DEVICE, GENERIC] + Configuration mode: Permanent + +msix_vec_per_pf_min [DEVICE, GENERIC] + Configuration mode: Permanent + +gre_ver_check [DEVICE, DRIVER-SPECIFIC] + Generic Routing Encapsulation (GRE) version checking will + be enabled in the device. If disabled, the device skips + version checking for incoming packets. + Type: Boolean + Configuration mode: Permanent diff --git a/Documentation/networking/devlink/devlink-params-mlx5.txt b/Documentation/networking/devlink/devlink-params-mlx5.txt new file mode 100644 index 000000000000..5071467118bd --- /dev/null +++ b/Documentation/networking/devlink/devlink-params-mlx5.txt @@ -0,0 +1,17 @@ +flow_steering_mode [DEVICE, DRIVER-SPECIFIC] + Controls the flow steering mode of the driver. + Two modes are supported: + 1. 'dmfs' - Device managed flow steering. + 2. 'smfs' - Software/driver managed flow steering. + In DMFS mode, the HW steering entities are created and + managed through the firmware. + In SMFS mode, the HW steering entities are created and + managed by the driver directly in the hardware + without firmware intervention. + Type: String + Configuration mode: runtime + +enable_roce [DEVICE, GENERIC] + Enable handling of RoCE traffic in the device. + Enabled by default. + Configuration mode: driverinit diff --git a/Documentation/networking/devlink/devlink-params-mlxsw.txt b/Documentation/networking/devlink/devlink-params-mlxsw.txt new file mode 100644 index 000000000000..c63ea9fc7009 --- /dev/null +++ b/Documentation/networking/devlink/devlink-params-mlxsw.txt @@ -0,0 +1,10 @@ +fw_load_policy [DEVICE, GENERIC] + Configuration mode: driverinit + +acl_region_rehash_interval [DEVICE, DRIVER-SPECIFIC] + Sets an interval for periodic ACL region rehashes. + The value is in milliseconds; the minimal value is "3000". + The value "0" disables the periodic work. + The first rehash will be run right after the value is set. + Type: u32 + Configuration mode: runtime diff --git a/Documentation/networking/devlink/devlink-params-mv88e6xxx.txt b/Documentation/networking/devlink/devlink-params-mv88e6xxx.txt new file mode 100644 index 000000000000..21c4b3556ef2 --- /dev/null +++ b/Documentation/networking/devlink/devlink-params-mv88e6xxx.txt @@ -0,0 +1,7 @@ +ATU_hash [DEVICE, DRIVER-SPECIFIC] + Selects one of four possible hashing algorithms for + MAC addresses in the Address Translation Unit. + A value of 3 seems to work better than the default of + 1 when many MAC addresses have the same OUI. + Configuration mode: runtime + Type: u8. 0-3 valid.
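A driver-specific parameter such as ``ATU_hash`` would typically be read and changed with the generic iproute2 ``devlink dev param`` commands; in the sketch below, ``DEV`` is a placeholder for the devlink handle of the switch, not a literal value::

    # Inspect the current value and supported cmodes of the parameter
    $ devlink dev param show DEV name ATU_hash
    # Switch to hash algorithm 3 at runtime (no reset required)
    $ devlink dev param set DEV name ATU_hash value 3 cmode runtime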
diff --git a/Documentation/networking/devlink/devlink-params-nfp.txt b/Documentation/networking/devlink/devlink-params-nfp.txt new file mode 100644 index 000000000000..43e4d4034865 --- /dev/null +++ b/Documentation/networking/devlink/devlink-params-nfp.txt @@ -0,0 +1,5 @@ +fw_load_policy [DEVICE, GENERIC] + Configuration mode: permanent + +reset_dev_on_drv_probe [DEVICE, GENERIC] + Configuration mode: permanent diff --git a/Documentation/networking/devlink/devlink-params-ti-cpsw-switch.txt b/Documentation/networking/devlink/devlink-params-ti-cpsw-switch.txt new file mode 100644 index 000000000000..4037458499f7 --- /dev/null +++ b/Documentation/networking/devlink/devlink-params-ti-cpsw-switch.txt @@ -0,0 +1,10 @@ +ale_bypass [DEVICE, DRIVER-SPECIFIC] + Allows enabling the ALE_CONTROL(4).BYPASS mode for debug purposes. + When enabled, all packets are sent to the host port only. + Type: bool + Configuration mode: runtime + +switch_mode [DEVICE, DRIVER-SPECIFIC] + Enables switch mode. + Type: bool + Configuration mode: runtime diff --git a/Documentation/networking/devlink/devlink-params.txt b/Documentation/networking/devlink/devlink-params.txt new file mode 100644 index 000000000000..04e234e9acc9 --- /dev/null +++ b/Documentation/networking/devlink/devlink-params.txt @@ -0,0 +1,71 @@ +Devlink configuration parameters +================================ +Following is the list of configuration parameters exposed via the devlink interface. +Each parameter can be generic or driver-specific, and all are device-level +parameters. + +Note that the driver-specific files should list the generic params +they support, along with the supported config modes. + +Each parameter can be set in different configuration modes: + runtime - set while the driver is running, no reset required. + driverinit - applied while the driver initializes, requires restarting + the driver via the devlink reload command. + permanent - written to the device's non-volatile memory, hard reset + required. + +Following is the list of parameters: +==================================== +enable_sriov [DEVICE, GENERIC] + Enable Single Root I/O Virtualisation (SRIOV) in + the device. + Type: Boolean + +ignore_ari [DEVICE, GENERIC] + Ignore the Alternative Routing-ID Interpretation (ARI) + capability. If enabled, the adapter will ignore the ARI + capability even when the platform has support for it + enabled, and will create the same number of partitions as + when the platform does not support ARI. + Type: Boolean + +msix_vec_per_pf_max [DEVICE, GENERIC] + Provides the maximum number of MSIX interrupts that + a device can create. The value is the same across all + physical functions (PFs) in the device. + Type: u32 + +msix_vec_per_pf_min [DEVICE, GENERIC] + Provides the minimum number of MSIX interrupts required + for device initialization. The value is the same across all + physical functions (PFs) in the device. + Type: u32 + +fw_load_policy [DEVICE, GENERIC] + Controls the device's firmware loading policy. + Valid values: + * DEVLINK_PARAM_FW_LOAD_POLICY_VALUE_DRIVER (0) + Load the firmware version preferred by the driver. + * DEVLINK_PARAM_FW_LOAD_POLICY_VALUE_FLASH (1) + Load the firmware currently stored in flash. + * DEVLINK_PARAM_FW_LOAD_POLICY_VALUE_DISK (2) + Load the firmware currently available on the host's disk. + Type: u8 + +reset_dev_on_drv_probe [DEVICE, GENERIC] + Controls the device's reset policy on driver probe. + Valid values: + * DEVLINK_PARAM_RESET_DEV_ON_DRV_PROBE_VALUE_UNKNOWN (0) + Unknown or invalid value. + * DEVLINK_PARAM_RESET_DEV_ON_DRV_PROBE_VALUE_ALWAYS (1) + Always reset the device on driver probe.
+ * DEVLINK_PARAM_RESET_DEV_ON_DRV_PROBE_VALUE_NEVER (2) + Never reset the device on driver probe. + * DEVLINK_PARAM_RESET_DEV_ON_DRV_PROBE_VALUE_DISK (3) + Reset only if device firmware can be found in the + filesystem. + Type: u8 + +enable_roce [DEVICE, GENERIC] + Enable handling of RoCE traffic in the device. + Type: Boolean diff --git a/Documentation/networking/devlink/devlink-trap-netdevsim.rst b/Documentation/networking/devlink/devlink-trap-netdevsim.rst new file mode 100644 index 000000000000..b721c9415473 --- /dev/null +++ b/Documentation/networking/devlink/devlink-trap-netdevsim.rst @@ -0,0 +1,20 @@ +.. SPDX-License-Identifier: GPL-2.0 + +====================== +Devlink Trap netdevsim +====================== + +Driver-specific Traps +===================== + +.. list-table:: List of Driver-specific Traps Registered by ``netdevsim`` + :widths: 5 5 90 + + * - Name + - Type + - Description + * - ``fid_miss`` + - ``exception`` + - When a packet enters the device it is classified to a filtering + identifier (FID) based on the ingress port and VLAN. This trap is used + to trap packets for which a FID could not be found diff --git a/Documentation/networking/devlink/devlink-trap.rst b/Documentation/networking/devlink/devlink-trap.rst new file mode 100644 index 000000000000..03311849bfb1 --- /dev/null +++ b/Documentation/networking/devlink/devlink-trap.rst @@ -0,0 +1,270 @@ +.. SPDX-License-Identifier: GPL-2.0 + +============ +Devlink Trap +============ + +Background +========== + +Devices capable of offloading the kernel's datapath and performing functions +such as bridging and routing must also be able to send specific packets to the +kernel (i.e., the CPU) for processing. + +For example, a device acting as a multicast-aware bridge must be able to send +IGMP membership reports to the kernel for processing by the bridge module. +Without processing such packets, the bridge module could never populate its +MDB. + +As another example, consider a device acting as a router which has received an +IP packet with a TTL of 1. Upon routing the packet, the device must also send +it to the kernel so that the kernel can route it as well and generate an ICMP +Time Exceeded error datagram. Without letting the kernel route such packets +itself, utilities such as ``traceroute`` could never work. + +The fundamental ability to send certain packets to the kernel for processing +is called "packet trapping". + +Overview +======== + +The ``devlink-trap`` mechanism allows capable device drivers to register their +supported packet traps with ``devlink`` and report trapped packets to +``devlink`` for further analysis. + +Upon receiving trapped packets, ``devlink`` will perform per-trap packet and +byte accounting and potentially report the packet to user space via a netlink +event along with all the provided metadata (e.g., trap reason, timestamp, input +port). This is especially useful for drop traps (see :ref:`Trap-Types`) +as it allows users to obtain further visibility into packet drops that would +otherwise be invisible.
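From userspace, traps are typically listed and tweaked with the iproute2 ``devlink trap`` commands. The sketch below uses the ``netdevsim/netdevsim0`` handle of the netdevsim test device and the generic ``blackhole_route`` trap (described later in this document) purely as illustrative examples; option spellings may differ between iproute2 versions::

    # List all traps registered on the system, with their current actions
    $ devlink trap show
    # Start reporting a drop trap to the CPU instead of silently counting it
    $ devlink trap set netdevsim/netdevsim0 trap blackhole_route action trap
    # Show per-trap packet and byte statistics
    $ devlink -s trap show netdevsim/netdevsim0 trap blackhole_route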
+ +The following diagram provides a general overview of ``devlink-trap``:: + + Netlink event: Packet w/ metadata + Or a summary of recent drops + ^ + | + Userspace | + +---------------------------------------------------+ + Kernel | + | + +-------+--------+ + | | + | drop_monitor | + | | + +-------^--------+ + | + | + | + +----+----+ + | | Kernel's Rx path + | devlink | (non-drop traps) + | | + +----^----+ ^ + | | + +-----------+ + | + +-------+-------+ + | | + | Device driver | + | | + +-------^-------+ + Kernel | + +---------------------------------------------------+ + Hardware | + | Trapped packet + | + +--+---+ + | | + | ASIC | + | | + +------+ + +.. _Trap-Types: + +Trap Types +========== + +The ``devlink-trap`` mechanism supports the following packet trap types: + + * ``drop``: Trapped packets were dropped by the underlying device. Packets + are only processed by ``devlink`` and not injected into the kernel's Rx path. + The trap action (see :ref:`Trap-Actions`) can be changed. + * ``exception``: Trapped packets were not forwarded as intended by the + underlying device due to an exception (e.g., TTL error, missing neighbour + entry) and trapped to the control plane for resolution. Packets are + processed by ``devlink`` and injected into the kernel's Rx path. Changing the + action of such traps is not allowed, as it can easily break the control + plane. + +.. _Trap-Actions: + +Trap Actions +============ + +The ``devlink-trap`` mechanism supports the following packet trap actions: + + * ``trap``: The sole copy of the packet is sent to the CPU. + * ``drop``: The packet is dropped by the underlying device and a copy is not + sent to the CPU. + +Generic Packet Traps +==================== + +Generic packet traps are used to describe traps that trap well-defined packets +or packets that are trapped due to well-defined conditions (e.g., TTL error). +Such traps can be shared by multiple device drivers and their description must +be added to the following table: + ..
list-table:: List of Generic Packet Traps + :widths: 5 5 90 + + * - Name + - Type + - Description + * - ``source_mac_is_multicast`` + - ``drop`` + - Traps incoming packets that the device decided to drop because of a + multicast source MAC + * - ``vlan_tag_mismatch`` + - ``drop`` + - Traps incoming packets that the device decided to drop in case of VLAN + tag mismatch: The ingress bridge port is not configured with a PVID and + the packet is untagged or prio-tagged + * - ``ingress_vlan_filter`` + - ``drop`` + - Traps incoming packets that the device decided to drop in case they are + tagged with a VLAN that is not configured on the ingress bridge port + * - ``ingress_spanning_tree_filter`` + - ``drop`` + - Traps incoming packets that the device decided to drop in case the STP + state of the ingress bridge port is not "forwarding" + * - ``port_list_is_empty`` + - ``drop`` + - Traps packets that the device decided to drop in case they need to be + flooded (e.g., unknown unicast, unregistered multicast) and there are + no ports the packets should be flooded to + * - ``port_loopback_filter`` + - ``drop`` + - Traps packets that the device decided to drop in case, after layer 2 + forwarding, the only port through which they should be transmitted + is the port on which they were received + * - ``blackhole_route`` + - ``drop`` + - Traps packets that the device decided to drop in case they hit a + blackhole route + * - ``ttl_value_is_too_small`` + - ``exception`` + - Traps unicast packets whose TTL was decremented to 0 or less and that + should otherwise have been forwarded by the device + * - ``tail_drop`` + - ``drop`` + - Traps packets that the device decided to drop because they could not be + enqueued to a transmission queue which is full + * - ``non_ip`` + - ``drop`` + - Traps packets that the device decided to drop because they need to + undergo a layer 3 lookup, but are not IP or MPLS packets + * - ``uc_dip_over_mc_dmac`` + - ``drop`` + - Traps packets that the device decided to drop because they need to be + routed and they have a unicast destination IP and a multicast destination + MAC + * - ``dip_is_loopback_address`` + - ``drop`` + - Traps packets that the device decided to drop because they need to be + routed and their destination IP is the loopback address (i.e., 127.0.0.0/8 + and ::1/128) + * - ``sip_is_mc`` + - ``drop`` + - Traps packets that the device decided to drop because they need to be + routed and their source IP is multicast (i.e., 224.0.0.0/4 and ff00::/8) + * - ``sip_is_loopback_address`` + - ``drop`` + - Traps packets that the device decided to drop because they need to be + routed and their source IP is the loopback address (i.e., 127.0.0.0/8 and ::1/128) + * - ``ip_header_corrupted`` + - ``drop`` + - Traps packets that the device decided to drop because they need to be + routed and their IP header is corrupted: wrong checksum, wrong IP version + or too short Internet Header Length (IHL) + * - ``ipv4_sip_is_limited_bc`` + - ``drop`` + - Traps packets that the device decided to drop because they need to be + routed and their source IP is the limited broadcast address (i.e., 255.255.255.255/32) + * - ``ipv6_mc_dip_reserved_scope`` + - ``drop`` + - Traps IPv6 packets that the device decided to drop because they need to + be routed and their IPv6 multicast destination IP has a reserved scope + (i.e., ffx0::/16) + * - ``ipv6_mc_dip_interface_local_scope`` + - ``drop`` + - Traps IPv6 packets that the device decided to drop because they need to + be routed and their IPv6 multicast destination IP has an
interface-local scope + (i.e., ffx1::/16) + * - ``mtu_value_is_too_small`` + - ``exception`` + - Traps packets that should have been routed by the device, but were bigger + than the MTU of the egress interface + * - ``unresolved_neigh`` + - ``exception`` + - Traps packets that did not have a matching IP neighbour after routing + * - ``mc_reverse_path_forwarding`` + - ``exception`` + - Traps multicast IP packets that failed reverse-path forwarding (RPF) + check during multicast routing + * - ``reject_route`` + - ``exception`` + - Traps packets that hit reject routes (i.e., "unreachable", "prohibit") + * - ``ipv4_lpm_miss`` + - ``exception`` + - Traps unicast IPv4 packets that did not match any route + * - ``ipv6_lpm_miss`` + - ``exception`` + - Traps unicast IPv6 packets that did not match any route + +Driver-specific Packet Traps +============================ + +Device drivers can register driver-specific packet traps, but these must be +clearly documented. Such traps can correspond to device-specific exceptions and +help debug packet drops caused by these exceptions. The following list includes +links to the description of driver-specific traps registered by various device +drivers: + + * :doc:`devlink-trap-netdevsim` + +Generic Packet Trap Groups +========================== + +Generic packet trap groups are used to aggregate logically related packet +traps. These groups allow the user to batch operations such as setting the trap +action of all member traps. In addition, ``devlink-trap`` can report aggregated +per-group packets and bytes statistics, in case per-trap statistics are too +narrow. The description of these groups must be added to the following table: + +.. list-table:: List of Generic Packet Trap Groups + :widths: 10 90 + + * - Name + - Description + * - ``l2_drops`` + - Contains packet traps for packets that were dropped by the device during + layer 2 forwarding (i.e., bridge) + * - ``l3_drops`` + - Contains packet traps for packets that were dropped by the device or hit + an exception (e.g., TTL error) during layer 3 forwarding + * - ``buffer_drops`` + - Contains packet traps for packets that were dropped by the device due to + an enqueue decision + +Testing +======= + +See ``tools/testing/selftests/drivers/net/netdevsim/devlink_trap.sh`` for a +test covering the core infrastructure. Test cases should be added for any new +functionality. + +Device drivers should focus their tests on device-specific functionality, such +as the triggering of supported packet traps. diff --git a/Documentation/networking/devlink/index.rst b/Documentation/networking/devlink/index.rst new file mode 100644 index 000000000000..1252c2a1b680 --- /dev/null +++ b/Documentation/networking/devlink/index.rst @@ -0,0 +1,14 @@ +Linux Devlink Documentation +=========================== + +devlink is an API to expose device information and resources not directly +related to any device class, such as chip-wide/switch-ASIC-wide configuration. + +Contents: + +.. 
toctree:: + :maxdepth: 1 + + devlink-info-versions + devlink-trap + devlink-trap-netdevsim diff --git a/Documentation/networking/index.rst b/Documentation/networking/index.rst index bee73be7af93..d07d9855dcd3 100644 --- a/Documentation/networking/index.rst +++ b/Documentation/networking/index.rst @@ -13,9 +13,7 @@ Contents: can_ucan_protocol device_drivers/index dsa/index - devlink-info-versions - devlink-trap - devlink-trap-netdevsim + devlink/index ethtool-netlink ieee802154 j1939 diff --git a/MAINTAINERS b/MAINTAINERS index 9dcb9cab5705..d0ea00e3e8b1 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4848,6 +4848,7 @@ S: Supported F: net/core/devlink.c F: include/net/devlink.h F: include/uapi/linux/devlink.h +F: Documentation/networking/devlink DIALOG SEMICONDUCTOR DRIVERS M: Support Opensource diff --git a/drivers/net/netdevsim/dev.c b/drivers/net/netdevsim/dev.c index 059711edfc61..6663f79fe5d1 100644 --- a/drivers/net/netdevsim/dev.c +++ b/drivers/net/netdevsim/dev.c @@ -270,7 +270,7 @@ struct nsim_trap_data { }; /* All driver-specific traps must be documented in - * Documentation/networking/devlink-trap-netdevsim.rst + * Documentation/networking/devlink/devlink-trap-netdevsim.rst */ enum { NSIM_TRAP_ID_BASE = DEVLINK_TRAP_GENERIC_ID_MAX, diff --git a/include/net/devlink.h b/include/net/devlink.h index 4e80d9acdb86..a6856f1d5d1f 100644 --- a/include/net/devlink.h +++ b/include/net/devlink.h @@ -564,7 +564,7 @@ struct devlink_trap { }; /* All traps must be documented in - * Documentation/networking/devlink-trap.rst + * Documentation/networking/devlink/devlink-trap.rst */ enum devlink_trap_generic_id { DEVLINK_TRAP_GENERIC_ID_SMAC_MC, @@ -598,7 +598,7 @@ enum devlink_trap_generic_id { }; /* All trap groups must be documented in - * Documentation/networking/devlink-trap.rst + * Documentation/networking/devlink/devlink-trap.rst */ enum devlink_trap_group_generic_id { DEVLINK_TRAP_GROUP_GENERIC_ID_L2_DROPS, -- cgit v1.2.3-71-gd317 From a442c2c3850dc308ab972f3d10d1077e2c8fd035 Mon Sep 17 00:00:00 2001 From: Jonathan Lemon Date: Thu, 9 Jan 2020 11:23:17 -0800 Subject: mlx4: Bump up MAX_MSIX from 64 to 128 On modern hardware with a large number of cpus and using XDP, the current MSIX limit is insufficient. Bump the limit in order to allow more queues. Signed-off-by: Jonathan Lemon Reviewed-by: Jack Wang Reviewed-by: Tariq Toukan Signed-off-by: Jakub Kicinski --- include/linux/mlx4/device.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'include') diff --git a/include/linux/mlx4/device.h b/include/linux/mlx4/device.h index 36e412c3d657..20372de0b587 100644 --- a/include/linux/mlx4/device.h +++ b/include/linux/mlx4/device.h @@ -47,7 +47,7 @@ #define DEFAULT_UAR_PAGE_SHIFT 12 #define MAX_MSIX_P_PORT 17 -#define MAX_MSIX 64 +#define MAX_MSIX 128 #define MIN_MSIX_P_PORT 5 #define MLX4_IS_LEGACY_EQ_MODE(dev_cap) ((dev_cap).num_comp_vectors < \ (dev_cap).num_ports * MIN_MSIX_P_PORT) -- cgit v1.2.3-71-gd317 From c74f16b6034401b17bb1cf549871186a8ece5f92 Mon Sep 17 00:00:00 2001 From: Arnd Bergmann Date: Sun, 12 Jan 2020 13:04:43 +0100 Subject: wan: ixp4xx_hss: prepare compile testing The ixp4xx_hss driver needs the platform data definition and the system clock rate to be compiled. Move both into a new platform_data header file. This is a prerequisite for compile testing, but turning on compile testing requires further patches to isolate the SoC headers. 
Signed-off-by: Arnd Bergmann Signed-off-by: Linus Walleij Signed-off-by: Jakub Kicinski --- arch/arm/mach-ixp4xx/goramo_mlr.c | 4 ++++ arch/arm/mach-ixp4xx/include/mach/platform.h | 9 ------- drivers/net/wan/Kconfig | 3 ++- drivers/net/wan/ixp4xx_hss.c | 35 ++++++++++++++++------------ include/linux/platform_data/wan_ixp4xx_hss.h | 17 ++++++++++++++ 5 files changed, 43 insertions(+), 25 deletions(-) create mode 100644 include/linux/platform_data/wan_ixp4xx_hss.h (limited to 'include') diff --git a/arch/arm/mach-ixp4xx/goramo_mlr.c b/arch/arm/mach-ixp4xx/goramo_mlr.c index a0e0b6b7dc5c..93b7afeee142 100644 --- a/arch/arm/mach-ixp4xx/goramo_mlr.c +++ b/arch/arm/mach-ixp4xx/goramo_mlr.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include #include @@ -405,6 +406,9 @@ static void __init gmlr_init(void) if (hw_bits & CFG_HW_HAS_HSS1) device_tab[devices++] = &device_hss_tab[1]; /* max index 5 */ + hss_plat[0].timer_freq = ixp4xx_timer_freq; + hss_plat[1].timer_freq = ixp4xx_timer_freq; + gpio_request(GPIO_SCL, "SCL/clock"); gpio_request(GPIO_SDA, "SDA/data"); gpio_request(GPIO_STR, "strobe"); diff --git a/arch/arm/mach-ixp4xx/include/mach/platform.h b/arch/arm/mach-ixp4xx/include/mach/platform.h index 342acbe20f7c..04ef8025accc 100644 --- a/arch/arm/mach-ixp4xx/include/mach/platform.h +++ b/arch/arm/mach-ixp4xx/include/mach/platform.h @@ -104,15 +104,6 @@ struct eth_plat_info { u8 hwaddr[6]; }; -/* Information about built-in HSS (synchronous serial) interfaces */ -struct hss_plat_info { - int (*set_clock)(int port, unsigned int clock_type); - int (*open)(int port, void *pdev, - void (*set_carrier_cb)(void *pdev, int carrier)); - void (*close)(int port, void *pdev); - u8 txreadyq; -}; - /* * Frequency of clock used for primary clocksource */ diff --git a/drivers/net/wan/Kconfig b/drivers/net/wan/Kconfig index dd1a147f2971..4530840e15ef 100644 --- a/drivers/net/wan/Kconfig +++ b/drivers/net/wan/Kconfig @@ -315,7 +315,8 @@ config DSCC4_PCI_RST config IXP4XX_HSS tristate "Intel IXP4xx HSS (synchronous serial port) support" - depends on HDLC && ARM && ARCH_IXP4XX && IXP4XX_NPE && IXP4XX_QMGR + depends on HDLC && IXP4XX_NPE && IXP4XX_QMGR + depends on ARCH_IXP4XX help Say Y here if you want to use built-in HSS ports on IXP4xx processor. diff --git a/drivers/net/wan/ixp4xx_hss.c b/drivers/net/wan/ixp4xx_hss.c index e7619cec978a..7c5cf77e9ef1 100644 --- a/drivers/net/wan/ixp4xx_hss.c +++ b/drivers/net/wan/ixp4xx_hss.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include #include @@ -1182,14 +1183,14 @@ static int hss_hdlc_attach(struct net_device *dev, unsigned short encoding, } } -static u32 check_clock(u32 rate, u32 a, u32 b, u32 c, +static u32 check_clock(u32 timer_freq, u32 rate, u32 a, u32 b, u32 c, u32 *best, u32 *best_diff, u32 *reg) { /* a is 10-bit, b is 10-bit, c is 12-bit */ u64 new_rate; u32 new_diff; - new_rate = ixp4xx_timer_freq * (u64)(c + 1); + new_rate = timer_freq * (u64)(c + 1); do_div(new_rate, a * (c + 1) + b + 1); new_diff = abs((u32)new_rate - rate); @@ -1201,40 +1202,43 @@ static u32 check_clock(u32 rate, u32 a, u32 b, u32 c, return new_diff; } -static void find_best_clock(u32 rate, u32 *best, u32 *reg) +static void find_best_clock(u32 timer_freq, u32 rate, u32 *best, u32 *reg) { u32 a, b, diff = 0xFFFFFFFF; - a = ixp4xx_timer_freq / rate; + a = timer_freq / rate; if (a > 0x3FF) { /* 10-bit value - we can go as slow as ca. 
65 kb/s */ - check_clock(rate, 0x3FF, 1, 1, best, &diff, reg); + check_clock(timer_freq, rate, 0x3FF, 1, 1, best, &diff, reg); return; } if (a == 0) { /* > 66.666 MHz */ a = 1; /* minimum divider is 1 (a = 0, b = 1, c = 1) */ - rate = ixp4xx_timer_freq; + rate = timer_freq; } - if (rate * a == ixp4xx_timer_freq) { /* don't divide by 0 later */ - check_clock(rate, a - 1, 1, 1, best, &diff, reg); + if (rate * a == timer_freq) { /* don't divide by 0 later */ + check_clock(timer_freq, rate, a - 1, 1, 1, best, &diff, reg); return; } for (b = 0; b < 0x400; b++) { u64 c = (b + 1) * (u64)rate; - do_div(c, ixp4xx_timer_freq - rate * a); + do_div(c, timer_freq - rate * a); c--; if (c >= 0xFFF) { /* 12-bit - no need to check more 'b's */ if (b == 0 && /* also try a bit higher rate */ - !check_clock(rate, a - 1, 1, 1, best, &diff, reg)) + !check_clock(timer_freq, rate, a - 1, 1, 1, best, + &diff, reg)) return; - check_clock(rate, a, b, 0xFFF, best, &diff, reg); + check_clock(timer_freq, rate, a, b, 0xFFF, best, + &diff, reg); return; } - if (!check_clock(rate, a, b, c, best, &diff, reg)) + if (!check_clock(timer_freq, rate, a, b, c, best, &diff, reg)) return; - if (!check_clock(rate, a, b, c + 1, best, &diff, reg)) + if (!check_clock(timer_freq, rate, a, b, c + 1, best, &diff, + reg)) return; } } @@ -1285,8 +1289,9 @@ static int hss_hdlc_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) port->clock_type = clk; /* Update settings */ if (clk == CLOCK_INT) - find_best_clock(new_line.clock_rate, &port->clock_rate, - &port->clock_reg); + find_best_clock(port->plat->timer_freq, + new_line.clock_rate, + &port->clock_rate, &port->clock_reg); else { port->clock_rate = 0; port->clock_reg = CLK42X_SPEED_2048KHZ; diff --git a/include/linux/platform_data/wan_ixp4xx_hss.h b/include/linux/platform_data/wan_ixp4xx_hss.h new file mode 100644 index 000000000000..d525a0feb9e1 --- /dev/null +++ b/include/linux/platform_data/wan_ixp4xx_hss.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __PLATFORM_DATA_WAN_IXP4XX_HSS_H +#define __PLATFORM_DATA_WAN_IXP4XX_HSS_H + +#include + +/* Information about built-in HSS (synchronous serial) interfaces */ +struct hss_plat_info { + int (*set_clock)(int port, unsigned int clock_type); + int (*open)(int port, void *pdev, + void (*set_carrier_cb)(void *pdev, int carrier)); + void (*close)(int port, void *pdev); + u8 txreadyq; + u32 timer_freq; +}; + +#endif -- cgit v1.2.3-71-gd317 From a41a5b26d29fb0123cd3290dca453857cd8c0c66 Mon Sep 17 00:00:00 2001 From: Arnd Bergmann Date: Sun, 12 Jan 2020 13:04:45 +0100 Subject: ixp4xx_eth: move platform_data definition The platform data is needed to compile the driver as standalone, so move it to a global location along with similar files. 
Signed-off-by: Arnd Bergmann Signed-off-by: Linus Walleij Signed-off-by: Jakub Kicinski --- arch/arm/mach-ixp4xx/include/mach/platform.h | 13 +----- drivers/net/ethernet/xscale/ixp46x_ts.h | 68 ++++++++++++++++++++++++++++ drivers/net/ethernet/xscale/ixp4xx_eth.c | 1 + drivers/net/ethernet/xscale/ptp_ixp46x.h | 68 ---------------------------- include/linux/platform_data/eth_ixp4xx.h | 19 ++++++++ 5 files changed, 89 insertions(+), 80 deletions(-) create mode 100644 drivers/net/ethernet/xscale/ixp46x_ts.h delete mode 100644 drivers/net/ethernet/xscale/ptp_ixp46x.h create mode 100644 include/linux/platform_data/eth_ixp4xx.h (limited to 'include') diff --git a/arch/arm/mach-ixp4xx/include/mach/platform.h b/arch/arm/mach-ixp4xx/include/mach/platform.h index 04ef8025accc..6d403fe0bf52 100644 --- a/arch/arm/mach-ixp4xx/include/mach/platform.h +++ b/arch/arm/mach-ixp4xx/include/mach/platform.h @@ -15,6 +15,7 @@ #ifndef __ASSEMBLY__ #include +#include #include @@ -92,18 +93,6 @@ struct ixp4xx_pata_data { void __iomem *cs1; }; -#define IXP4XX_ETH_NPEA 0x00 -#define IXP4XX_ETH_NPEB 0x10 -#define IXP4XX_ETH_NPEC 0x20 - -/* Information about built-in Ethernet MAC interfaces */ -struct eth_plat_info { - u8 phy; /* MII PHY ID, 0 - 31 */ - u8 rxq; /* configurable, currently 0 - 31 only */ - u8 txreadyq; - u8 hwaddr[6]; -}; - /* * Frequency of clock used for primary clocksource */ diff --git a/drivers/net/ethernet/xscale/ixp46x_ts.h b/drivers/net/ethernet/xscale/ixp46x_ts.h new file mode 100644 index 000000000000..d792130e27b0 --- /dev/null +++ b/drivers/net/ethernet/xscale/ixp46x_ts.h @@ -0,0 +1,68 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * PTP 1588 clock using the IXP46X + * + * Copyright (C) 2010 OMICRON electronics GmbH + */ + +#ifndef _IXP46X_TS_H_ +#define _IXP46X_TS_H_ + +#define DEFAULT_ADDEND 0xF0000029 +#define TICKS_NS_SHIFT 4 + +struct ixp46x_channel_ctl { + u32 ch_control; /* 0x40 Time Synchronization Channel Control */ + u32 ch_event; /* 0x44 Time Synchronization Channel Event */ + u32 tx_snap_lo; /* 0x48 Transmit Snapshot Low Register */ + u32 tx_snap_hi; /* 0x4C Transmit Snapshot High Register */ + u32 rx_snap_lo; /* 0x50 Receive Snapshot Low Register */ + u32 rx_snap_hi; /* 0x54 Receive Snapshot High Register */ + u32 src_uuid_lo; /* 0x58 Source UUID0 Low Register */ + u32 src_uuid_hi; /* 0x5C Sequence Identifier/Source UUID0 High */ +}; + +struct ixp46x_ts_regs { + u32 control; /* 0x00 Time Sync Control Register */ + u32 event; /* 0x04 Time Sync Event Register */ + u32 addend; /* 0x08 Time Sync Addend Register */ + u32 accum; /* 0x0C Time Sync Accumulator Register */ + u32 test; /* 0x10 Time Sync Test Register */ + u32 unused; /* 0x14 */ + u32 rsystime_lo; /* 0x18 RawSystemTime_Low Register */ + u32 rsystime_hi; /* 0x1C RawSystemTime_High Register */ + u32 systime_lo; /* 0x20 SystemTime_Low Register */ + u32 systime_hi; /* 0x24 SystemTime_High Register */ + u32 trgt_lo; /* 0x28 TargetTime_Low Register */ + u32 trgt_hi; /* 0x2C TargetTime_High Register */ + u32 asms_lo; /* 0x30 Auxiliary Slave Mode Snapshot Low */ + u32 asms_hi; /* 0x34 Auxiliary Slave Mode Snapshot High */ + u32 amms_lo; /* 0x38 Auxiliary Master Mode Snapshot Low */ + u32 amms_hi; /* 0x3C Auxiliary Master Mode Snapshot High */ + + struct ixp46x_channel_ctl channel[3]; +}; + +/* 0x00 Time Sync Control Register Bits */ +#define TSCR_AMM (1<<3) +#define TSCR_ASM (1<<2) +#define TSCR_TTM (1<<1) +#define TSCR_RST (1<<0) + +/* 0x04 Time Sync Event Register Bits */ +#define TSER_SNM (1<<3) +#define TSER_SNS 
(1<<2) +#define TTIPEND (1<<1) + +/* 0x40 Time Synchronization Channel Control Register Bits */ +#define MASTER_MODE (1<<0) +#define TIMESTAMP_ALL (1<<1) + +/* 0x44 Time Synchronization Channel Event Register Bits */ +#define TX_SNAPSHOT_LOCKED (1<<0) +#define RX_SNAPSHOT_LOCKED (1<<1) + +/* The ptp_ixp46x module will set this variable */ +extern int ixp46x_phc_index; + +#endif diff --git a/drivers/net/ethernet/xscale/ixp4xx_eth.c b/drivers/net/ethernet/xscale/ixp4xx_eth.c index 0075ecdb21f4..e811bf0d23cb 100644 --- a/drivers/net/ethernet/xscale/ixp4xx_eth.c +++ b/drivers/net/ethernet/xscale/ixp4xx_eth.c @@ -29,6 +29,7 @@ #include #include #include +#include #include #include #include diff --git a/drivers/net/ethernet/xscale/ptp_ixp46x.h b/drivers/net/ethernet/xscale/ptp_ixp46x.h deleted file mode 100644 index d792130e27b0..000000000000 --- a/drivers/net/ethernet/xscale/ptp_ixp46x.h +++ /dev/null @@ -1,68 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-or-later */ -/* - * PTP 1588 clock using the IXP46X - * - * Copyright (C) 2010 OMICRON electronics GmbH - */ - -#ifndef _IXP46X_TS_H_ -#define _IXP46X_TS_H_ - -#define DEFAULT_ADDEND 0xF0000029 -#define TICKS_NS_SHIFT 4 - -struct ixp46x_channel_ctl { - u32 ch_control; /* 0x40 Time Synchronization Channel Control */ - u32 ch_event; /* 0x44 Time Synchronization Channel Event */ - u32 tx_snap_lo; /* 0x48 Transmit Snapshot Low Register */ - u32 tx_snap_hi; /* 0x4C Transmit Snapshot High Register */ - u32 rx_snap_lo; /* 0x50 Receive Snapshot Low Register */ - u32 rx_snap_hi; /* 0x54 Receive Snapshot High Register */ - u32 src_uuid_lo; /* 0x58 Source UUID0 Low Register */ - u32 src_uuid_hi; /* 0x5C Sequence Identifier/Source UUID0 High */ -}; - -struct ixp46x_ts_regs { - u32 control; /* 0x00 Time Sync Control Register */ - u32 event; /* 0x04 Time Sync Event Register */ - u32 addend; /* 0x08 Time Sync Addend Register */ - u32 accum; /* 0x0C Time Sync Accumulator Register */ - u32 test; /* 0x10 Time Sync Test Register */ - u32 unused; /* 0x14 */ - u32 rsystime_lo; /* 0x18 RawSystemTime_Low Register */ - u32 rsystime_hi; /* 0x1C RawSystemTime_High Register */ - u32 systime_lo; /* 0x20 SystemTime_Low Register */ - u32 systime_hi; /* 0x24 SystemTime_High Register */ - u32 trgt_lo; /* 0x28 TargetTime_Low Register */ - u32 trgt_hi; /* 0x2C TargetTime_High Register */ - u32 asms_lo; /* 0x30 Auxiliary Slave Mode Snapshot Low */ - u32 asms_hi; /* 0x34 Auxiliary Slave Mode Snapshot High */ - u32 amms_lo; /* 0x38 Auxiliary Master Mode Snapshot Low */ - u32 amms_hi; /* 0x3C Auxiliary Master Mode Snapshot High */ - - struct ixp46x_channel_ctl channel[3]; -}; - -/* 0x00 Time Sync Control Register Bits */ -#define TSCR_AMM (1<<3) -#define TSCR_ASM (1<<2) -#define TSCR_TTM (1<<1) -#define TSCR_RST (1<<0) - -/* 0x04 Time Sync Event Register Bits */ -#define TSER_SNM (1<<3) -#define TSER_SNS (1<<2) -#define TTIPEND (1<<1) - -/* 0x40 Time Synchronization Channel Control Register Bits */ -#define MASTER_MODE (1<<0) -#define TIMESTAMP_ALL (1<<1) - -/* 0x44 Time Synchronization Channel Event Register Bits */ -#define TX_SNAPSHOT_LOCKED (1<<0) -#define RX_SNAPSHOT_LOCKED (1<<1) - -/* The ptp_ixp46x module will set this variable */ -extern int ixp46x_phc_index; - -#endif diff --git a/include/linux/platform_data/eth_ixp4xx.h b/include/linux/platform_data/eth_ixp4xx.h new file mode 100644 index 000000000000..6f652ea0c6ae --- /dev/null +++ b/include/linux/platform_data/eth_ixp4xx.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __PLATFORM_DATA_ETH_IXP4XX 
+#define __PLATFORM_DATA_ETH_IXP4XX + +#include + +#define IXP4XX_ETH_NPEA 0x00 +#define IXP4XX_ETH_NPEB 0x10 +#define IXP4XX_ETH_NPEC 0x20 + +/* Information about built-in Ethernet MAC interfaces */ +struct eth_plat_info { + u8 phy; /* MII PHY ID, 0 - 31 */ + u8 rxq; /* configurable, currently 0 - 31 only */ + u8 txreadyq; + u8 hwaddr[6]; +}; + +#endif -- cgit v1.2.3-71-gd317 From 0eac8ce95bb386838121189b2aa2216cd070f143 Mon Sep 17 00:00:00 2001 From: Jesper Dangaard Brouer Date: Mon, 13 Jan 2020 11:22:16 +0100 Subject: ptr_ring: add include of linux/mm.h Commit 0bf7800f1799 ("ptr_ring: try vmalloc() when kmalloc() fails") started to use kvmalloc_array and kvfree, which are defined in mm.h, the previous functions kcalloc and kfree, which are defined in slab.h. Add the missing include of linux/mm.h. This went unnoticed as other include files happened to include mm.h. Fixes: 0bf7800f1799 ("ptr_ring: try vmalloc() when kmalloc() fails") Signed-off-by: Jesper Dangaard Brouer Acked-by: Michael S. Tsirkin Signed-off-by: Jakub Kicinski --- include/linux/ptr_ring.h | 1 + 1 file changed, 1 insertion(+) (limited to 'include') diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h index 0abe9a4fc842..417db0a79a62 100644 --- a/include/linux/ptr_ring.h +++ b/include/linux/ptr_ring.h @@ -23,6 +23,7 @@ #include #include #include +#include #include #endif -- cgit v1.2.3-71-gd317 From 579a25a854d482bc9d0f9ab0e07ba32fb66bd9e3 Mon Sep 17 00:00:00 2001 From: Jose Abreu Date: Mon, 13 Jan 2020 17:24:09 +0100 Subject: net: stmmac: Initial support for TBS Adds the initial hooks for TBS support. This needs a 32 byte descriptor in order for it to work with current HW. Adds all the logic for Enhanced Descriptors in main core but no HW related logic for now. Changes from v2: - Use bitfield for TBS status / support (Jakub) - Remove unneeded cache alignment (Jakub) - Fix checkpatch issues Signed-off-by: Jose Abreu Signed-off-by: Jakub Kicinski --- drivers/net/ethernet/stmicro/stmmac/descs.h | 9 + drivers/net/ethernet/stmicro/stmmac/hwif.h | 7 + drivers/net/ethernet/stmicro/stmmac/stmmac.h | 5 + drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 199 +++++++++++++++------- include/linux/stmmac.h | 1 + 5 files changed, 156 insertions(+), 65 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/stmicro/stmmac/descs.h b/drivers/net/ethernet/stmicro/stmmac/descs.h index 9f0b9a9e63b3..49d6a866244f 100644 --- a/drivers/net/ethernet/stmicro/stmmac/descs.h +++ b/drivers/net/ethernet/stmicro/stmmac/descs.h @@ -171,6 +171,15 @@ struct dma_extended_desc { __le32 des7; /* Tx/Rx Timestamp High */ }; +/* Enhanced descriptor for TBS */ +struct dma_edesc { + __le32 des4; + __le32 des5; + __le32 des6; + __le32 des7; + struct dma_desc basic; +}; + /* Transmit checksum insertion control */ #define TX_CIC_FULL 3 /* Include IP header and pseudoheader */ diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h index 905a6f0edaca..71c23cbd7af8 100644 --- a/drivers/net/ethernet/stmicro/stmmac/hwif.h +++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h @@ -29,6 +29,7 @@ struct stmmac_extra_stats; struct stmmac_safety_stats; struct dma_desc; struct dma_extended_desc; +struct dma_edesc; /* Descriptors helpers */ struct stmmac_desc_ops { @@ -95,6 +96,7 @@ struct stmmac_desc_ops { void (*set_vlan_tag)(struct dma_desc *p, u16 tag, u16 inner_tag, u32 inner_type); void (*set_vlan)(struct dma_desc *p, u32 type); + void (*set_tbs)(struct dma_edesc *p, u32 sec, u32 nsec); }; #define 
stmmac_init_rx_desc(__priv, __args...) \ @@ -157,6 +159,8 @@ struct stmmac_desc_ops { stmmac_do_void_callback(__priv, desc, set_vlan_tag, __args) #define stmmac_set_desc_vlan(__priv, __args...) \ stmmac_do_void_callback(__priv, desc, set_vlan, __args) +#define stmmac_set_desc_tbs(__priv, __args...) \ + stmmac_do_void_callback(__priv, desc, set_tbs, __args) struct stmmac_dma_cfg; struct dma_features; @@ -210,6 +214,7 @@ struct stmmac_dma_ops { void (*qmode)(void __iomem *ioaddr, u32 channel, u8 qmode); void (*set_bfsize)(void __iomem *ioaddr, int bfsize, u32 chan); void (*enable_sph)(void __iomem *ioaddr, bool en, u32 chan); + int (*enable_tbs)(void __iomem *ioaddr, bool en, u32 chan); }; #define stmmac_reset(__priv, __args...) \ @@ -268,6 +273,8 @@ struct stmmac_dma_ops { stmmac_do_void_callback(__priv, dma, set_bfsize, __args) #define stmmac_enable_sph(__priv, __args...) \ stmmac_do_void_callback(__priv, dma, enable_sph, __args) +#define stmmac_enable_tbs(__priv, __args...) \ + stmmac_do_callback(__priv, dma, enable_tbs, __args) struct mac_device_info; struct net_device; diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h index f98c5eefb382..9c02fc754bf1 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h @@ -39,13 +39,18 @@ struct stmmac_tx_info { bool is_jumbo; }; +#define STMMAC_TBS_AVAIL BIT(0) +#define STMMAC_TBS_EN BIT(1) + /* Frequently used values are kept adjacent for cache effect */ struct stmmac_tx_queue { u32 tx_count_frames; + int tbs; struct timer_list txtimer; u32 queue_index; struct stmmac_priv *priv_data; struct dma_extended_desc *dma_etx ____cacheline_aligned_in_smp; + struct dma_edesc *dma_entx; struct dma_desc *dma_tx; struct sk_buff **tx_skbuff; struct stmmac_tx_info *tx_skbuff_dma; diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index f113138df0d9..baffb4e8d99a 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -1090,6 +1090,8 @@ static void stmmac_display_tx_rings(struct stmmac_priv *priv) if (priv->extend_desc) head_tx = (void *)tx_q->dma_etx; + else if (tx_q->tbs & STMMAC_TBS_AVAIL) + head_tx = (void *)tx_q->dma_entx; else head_tx = (void *)tx_q->dma_tx; @@ -1163,13 +1165,19 @@ static void stmmac_clear_tx_descriptors(struct stmmac_priv *priv, u32 queue) int i; /* Clear the TX descriptors */ - for (i = 0; i < DMA_TX_SIZE; i++) + for (i = 0; i < DMA_TX_SIZE; i++) { + int last = (i == (DMA_TX_SIZE - 1)); + struct dma_desc *p; + if (priv->extend_desc) - stmmac_init_tx_desc(priv, &tx_q->dma_etx[i].basic, - priv->mode, (i == DMA_TX_SIZE - 1)); + p = &tx_q->dma_etx[i].basic; + else if (tx_q->tbs & STMMAC_TBS_AVAIL) + p = &tx_q->dma_entx[i].basic; else - stmmac_init_tx_desc(priv, &tx_q->dma_tx[i], - priv->mode, (i == DMA_TX_SIZE - 1)); + p = &tx_q->dma_tx[i]; + + stmmac_init_tx_desc(priv, p, priv->mode, last); + } } /** @@ -1383,7 +1391,7 @@ static int init_dma_tx_desc_rings(struct net_device *dev) if (priv->extend_desc) stmmac_mode_init(priv, tx_q->dma_etx, tx_q->dma_tx_phy, DMA_TX_SIZE, 1); - else + else if (!(tx_q->tbs & STMMAC_TBS_AVAIL)) stmmac_mode_init(priv, tx_q->dma_tx, tx_q->dma_tx_phy, DMA_TX_SIZE, 0); } @@ -1392,6 +1400,8 @@ static int init_dma_tx_desc_rings(struct net_device *dev) struct dma_desc *p; if (priv->extend_desc) p = &((tx_q->dma_etx + i)->basic); + else if (tx_q->tbs & STMMAC_TBS_AVAIL) + p = 
&((tx_q->dma_entx + i)->basic); else p = tx_q->dma_tx + i; @@ -1511,19 +1521,26 @@ static void free_dma_tx_desc_resources(struct stmmac_priv *priv) /* Free TX queue resources */ for (queue = 0; queue < tx_count; queue++) { struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + size_t size; + void *addr; /* Release the DMA TX socket buffers */ dma_free_tx_skbufs(priv, queue); - /* Free DMA regions of consistent memory previously allocated */ - if (!priv->extend_desc) - dma_free_coherent(priv->device, - DMA_TX_SIZE * sizeof(struct dma_desc), - tx_q->dma_tx, tx_q->dma_tx_phy); - else - dma_free_coherent(priv->device, DMA_TX_SIZE * - sizeof(struct dma_extended_desc), - tx_q->dma_etx, tx_q->dma_tx_phy); + if (priv->extend_desc) { + size = sizeof(struct dma_extended_desc); + addr = tx_q->dma_etx; + } else if (tx_q->tbs & STMMAC_TBS_AVAIL) { + size = sizeof(struct dma_edesc); + addr = tx_q->dma_entx; + } else { + size = sizeof(struct dma_desc); + addr = tx_q->dma_tx; + } + + size *= DMA_TX_SIZE; + + dma_free_coherent(priv->device, size, addr, tx_q->dma_tx_phy); kfree(tx_q->tx_skbuff_dma); kfree(tx_q->tx_skbuff); @@ -1616,6 +1633,8 @@ static int alloc_dma_tx_desc_resources(struct stmmac_priv *priv) /* TX queues buffers and DMA */ for (queue = 0; queue < tx_count; queue++) { struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + size_t size; + void *addr; tx_q->queue_index = queue; tx_q->priv_data = priv; @@ -1632,28 +1651,32 @@ static int alloc_dma_tx_desc_resources(struct stmmac_priv *priv) if (!tx_q->tx_skbuff) goto err_dma; - if (priv->extend_desc) { - tx_q->dma_etx = dma_alloc_coherent(priv->device, - DMA_TX_SIZE * sizeof(struct dma_extended_desc), - &tx_q->dma_tx_phy, - GFP_KERNEL); - if (!tx_q->dma_etx) - goto err_dma; - } else { - tx_q->dma_tx = dma_alloc_coherent(priv->device, - DMA_TX_SIZE * sizeof(struct dma_desc), - &tx_q->dma_tx_phy, - GFP_KERNEL); - if (!tx_q->dma_tx) - goto err_dma; - } + if (priv->extend_desc) + size = sizeof(struct dma_extended_desc); + else if (tx_q->tbs & STMMAC_TBS_AVAIL) + size = sizeof(struct dma_edesc); + else + size = sizeof(struct dma_desc); + + size *= DMA_TX_SIZE; + + addr = dma_alloc_coherent(priv->device, size, + &tx_q->dma_tx_phy, GFP_KERNEL); + if (!addr) + goto err_dma; + + if (priv->extend_desc) + tx_q->dma_etx = addr; + else if (tx_q->tbs & STMMAC_TBS_AVAIL) + tx_q->dma_entx = addr; + else + tx_q->dma_tx = addr; } return 0; err_dma: free_dma_tx_desc_resources(priv); - return ret; } @@ -1885,6 +1908,8 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue) if (priv->extend_desc) p = (struct dma_desc *)(tx_q->dma_etx + entry); + else if (tx_q->tbs & STMMAC_TBS_AVAIL) + p = &tx_q->dma_entx[entry].basic; else p = tx_q->dma_tx + entry; @@ -1983,19 +2008,12 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue) static void stmmac_tx_err(struct stmmac_priv *priv, u32 chan) { struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan]; - int i; netif_tx_stop_queue(netdev_get_tx_queue(priv->dev, chan)); stmmac_stop_tx_dma(priv, chan); dma_free_tx_skbufs(priv, chan); - for (i = 0; i < DMA_TX_SIZE; i++) - if (priv->extend_desc) - stmmac_init_tx_desc(priv, &tx_q->dma_etx[i].basic, - priv->mode, (i == DMA_TX_SIZE - 1)); - else - stmmac_init_tx_desc(priv, &tx_q->dma_tx[i], - priv->mode, (i == DMA_TX_SIZE - 1)); + stmmac_clear_tx_descriptors(priv, chan); tx_q->dirty_tx = 0; tx_q->cur_tx = 0; tx_q->mss = 0; @@ -2632,6 +2650,14 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp) if (priv->dma_cap.vlins) 
stmmac_enable_vlan(priv, priv->hw, STMMAC_VLAN_INSERT); + /* TBS */ + for (chan = 0; chan < tx_cnt; chan++) { + struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan]; + int enable = tx_q->tbs & STMMAC_TBS_AVAIL; + + stmmac_enable_tbs(priv, priv->ioaddr, enable, chan); + } + /* Start the ball rolling... */ stmmac_start_all_dma(priv); @@ -2689,6 +2715,16 @@ static int stmmac_open(struct net_device *dev) priv->rx_copybreak = STMMAC_RX_COPYBREAK; + /* Earlier check for TBS */ + for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++) { + struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan]; + int tbs_en = priv->plat->tx_queues_cfg[chan].tbs_en; + + tx_q->tbs |= tbs_en ? STMMAC_TBS_AVAIL : 0; + if (stmmac_enable_tbs(priv, priv->ioaddr, tbs_en, chan)) + tx_q->tbs &= ~STMMAC_TBS_AVAIL; + } + ret = alloc_dma_desc_resources(priv); if (ret < 0) { netdev_err(priv->dev, "%s: DMA descriptors allocation failed\n", @@ -2837,7 +2873,11 @@ static bool stmmac_vlan_insert(struct stmmac_priv *priv, struct sk_buff *skb, tag = skb_vlan_tag_get(skb); - p = tx_q->dma_tx + tx_q->cur_tx; + if (tx_q->tbs & STMMAC_TBS_AVAIL) + p = &tx_q->dma_entx[tx_q->cur_tx].basic; + else + p = &tx_q->dma_tx[tx_q->cur_tx]; + if (stmmac_set_desc_vlan_tag(priv, p, tag, inner_tag, inner_type)) return false; @@ -2872,7 +2912,11 @@ static void stmmac_tso_allocator(struct stmmac_priv *priv, dma_addr_t des, tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, DMA_TX_SIZE); WARN_ON(tx_q->tx_skbuff[tx_q->cur_tx]); - desc = tx_q->dma_tx + tx_q->cur_tx; + + if (tx_q->tbs & STMMAC_TBS_AVAIL) + desc = &tx_q->dma_entx[tx_q->cur_tx].basic; + else + desc = &tx_q->dma_tx[tx_q->cur_tx]; curr_addr = des + (total_len - tmp_len); if (priv->dma_cap.addr64 <= 32) @@ -2923,13 +2967,13 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev) { struct dma_desc *desc, *first, *mss_desc = NULL; struct stmmac_priv *priv = netdev_priv(dev); + int desc_size, tmp_pay_len = 0, first_tx; int nfrags = skb_shinfo(skb)->nr_frags; u32 queue = skb_get_queue_mapping(skb); unsigned int first_entry, tx_packets; - int tmp_pay_len = 0, first_tx; struct stmmac_tx_queue *tx_q; - u8 proto_hdr_len, hdr; bool has_vlan, set_ic; + u8 proto_hdr_len, hdr; u32 pay_len, mss; dma_addr_t des; int i; @@ -2966,7 +3010,11 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev) /* set new MSS value if needed */ if (mss != tx_q->mss) { - mss_desc = tx_q->dma_tx + tx_q->cur_tx; + if (tx_q->tbs & STMMAC_TBS_AVAIL) + mss_desc = &tx_q->dma_entx[tx_q->cur_tx].basic; + else + mss_desc = &tx_q->dma_tx[tx_q->cur_tx]; + stmmac_set_mss(priv, mss_desc, mss); tx_q->mss = mss; tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, DMA_TX_SIZE); @@ -2986,7 +3034,10 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev) first_entry = tx_q->cur_tx; WARN_ON(tx_q->tx_skbuff[first_entry]); - desc = tx_q->dma_tx + first_entry; + if (tx_q->tbs & STMMAC_TBS_AVAIL) + desc = &tx_q->dma_entx[first_entry].basic; + else + desc = &tx_q->dma_tx[first_entry]; first = desc; if (has_vlan) @@ -3058,7 +3109,11 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev) set_ic = false; if (set_ic) { - desc = &tx_q->dma_tx[tx_q->cur_tx]; + if (tx_q->tbs & STMMAC_TBS_AVAIL) + desc = &tx_q->dma_entx[tx_q->cur_tx].basic; + else + desc = &tx_q->dma_tx[tx_q->cur_tx]; + tx_q->tx_count_frames = 0; stmmac_set_tx_ic(priv, desc); priv->xstats.tx_set_ic_bit++; @@ -3121,16 +3176,18 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct 
net_device *dev) pr_info("%s: curr=%d dirty=%d f=%d, e=%d, f_p=%p, nfrags %d\n", __func__, tx_q->cur_tx, tx_q->dirty_tx, first_entry, tx_q->cur_tx, first, nfrags); - - stmmac_display_ring(priv, (void *)tx_q->dma_tx, DMA_TX_SIZE, 0); - pr_info(">>> frame to be transmitted: "); print_pkt(skb->data, skb_headlen(skb)); } netdev_tx_sent_queue(netdev_get_tx_queue(dev, queue), skb->len); - tx_q->tx_tail_addr = tx_q->dma_tx_phy + (tx_q->cur_tx * sizeof(*desc)); + if (tx_q->tbs & STMMAC_TBS_AVAIL) + desc_size = sizeof(struct dma_edesc); + else + desc_size = sizeof(struct dma_desc); + + tx_q->tx_tail_addr = tx_q->dma_tx_phy + (tx_q->cur_tx * desc_size); stmmac_set_tx_tail_ptr(priv, priv->ioaddr, tx_q->tx_tail_addr, queue); stmmac_tx_timer_arm(priv, queue); @@ -3160,10 +3217,11 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) u32 queue = skb_get_queue_mapping(skb); int nfrags = skb_shinfo(skb)->nr_frags; int gso = skb_shinfo(skb)->gso_type; + struct dma_edesc *tbs_desc = NULL; + int entry, desc_size, first_tx; struct dma_desc *desc, *first; struct stmmac_tx_queue *tx_q; bool has_vlan, set_ic; - int entry, first_tx; dma_addr_t des; tx_q = &priv->tx_queue[queue]; @@ -3203,6 +3261,8 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) if (likely(priv->extend_desc)) desc = (struct dma_desc *)(tx_q->dma_etx + entry); + else if (tx_q->tbs & STMMAC_TBS_AVAIL) + desc = &tx_q->dma_entx[entry].basic; else desc = tx_q->dma_tx + entry; @@ -3232,6 +3292,8 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) if (likely(priv->extend_desc)) desc = (struct dma_desc *)(tx_q->dma_etx + entry); + else if (tx_q->tbs & STMMAC_TBS_AVAIL) + desc = &tx_q->dma_entx[entry].basic; else desc = tx_q->dma_tx + entry; @@ -3278,6 +3340,8 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) if (set_ic) { if (likely(priv->extend_desc)) desc = &tx_q->dma_etx[entry].basic; + else if (tx_q->tbs & STMMAC_TBS_AVAIL) + desc = &tx_q->dma_entx[entry].basic; else desc = &tx_q->dma_tx[entry]; @@ -3295,20 +3359,11 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) tx_q->cur_tx = entry; if (netif_msg_pktdata(priv)) { - void *tx_head; - netdev_dbg(priv->dev, "%s: curr=%d dirty=%d f=%d, e=%d, first=%p, nfrags=%d", __func__, tx_q->cur_tx, tx_q->dirty_tx, first_entry, entry, first, nfrags); - if (priv->extend_desc) - tx_head = (void *)tx_q->dma_etx; - else - tx_head = (void *)tx_q->dma_tx; - - stmmac_display_ring(priv, tx_head, DMA_TX_SIZE, false); - netdev_dbg(priv->dev, ">>> frame to be transmitted: "); print_pkt(skb->data, skb->len); } @@ -3354,12 +3409,19 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) /* Prepare the first descriptor setting the OWN bit too */ stmmac_prepare_tx_desc(priv, first, 1, nopaged_len, - csum_insertion, priv->mode, 1, last_segment, + csum_insertion, priv->mode, 0, last_segment, skb->len); - } else { - stmmac_set_tx_owner(priv, first); } + if (tx_q->tbs & STMMAC_TBS_EN) { + struct timespec64 ts = ns_to_timespec64(skb->tstamp); + + tbs_desc = &tx_q->dma_entx[first_entry]; + stmmac_set_desc_tbs(priv, tbs_desc, ts.tv_sec, ts.tv_nsec); + } + + stmmac_set_tx_owner(priv, first); + /* The own bit must be the latest setting done when prepare the * descriptor and then barrier is needed to make sure that * all is coherent before granting the DMA engine. 
@@ -3370,7 +3432,14 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) stmmac_enable_dma_transmission(priv, priv->ioaddr); - tx_q->tx_tail_addr = tx_q->dma_tx_phy + (tx_q->cur_tx * sizeof(*desc)); + if (likely(priv->extend_desc)) + desc_size = sizeof(struct dma_extended_desc); + else if (tx_q->tbs & STMMAC_TBS_AVAIL) + desc_size = sizeof(struct dma_edesc); + else + desc_size = sizeof(struct dma_desc); + + tx_q->tx_tail_addr = tx_q->dma_tx_phy + (tx_q->cur_tx * desc_size); stmmac_set_tx_tail_ptr(priv, priv->ioaddr, tx_q->tx_tail_addr, queue); stmmac_tx_timer_arm(priv, queue); @@ -4193,7 +4262,7 @@ static int stmmac_rings_status_show(struct seq_file *seq, void *v) seq_printf(seq, "Extended descriptor ring:\n"); sysfs_display_ring((void *)tx_q->dma_etx, DMA_TX_SIZE, 1, seq); - } else { + } else if (!(tx_q->tbs & STMMAC_TBS_AVAIL)) { seq_printf(seq, "Descriptor ring:\n"); sysfs_display_ring((void *)tx_q->dma_tx, DMA_TX_SIZE, 0, seq); diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h index 0531afa9b21e..19190c609282 100644 --- a/include/linux/stmmac.h +++ b/include/linux/stmmac.h @@ -139,6 +139,7 @@ struct stmmac_txq_cfg { u32 low_credit; bool use_prio; u32 prio; + int tbs_en; }; struct plat_stmmacenet_data { -- cgit v1.2.3-71-gd317 From e27f178793de16ca1b421f2c3f4bc3497b2ce723 Mon Sep 17 00:00:00 2001 From: Florian Fainelli Date: Sun, 12 Jan 2020 09:35:38 -0800 Subject: net: phy: Added IRQ print to phylink_bringup_phy() The information about the PHY attached to the PHYLINK instance is useful but is missing the IRQ prints that phy_attached_info() adds. phy_attached_info() is a bit long and it would not be possible to use phylink_info() anyway. Signed-off-by: Florian Fainelli Signed-off-by: David S. Miller --- drivers/net/phy/phy_device.c | 12 ++++++++++-- drivers/net/phy/phylink.c | 7 +++++-- include/linux/phy.h | 2 ++ 3 files changed, 17 insertions(+), 4 deletions(-) (limited to 'include') diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c index e5dc9f87f495..6a5056e0ae77 100644 --- a/drivers/net/phy/phy_device.c +++ b/drivers/net/phy/phy_device.c @@ -1107,9 +1107,8 @@ void phy_attached_info(struct phy_device *phydev) EXPORT_SYMBOL(phy_attached_info); #define ATTACHED_FMT "attached PHY driver [%s] (mii_bus:phy_addr=%s, irq=%s)" -void phy_attached_print(struct phy_device *phydev, const char *fmt, ...) +char *phy_attached_info_irq(struct phy_device *phydev) { - const char *drv_name = phydev->drv ? phydev->drv->name : "unbound"; char *irq_str; char irq_num[8]; @@ -1126,6 +1125,14 @@ void phy_attached_print(struct phy_device *phydev, const char *fmt, ...) break; } + return kasprintf(GFP_KERNEL, "%s", irq_str); +} +EXPORT_SYMBOL(phy_attached_info_irq); + +void phy_attached_print(struct phy_device *phydev, const char *fmt, ...) +{ + const char *drv_name = phydev->drv ? phydev->drv->name : "unbound"; + char *irq_str = phy_attached_info_irq(phydev); if (!fmt) { phydev_info(phydev, ATTACHED_FMT "\n", @@ -1142,6 +1149,7 @@ void phy_attached_print(struct phy_device *phydev, const char *fmt, ...) 
vprintk(fmt, ap); va_end(ap); } + kfree(irq_str); } EXPORT_SYMBOL(phy_attached_print); diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c index 8d786c99f97a..efabbfa4a6d3 100644 --- a/drivers/net/phy/phylink.c +++ b/drivers/net/phy/phylink.c @@ -726,6 +726,7 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy, { struct phylink_link_state config; __ETHTOOL_DECLARE_LINK_MODE_MASK(supported); + char *irq_str; int ret; /* @@ -761,9 +762,11 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy, phy->phylink = pl; phy->phy_link_change = phylink_phy_change; + irq_str = phy_attached_info_irq(phy); phylink_info(pl, - "PHY [%s] driver [%s]\n", dev_name(&phy->mdio.dev), - phy->drv->name); + "PHY [%s] driver [%s] (irq=%s)\n", + dev_name(&phy->mdio.dev), phy->drv->name, irq_str); + kfree(irq_str); mutex_lock(&phy->lock); mutex_lock(&pl->state_mutex); diff --git a/include/linux/phy.h b/include/linux/phy.h index 5932bb8e9c35..3a70b756ac1a 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -1131,6 +1131,8 @@ static inline void phy_unlock_mdio_bus(struct phy_device *phydev) void phy_attached_print(struct phy_device *phydev, const char *fmt, ...) __printf(2, 3); +char *phy_attached_info_irq(struct phy_device *phydev) + __malloc; void phy_attached_info(struct phy_device *phydev); /* Clause 22 PHY */ -- cgit v1.2.3-71-gd317 From c0e4eadfb8daf2e9557c7450f9b237c08b404419 Mon Sep 17 00:00:00 2001 From: Antoine Tenart Date: Mon, 13 Jan 2020 23:31:39 +0100 Subject: net: macsec: move some definitions in a dedicated header This patch moves some structure, type and identifier definitions into a MACsec specific header. This patch does not modify how the MACsec code is running and only move things around. This is a preparation for the future MACsec hardware offloading support, which will re-use those definitions outside macsec.c. Signed-off-by: Antoine Tenart Signed-off-by: David S. 
Miller --- drivers/net/macsec.c | 164 +---------------------------------------------- include/net/macsec.h | 177 +++++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 178 insertions(+), 163 deletions(-) create mode 100644 include/net/macsec.h (limited to 'include') diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c index afd8b2a08245..a336eee018f0 100644 --- a/drivers/net/macsec.c +++ b/drivers/net/macsec.c @@ -16,11 +16,10 @@ #include #include #include +#include #include -typedef u64 __bitwise sci_t; - #define MACSEC_SCI_LEN 8 /* SecTAG length = macsec_eth_header without the optional SCI */ @@ -58,8 +57,6 @@ struct macsec_eth_header { #define GCM_AES_IV_LEN 12 #define DEFAULT_ICV_LEN 16 -#define MACSEC_NUM_AN 4 /* 2 bits for the association number */ - #define for_each_rxsc(secy, sc) \ for (sc = rcu_dereference_bh(secy->rx_sc); \ sc; \ @@ -77,49 +74,6 @@ struct gcm_iv { __be32 pn; }; -/** - * struct macsec_key - SA key - * @id: user-provided key identifier - * @tfm: crypto struct, key storage - */ -struct macsec_key { - u8 id[MACSEC_KEYID_LEN]; - struct crypto_aead *tfm; -}; - -struct macsec_rx_sc_stats { - __u64 InOctetsValidated; - __u64 InOctetsDecrypted; - __u64 InPktsUnchecked; - __u64 InPktsDelayed; - __u64 InPktsOK; - __u64 InPktsInvalid; - __u64 InPktsLate; - __u64 InPktsNotValid; - __u64 InPktsNotUsingSA; - __u64 InPktsUnusedSA; -}; - -struct macsec_rx_sa_stats { - __u32 InPktsOK; - __u32 InPktsInvalid; - __u32 InPktsNotValid; - __u32 InPktsNotUsingSA; - __u32 InPktsUnusedSA; -}; - -struct macsec_tx_sa_stats { - __u32 OutPktsProtected; - __u32 OutPktsEncrypted; -}; - -struct macsec_tx_sc_stats { - __u64 OutPktsProtected; - __u64 OutPktsEncrypted; - __u64 OutOctetsProtected; - __u64 OutOctetsEncrypted; -}; - struct macsec_dev_stats { __u64 OutPktsUntagged; __u64 InPktsUntagged; @@ -131,124 +85,8 @@ struct macsec_dev_stats { __u64 InPktsOverrun; }; -/** - * struct macsec_rx_sa - receive secure association - * @active: - * @next_pn: packet number expected for the next packet - * @lock: protects next_pn manipulations - * @key: key structure - * @stats: per-SA stats - */ -struct macsec_rx_sa { - struct macsec_key key; - spinlock_t lock; - u32 next_pn; - refcount_t refcnt; - bool active; - struct macsec_rx_sa_stats __percpu *stats; - struct macsec_rx_sc *sc; - struct rcu_head rcu; -}; - -struct pcpu_rx_sc_stats { - struct macsec_rx_sc_stats stats; - struct u64_stats_sync syncp; -}; - -/** - * struct macsec_rx_sc - receive secure channel - * @sci: secure channel identifier for this SC - * @active: channel is active - * @sa: array of secure associations - * @stats: per-SC stats - */ -struct macsec_rx_sc { - struct macsec_rx_sc __rcu *next; - sci_t sci; - bool active; - struct macsec_rx_sa __rcu *sa[MACSEC_NUM_AN]; - struct pcpu_rx_sc_stats __percpu *stats; - refcount_t refcnt; - struct rcu_head rcu_head; -}; - -/** - * struct macsec_tx_sa - transmit secure association - * @active: - * @next_pn: packet number to use for the next packet - * @lock: protects next_pn manipulations - * @key: key structure - * @stats: per-SA stats - */ -struct macsec_tx_sa { - struct macsec_key key; - spinlock_t lock; - u32 next_pn; - refcount_t refcnt; - bool active; - struct macsec_tx_sa_stats __percpu *stats; - struct rcu_head rcu; -}; - -struct pcpu_tx_sc_stats { - struct macsec_tx_sc_stats stats; - struct u64_stats_sync syncp; -}; - -/** - * struct macsec_tx_sc - transmit secure channel - * @active: - * @encoding_sa: association number of the SA currently in use - * @encrypt: encrypt 
packets on transmit, or authenticate only - * @send_sci: always include the SCI in the SecTAG - * @end_station: - * @scb: single copy broadcast flag - * @sa: array of secure associations - * @stats: stats for this TXSC - */ -struct macsec_tx_sc { - bool active; - u8 encoding_sa; - bool encrypt; - bool send_sci; - bool end_station; - bool scb; - struct macsec_tx_sa __rcu *sa[MACSEC_NUM_AN]; - struct pcpu_tx_sc_stats __percpu *stats; -}; - #define MACSEC_VALIDATE_DEFAULT MACSEC_VALIDATE_STRICT -/** - * struct macsec_secy - MACsec Security Entity - * @netdev: netdevice for this SecY - * @n_rx_sc: number of receive secure channels configured on this SecY - * @sci: secure channel identifier used for tx - * @key_len: length of keys used by the cipher suite - * @icv_len: length of ICV used by the cipher suite - * @validate_frames: validation mode - * @operational: MAC_Operational flag - * @protect_frames: enable protection for this SecY - * @replay_protect: enable packet number checks on receive - * @replay_window: size of the replay window - * @tx_sc: transmit secure channel - * @rx_sc: linked list of receive secure channels - */ -struct macsec_secy { - struct net_device *netdev; - unsigned int n_rx_sc; - sci_t sci; - u16 key_len; - u16 icv_len; - enum macsec_validation_type validate_frames; - bool operational; - bool protect_frames; - bool replay_protect; - u32 replay_window; - struct macsec_tx_sc tx_sc; - struct macsec_rx_sc __rcu *rx_sc; -}; - struct pcpu_secy_stats { struct macsec_dev_stats stats; struct u64_stats_sync syncp; diff --git a/include/net/macsec.h b/include/net/macsec.h new file mode 100644 index 000000000000..e7b41c1043f6 --- /dev/null +++ b/include/net/macsec.h @@ -0,0 +1,177 @@ +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * MACsec netdev header, used for h/w accelerated implementations. 
+ * + * Copyright (c) 2015 Sabrina Dubroca + */ +#ifndef _NET_MACSEC_H_ +#define _NET_MACSEC_H_ + +#include +#include +#include + +typedef u64 __bitwise sci_t; + +#define MACSEC_NUM_AN 4 /* 2 bits for the association number */ + +/** + * struct macsec_key - SA key + * @id: user-provided key identifier + * @tfm: crypto struct, key storage + */ +struct macsec_key { + u8 id[MACSEC_KEYID_LEN]; + struct crypto_aead *tfm; +}; + +struct macsec_rx_sc_stats { + __u64 InOctetsValidated; + __u64 InOctetsDecrypted; + __u64 InPktsUnchecked; + __u64 InPktsDelayed; + __u64 InPktsOK; + __u64 InPktsInvalid; + __u64 InPktsLate; + __u64 InPktsNotValid; + __u64 InPktsNotUsingSA; + __u64 InPktsUnusedSA; +}; + +struct macsec_rx_sa_stats { + __u32 InPktsOK; + __u32 InPktsInvalid; + __u32 InPktsNotValid; + __u32 InPktsNotUsingSA; + __u32 InPktsUnusedSA; +}; + +struct macsec_tx_sa_stats { + __u32 OutPktsProtected; + __u32 OutPktsEncrypted; +}; + +struct macsec_tx_sc_stats { + __u64 OutPktsProtected; + __u64 OutPktsEncrypted; + __u64 OutOctetsProtected; + __u64 OutOctetsEncrypted; +}; + +/** + * struct macsec_rx_sa - receive secure association + * @active: + * @next_pn: packet number expected for the next packet + * @lock: protects next_pn manipulations + * @key: key structure + * @stats: per-SA stats + */ +struct macsec_rx_sa { + struct macsec_key key; + spinlock_t lock; + u32 next_pn; + refcount_t refcnt; + bool active; + struct macsec_rx_sa_stats __percpu *stats; + struct macsec_rx_sc *sc; + struct rcu_head rcu; +}; + +struct pcpu_rx_sc_stats { + struct macsec_rx_sc_stats stats; + struct u64_stats_sync syncp; +}; + +struct pcpu_tx_sc_stats { + struct macsec_tx_sc_stats stats; + struct u64_stats_sync syncp; +}; + +/** + * struct macsec_rx_sc - receive secure channel + * @sci: secure channel identifier for this SC + * @active: channel is active + * @sa: array of secure associations + * @stats: per-SC stats + */ +struct macsec_rx_sc { + struct macsec_rx_sc __rcu *next; + sci_t sci; + bool active; + struct macsec_rx_sa __rcu *sa[MACSEC_NUM_AN]; + struct pcpu_rx_sc_stats __percpu *stats; + refcount_t refcnt; + struct rcu_head rcu_head; +}; + +/** + * struct macsec_tx_sa - transmit secure association + * @active: + * @next_pn: packet number to use for the next packet + * @lock: protects next_pn manipulations + * @key: key structure + * @stats: per-SA stats + */ +struct macsec_tx_sa { + struct macsec_key key; + spinlock_t lock; + u32 next_pn; + refcount_t refcnt; + bool active; + struct macsec_tx_sa_stats __percpu *stats; + struct rcu_head rcu; +}; + +/** + * struct macsec_tx_sc - transmit secure channel + * @active: + * @encoding_sa: association number of the SA currently in use + * @encrypt: encrypt packets on transmit, or authenticate only + * @send_sci: always include the SCI in the SecTAG + * @end_station: + * @scb: single copy broadcast flag + * @sa: array of secure associations + * @stats: stats for this TXSC + */ +struct macsec_tx_sc { + bool active; + u8 encoding_sa; + bool encrypt; + bool send_sci; + bool end_station; + bool scb; + struct macsec_tx_sa __rcu *sa[MACSEC_NUM_AN]; + struct pcpu_tx_sc_stats __percpu *stats; +}; + +/** + * struct macsec_secy - MACsec Security Entity + * @netdev: netdevice for this SecY + * @n_rx_sc: number of receive secure channels configured on this SecY + * @sci: secure channel identifier used for tx + * @key_len: length of keys used by the cipher suite + * @icv_len: length of ICV used by the cipher suite + * @validate_frames: validation mode + * @operational: MAC_Operational 
flag + * @protect_frames: enable protection for this SecY + * @replay_protect: enable packet number checks on receive + * @replay_window: size of the replay window + * @tx_sc: transmit secure channel + * @rx_sc: linked list of receive secure channels + */ +struct macsec_secy { + struct net_device *netdev; + unsigned int n_rx_sc; + sci_t sci; + u16 key_len; + u16 icv_len; + enum macsec_validation_type validate_frames; + bool operational; + bool protect_frames; + bool replay_protect; + u32 replay_window; + struct macsec_tx_sc tx_sc; + struct macsec_rx_sc __rcu *rx_sc; +}; + +#endif /* _NET_MACSEC_H_ */ -- cgit v1.2.3-71-gd317 From 76564261a7db80c5f5c624e0122a28787f266bdf Mon Sep 17 00:00:00 2001 From: Antoine Tenart Date: Mon, 13 Jan 2020 23:31:40 +0100 Subject: net: macsec: introduce the macsec_context structure This patch introduces the macsec_context structure. It will be used in the kernel to exchange information between the common MACsec implementation (macsec.c) and the MACsec hardware offloading implementations. This structure contains pointers to MACsec specific structures which contain the actual MACsec configuration, and to the underlying device (phydev for now). Signed-off-by: Antoine Tenart Signed-off-by: David S. Miller --- include/linux/phy.h | 2 ++ include/net/macsec.h | 21 +++++++++++++++++++++ include/uapi/linux/if_link.h | 7 +++++++ tools/include/uapi/linux/if_link.h | 7 +++++++ 4 files changed, 37 insertions(+) (limited to 'include') diff --git a/include/linux/phy.h b/include/linux/phy.h index 3a70b756ac1a..be079a7bb40a 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -332,6 +332,8 @@ struct phy_c45_device_ids { u32 device_ids[8]; }; +struct macsec_context; + /* phy_device: An instance of a PHY * * drv: Pointer to the driver for this PHY instance diff --git a/include/net/macsec.h b/include/net/macsec.h index e7b41c1043f6..0b98803f92ec 100644 --- a/include/net/macsec.h +++ b/include/net/macsec.h @@ -174,4 +174,25 @@ struct macsec_secy { struct macsec_rx_sc __rcu *rx_sc; }; +/** + * struct macsec_context - MACsec context for hardware offloading + */ +struct macsec_context { + struct phy_device *phydev; + enum macsec_offload offload; + + struct macsec_secy *secy; + struct macsec_rx_sc *rx_sc; + struct { + unsigned char assoc_num; + u8 key[MACSEC_KEYID_LEN]; + union { + struct macsec_rx_sa *rx_sa; + struct macsec_tx_sa *tx_sa; + }; + } sa; + + u8 prepare:1; +}; + #endif /* _NET_MACSEC_H_ */ diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h index 1d69f637c5d6..024af2d1d0af 100644 --- a/include/uapi/linux/if_link.h +++ b/include/uapi/linux/if_link.h @@ -486,6 +486,13 @@ enum macsec_validation_type { MACSEC_VALIDATE_MAX = __MACSEC_VALIDATE_END - 1, }; +enum macsec_offload { + MACSEC_OFFLOAD_OFF = 0, + MACSEC_OFFLOAD_PHY = 1, + __MACSEC_OFFLOAD_END, + MACSEC_OFFLOAD_MAX = __MACSEC_OFFLOAD_END - 1, +}; + /* IPVLAN section */ enum { IFLA_IPVLAN_UNSPEC, diff --git a/tools/include/uapi/linux/if_link.h b/tools/include/uapi/linux/if_link.h index 8aec8769d944..42efdb84d189 100644 --- a/tools/include/uapi/linux/if_link.h +++ b/tools/include/uapi/linux/if_link.h @@ -485,6 +485,13 @@ enum macsec_validation_type { MACSEC_VALIDATE_MAX = __MACSEC_VALIDATE_END - 1, }; +enum macsec_offload { + MACSEC_OFFLOAD_OFF = 0, + MACSEC_OFFLOAD_PHY = 1, + __MACSEC_OFFLOAD_END, + MACSEC_OFFLOAD_MAX = __MACSEC_OFFLOAD_END - 1, +}; + /* IPVLAN section */ enum { IFLA_IPVLAN_UNSPEC, -- cgit v1.2.3-71-gd317 From 0830e20b62ad156f7df5ff5b9c4cea280ebe8fef Mon Sep 17 00:00:00 2001 
From: Antoine Tenart Date: Mon, 13 Jan 2020 23:31:41 +0100 Subject: net: macsec: introduce MACsec ops This patch introduces MACsec ops for drivers to support offloading MACsec operations. Signed-off-by: Antoine Tenart Signed-off-by: David S. Miller --- include/net/macsec.h | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) (limited to 'include') diff --git a/include/net/macsec.h b/include/net/macsec.h index 0b98803f92ec..16e7e5061178 100644 --- a/include/net/macsec.h +++ b/include/net/macsec.h @@ -195,4 +195,28 @@ struct macsec_context { u8 prepare:1; }; +/** + * struct macsec_ops - MACsec offloading operations + */ +struct macsec_ops { + /* Device wide */ + int (*mdo_dev_open)(struct macsec_context *ctx); + int (*mdo_dev_stop)(struct macsec_context *ctx); + /* SecY */ + int (*mdo_add_secy)(struct macsec_context *ctx); + int (*mdo_upd_secy)(struct macsec_context *ctx); + int (*mdo_del_secy)(struct macsec_context *ctx); + /* Security channels */ + int (*mdo_add_rxsc)(struct macsec_context *ctx); + int (*mdo_upd_rxsc)(struct macsec_context *ctx); + int (*mdo_del_rxsc)(struct macsec_context *ctx); + /* Security associations */ + int (*mdo_add_rxsa)(struct macsec_context *ctx); + int (*mdo_upd_rxsa)(struct macsec_context *ctx); + int (*mdo_del_rxsa)(struct macsec_context *ctx); + int (*mdo_add_txsa)(struct macsec_context *ctx); + int (*mdo_upd_txsa)(struct macsec_context *ctx); + int (*mdo_del_txsa)(struct macsec_context *ctx); +}; + #endif /* _NET_MACSEC_H_ */ -- cgit v1.2.3-71-gd317 From 2e18135845b359f26c37df38ba56565496517c10 Mon Sep 17 00:00:00 2001 From: Antoine Tenart Date: Mon, 13 Jan 2020 23:31:42 +0100 Subject: net: phy: add MACsec ops in phy_device This patch adds a reference to MACsec ops in the phy_device, to allow PHYs to support offloading MACsec operations. The phydev lock will be held while calling those helpers. Signed-off-by: Antoine Tenart Signed-off-by: David S. Miller --- include/linux/phy.h | 7 +++++++ 1 file changed, 7 insertions(+) (limited to 'include') diff --git a/include/linux/phy.h b/include/linux/phy.h index be079a7bb40a..2929d0bc307f 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -333,6 +333,7 @@ struct phy_c45_device_ids { }; struct macsec_context; +struct macsec_ops; /* phy_device: An instance of a PHY * @@ -356,6 +357,7 @@ struct macsec_context; * attached_dev: The attached enet driver's device instance ptr * adjust_link: Callback for the enet controller to respond to * changes in the link state. + * macsec_ops: MACsec offloading ops. * * speed, duplex, pause, supported, advertising, lp_advertising, * and autoneg are used like in mii_if_info @@ -455,6 +457,11 @@ struct phy_device { void (*phy_link_change)(struct phy_device *, bool up, bool do_carrier); void (*adjust_link)(struct net_device *dev); + +#if IS_ENABLED(CONFIG_MACSEC) + /* MACsec management functions */ + const struct macsec_ops *macsec_ops; +#endif }; #define to_phy_device(d) container_of(to_mdio_device(d), \ struct phy_device, mdio) -- cgit v1.2.3-71-gd317 From dcb780fb279514f268826f2e9f4df3bc75610703 Mon Sep 17 00:00:00 2001 From: Antoine Tenart Date: Mon, 13 Jan 2020 23:31:44 +0100 Subject: net: macsec: add nla support for changing the offloading selection MACsec offloading to underlying hardware devices is disabled by default (the software implementation is used). This patch adds support for changing this setting through the MACsec netlink interface. 
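[Editorial sketch: the macsec_context and macsec_ops patches above only add types and hooks; no driver code appears in the series. The following is an illustration of how a MACsec-capable PHY driver might plug into these hooks. The function names are invented, and the prepare/commit split is an assumption based on the ctx->prepare bit, not something the patches spell out.]

#include <linux/phy.h>
#include <net/macsec.h>

/* Hypothetical PHY driver: install a TX secure association in hardware.
 * The common macsec.c code fills ctx->secy and ctx->sa before calling.
 */
static int example_mdo_add_txsa(struct macsec_context *ctx)
{
	if (ctx->prepare)
		return 0;	/* validation-only pass, assumed semantics */

	/* commit pass: program ctx->sa.assoc_num and ctx->sa.tx_sa->key
	 * into the device here
	 */
	return 0;
}

static const struct macsec_ops example_macsec_ops = {
	.mdo_add_txsa = example_mdo_add_txsa,
	/* remaining mdo_* hooks omitted for brevity */
};

/* In the PHY driver's probe path, expose the ops to the macsec core;
 * the macsec_ops field only exists when CONFIG_MACSEC is enabled.
 */
static void example_macsec_setup(struct phy_device *phydev)
{
	phydev->macsec_ops = &example_macsec_ops;
}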
Many checks are done when enabling offloading on a given MACsec interface as there are limitations (it must be supported by the hardware, only a single interface can be offloaded on a given physical device at a time, rules can't be moved for now). Signed-off-by: Antoine Tenart Signed-off-by: David S. Miller --- drivers/net/macsec.c | 145 ++++++++++++++++++++++++++++++++++++++++- include/uapi/linux/if_macsec.h | 11 ++++ 2 files changed, 153 insertions(+), 3 deletions(-) (limited to 'include') diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c index 36b0416381bf..e515919e8687 100644 --- a/drivers/net/macsec.c +++ b/drivers/net/macsec.c @@ -1484,6 +1484,7 @@ static const struct nla_policy macsec_genl_policy[NUM_MACSEC_ATTR] = { [MACSEC_ATTR_IFINDEX] = { .type = NLA_U32 }, [MACSEC_ATTR_RXSC_CONFIG] = { .type = NLA_NESTED }, [MACSEC_ATTR_SA_CONFIG] = { .type = NLA_NESTED }, + [MACSEC_ATTR_OFFLOAD] = { .type = NLA_NESTED }, }; static const struct nla_policy macsec_genl_rxsc_policy[NUM_MACSEC_RXSC_ATTR] = { @@ -1501,6 +1502,10 @@ static const struct nla_policy macsec_genl_sa_policy[NUM_MACSEC_SA_ATTR] = { .len = MACSEC_MAX_KEY_LEN, }, }; +static const struct nla_policy macsec_genl_offload_policy[NUM_MACSEC_OFFLOAD_ATTR] = { + [MACSEC_OFFLOAD_ATTR_TYPE] = { .type = NLA_U8 }, +}; + /* Offloads an operation to a device driver */ static int macsec_offload(int (* const func)(struct macsec_context *), struct macsec_context *ctx) @@ -2329,6 +2334,126 @@ cleanup: return ret; } +static bool macsec_is_configured(struct macsec_dev *macsec) +{ + struct macsec_secy *secy = &macsec->secy; + struct macsec_tx_sc *tx_sc = &secy->tx_sc; + int i; + + if (secy->n_rx_sc > 0) + return true; + + for (i = 0; i < MACSEC_NUM_AN; i++) + if (tx_sc->sa[i]) + return true; + + return false; +} + +static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info) +{ + struct nlattr *tb_offload[MACSEC_OFFLOAD_ATTR_MAX + 1]; + enum macsec_offload offload, prev_offload; + int (*func)(struct macsec_context *ctx); + struct nlattr **attrs = info->attrs; + struct net_device *dev, *loop_dev; + const struct macsec_ops *ops; + struct macsec_context ctx; + struct macsec_dev *macsec; + struct net *loop_net; + int ret; + + if (!attrs[MACSEC_ATTR_IFINDEX]) + return -EINVAL; + + if (!attrs[MACSEC_ATTR_OFFLOAD]) + return -EINVAL; + + if (nla_parse_nested_deprecated(tb_offload, MACSEC_OFFLOAD_ATTR_MAX, + attrs[MACSEC_ATTR_OFFLOAD], + macsec_genl_offload_policy, NULL)) + return -EINVAL; + + dev = get_dev_from_nl(genl_info_net(info), attrs); + if (IS_ERR(dev)) + return PTR_ERR(dev); + macsec = macsec_priv(dev); + + offload = nla_get_u8(tb_offload[MACSEC_OFFLOAD_ATTR_TYPE]); + if (macsec->offload == offload) + return 0; + + /* Check if the offloading mode is supported by the underlying layers */ + if (offload != MACSEC_OFFLOAD_OFF && + !macsec_check_offload(offload, macsec)) + return -EOPNOTSUPP; + + if (offload == MACSEC_OFFLOAD_OFF) + goto skip_limitation; + + /* Check the physical interface isn't offloading another interface + * first. + */ + for_each_net(loop_net) { + for_each_netdev(loop_net, loop_dev) { + struct macsec_dev *priv; + + if (!netif_is_macsec(loop_dev)) + continue; + + priv = macsec_priv(loop_dev); + + if (priv->real_dev == macsec->real_dev && + priv->offload != MACSEC_OFFLOAD_OFF) + return -EBUSY; + } + } + +skip_limitation: + /* Check if the net device is busy. 
*/ + if (netif_running(dev)) + return -EBUSY; + + rtnl_lock(); + + prev_offload = macsec->offload; + macsec->offload = offload; + + /* Check if the device already has rules configured: we do not support + * rules migration. + */ + if (macsec_is_configured(macsec)) { + ret = -EBUSY; + goto rollback; + } + + ops = __macsec_get_ops(offload == MACSEC_OFFLOAD_OFF ? prev_offload : offload, + macsec, &ctx); + if (!ops) { + ret = -EOPNOTSUPP; + goto rollback; + } + + if (prev_offload == MACSEC_OFFLOAD_OFF) + func = ops->mdo_add_secy; + else + func = ops->mdo_del_secy; + + ctx.secy = &macsec->secy; + ret = macsec_offload(func, &ctx); + if (ret) + goto rollback; + + rtnl_unlock(); + return 0; + +rollback: + macsec->offload = prev_offload; + + rtnl_unlock(); + return ret; +} + static int copy_tx_sa_stats(struct sk_buff *skb, struct macsec_tx_sa_stats __percpu *pstats) { @@ -2590,12 +2715,13 @@ static noinline_for_stack int dump_secy(struct macsec_secy *secy, struct net_device *dev, struct sk_buff *skb, struct netlink_callback *cb) { - struct macsec_rx_sc *rx_sc; + struct macsec_dev *macsec = netdev_priv(dev); struct macsec_tx_sc *tx_sc = &secy->tx_sc; struct nlattr *txsa_list, *rxsc_list; - int i, j; - void *hdr; + struct macsec_rx_sc *rx_sc; struct nlattr *attr; + void *hdr; + int i, j; hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, &macsec_fam, NLM_F_MULTI, MACSEC_CMD_GET_TXSC); @@ -2607,6 +2733,13 @@ dump_secy(struct macsec_secy *secy, struct net_device *dev, if (nla_put_u32(skb, MACSEC_ATTR_IFINDEX, dev->ifindex)) goto nla_put_failure; + attr = nla_nest_start_noflag(skb, MACSEC_ATTR_OFFLOAD); + if (!attr) + goto nla_put_failure; + if (nla_put_u8(skb, MACSEC_OFFLOAD_ATTR_TYPE, macsec->offload)) + goto nla_put_failure; + nla_nest_end(skb, attr); + if (nla_put_secy(secy, skb)) goto nla_put_failure; @@ -2872,6 +3005,12 @@ static const struct genl_ops macsec_genl_ops[] = { .doit = macsec_upd_rxsa, .flags = GENL_ADMIN_PERM, }, + { + .cmd = MACSEC_CMD_UPD_OFFLOAD, + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP, + .doit = macsec_upd_offload, + .flags = GENL_ADMIN_PERM, + }, }; static struct genl_family macsec_fam __ro_after_init = { diff --git a/include/uapi/linux/if_macsec.h b/include/uapi/linux/if_macsec.h index 98e4d5d7c45c..1d63c43c38cc 100644 --- a/include/uapi/linux/if_macsec.h +++ b/include/uapi/linux/if_macsec.h @@ -45,6 +45,7 @@ enum macsec_attrs { MACSEC_ATTR_RXSC_LIST, /* dump, nested, macsec_rxsc_attrs for each RXSC */ MACSEC_ATTR_TXSC_STATS, /* dump, nested, macsec_txsc_stats_attr */ MACSEC_ATTR_SECY_STATS, /* dump, nested, macsec_secy_stats_attr */ + MACSEC_ATTR_OFFLOAD, /* config, nested, macsec_offload_attrs */ __MACSEC_ATTR_END, NUM_MACSEC_ATTR = __MACSEC_ATTR_END, MACSEC_ATTR_MAX = __MACSEC_ATTR_END - 1, @@ -97,6 +98,15 @@ enum macsec_sa_attrs { MACSEC_SA_ATTR_MAX = __MACSEC_SA_ATTR_END - 1, }; +enum macsec_offload_attrs { + MACSEC_OFFLOAD_ATTR_UNSPEC, + MACSEC_OFFLOAD_ATTR_TYPE, /* config/dump, u8 0..2 */ + MACSEC_OFFLOAD_ATTR_PAD, + __MACSEC_OFFLOAD_ATTR_END, + NUM_MACSEC_OFFLOAD_ATTR = __MACSEC_OFFLOAD_ATTR_END, + MACSEC_OFFLOAD_ATTR_MAX = __MACSEC_OFFLOAD_ATTR_END - 1, +}; + enum macsec_nl_commands { MACSEC_CMD_GET_TXSC, MACSEC_CMD_ADD_RXSC, @@ -108,6 +118,7 @@ enum macsec_nl_commands { MACSEC_CMD_ADD_RXSA, MACSEC_CMD_DEL_RXSA, MACSEC_CMD_UPD_RXSA, + MACSEC_CMD_UPD_OFFLOAD, }; /* u64 per-RXSC stats */ -- cgit v1.2.3-71-gd317 From 5c937de78b39e47ce9924fc4b863c5b727edc328 Mon Sep 17 00:00:00 2001 From: Antoine Tenart Date: Mon, 13 Jan 2020 
23:31:47 +0100 Subject: net: macsec: PN wrap callback Allow hardware drivers to call macsec_pn_wrapped to notify when a PN rolls over. Some drivers might use an interrupt to implement this. Signed-off-by: Antoine Tenart Signed-off-by: David S. Miller --- drivers/net/macsec.c | 25 +++++++++++++++++++------ include/net/macsec.h | 2 ++ 2 files changed, 21 insertions(+), 6 deletions(-) (limited to 'include') diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c index e515919e8687..45bfd99f17fa 100644 --- a/drivers/net/macsec.c +++ b/drivers/net/macsec.c @@ -424,6 +424,23 @@ static struct macsec_eth_header *macsec_ethhdr(struct sk_buff *skb) return (struct macsec_eth_header *)skb_mac_header(skb); } +static void __macsec_pn_wrapped(struct macsec_secy *secy, + struct macsec_tx_sa *tx_sa) +{ + pr_debug("PN wrapped, transitioning to !oper\n"); + tx_sa->active = false; + if (secy->protect_frames) + secy->operational = false; +} + +void macsec_pn_wrapped(struct macsec_secy *secy, struct macsec_tx_sa *tx_sa) +{ + spin_lock_bh(&tx_sa->lock); + __macsec_pn_wrapped(secy, tx_sa); + spin_unlock_bh(&tx_sa->lock); +} +EXPORT_SYMBOL_GPL(macsec_pn_wrapped); + static u32 tx_sa_update_pn(struct macsec_tx_sa *tx_sa, struct macsec_secy *secy) { u32 pn; @@ -432,12 +449,8 @@ static u32 tx_sa_update_pn(struct macsec_tx_sa *tx_sa, struct macsec_secy *secy) pn = tx_sa->next_pn; tx_sa->next_pn++; - if (tx_sa->next_pn == 0) { - pr_debug("PN wrapped, transitioning to !oper\n"); - tx_sa->active = false; - if (secy->protect_frames) - secy->operational = false; - } + if (tx_sa->next_pn == 0) + __macsec_pn_wrapped(secy, tx_sa); spin_unlock_bh(&tx_sa->lock); return pn; diff --git a/include/net/macsec.h b/include/net/macsec.h index 16e7e5061178..92e43db8b566 100644 --- a/include/net/macsec.h +++ b/include/net/macsec.h @@ -219,4 +219,6 @@ struct macsec_ops { int (*mdo_del_txsa)(struct macsec_context *ctx); }; +void macsec_pn_wrapped(struct macsec_secy *secy, struct macsec_tx_sa *tx_sa); + #endif /* _NET_MACSEC_H_ */ -- cgit v1.2.3-71-gd317 From 5eee7bd7e245914e4e050c413dfe864e31805207 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Mon, 13 Jan 2020 18:42:26 -0500 Subject: net: skbuff: disambiguate argument and member for skb_list_walk_safe helper This worked before, because we made all callers name their next pointer "next". But in trying to be more "drop-in" ready, the silliness here is revealed. This commit fixes the problem by making the macro argument and the member use different names. Signed-off-by: Jason A. Donenfeld Signed-off-by: David S. Miller --- include/linux/skbuff.h | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) (limited to 'include') diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 016b3c4ab99a..aaf73b34f72f 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -1479,9 +1479,9 @@ static inline void skb_mark_not_on_list(struct sk_buff *skb) } /* Iterate through singly-linked GSO fragments of an skb. */ -#define skb_list_walk_safe(first, skb, next) \ - for ((skb) = (first), (next) = (skb) ? (skb)->next : NULL; (skb); \ - (skb) = (next), (next) = (skb) ? (skb)->next : NULL) +#define skb_list_walk_safe(first, skb, next_skb) \ + for ((skb) = (first), (next_skb) = (skb) ? (skb)->next : NULL; (skb); \ + (skb) = (next_skb), (next_skb) = (skb) ? (skb)->next : NULL) static inline void skb_list_del_init(struct sk_buff *skb) { -- cgit v1.2.3-71-gd317
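[Editorial sketch: a minimal usage illustration of the renamed macro, not part of the patch. The helper below is hypothetical; it shows that the caller's cursor variable no longer has to be named "next" to avoid colliding with the skb member of that name.]

#include <linux/skbuff.h>

/* Free every segment of a singly-linked GSO fragment list. "tmp" caches
 * skb->next up front, so the current skb can be freed inside the body.
 */
static void example_free_segs(struct sk_buff *first)
{
	struct sk_buff *segs, *tmp;

	skb_list_walk_safe(first, segs, tmp)
		kfree_skb(segs);
}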
From 1e301fd04eaaa5b1e3c202450d86864e6714d783 Mon Sep 17 00:00:00 2001 From: Ido Schimmel Date: Tue, 14 Jan 2020 13:23:10 +0200 Subject: ipv4: Encapsulate function arguments in a struct fib_dump_info() is used to prepare RTM_{NEW,DEL}ROUTE netlink messages using the passed arguments. Currently, the function takes 11 arguments, 6 of which are attributes of the route being dumped (e.g., prefix, TOS). The next patch will need the function to also dump to user space an indication of whether the route is present in hardware or not. Instead of passing yet another argument, change the function to take a struct containing the different route attributes. v2: * Name last argument of fib_dump_info() * Move 'struct fib_rt_info' to include/net/ip_fib.h so that it could later be passed to fib_alias_hw_flags_set() Signed-off-by: Ido Schimmel Reviewed-by: David Ahern Reviewed-by: Jiri Pirko Signed-off-by: David S. Miller --- include/net/ip_fib.h | 9 +++++++++ net/ipv4/fib_lookup.h | 5 ++--- net/ipv4/fib_semantics.c | 26 ++++++++++++++++---------- net/ipv4/fib_trie.c | 14 +++++++++----- net/ipv4/route.c | 12 +++++++++--- 5 files changed, 45 insertions(+), 21 deletions(-) (limited to 'include') diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h index b9cba41c6d4f..0c071c820e33 100644 --- a/include/net/ip_fib.h +++ b/include/net/ip_fib.h @@ -204,6 +204,15 @@ __be32 fib_result_prefsrc(struct net *net, struct fib_result *res); #define FIB_RES_DEV(res) (FIB_RES_NHC(res)->nhc_dev) #define FIB_RES_OIF(res) (FIB_RES_NHC(res)->nhc_oif) +struct fib_rt_info { + struct fib_info *fi; + u32 tb_id; + __be32 dst; + int dst_len; + u8 tos; + u8 type; +}; + struct fib_entry_notifier_info { struct fib_notifier_info info; /* must be first */ u32 dst; diff --git a/net/ipv4/fib_lookup.h b/net/ipv4/fib_lookup.h index a68b5e21ec51..a4b829358bfa 100644 --- a/net/ipv4/fib_lookup.h +++ b/net/ipv4/fib_lookup.h @@ -35,9 +35,8 @@ struct fib_info *fib_create_info(struct fib_config *cfg, int fib_nh_match(struct fib_config *cfg, struct fib_info *fi, struct netlink_ext_ack *extack); bool fib_metrics_match(struct fib_config *cfg, struct fib_info *fi); -int fib_dump_info(struct sk_buff *skb, u32 pid, u32 seq, int event, u32 tb_id, - u8 type, __be32 dst, int dst_len, u8 tos, struct fib_info *fi, - unsigned int); +int fib_dump_info(struct sk_buff *skb, u32 pid, u32 seq, int event, + struct fib_rt_info *fri, unsigned int flags); void rtmsg_fib(int event, __be32 key, struct fib_alias *fa, int dst_len, u32 tb_id, const struct nl_info *info, unsigned int nlm_flags); diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c index f1888c683426..3ed1349be428 100644 --- a/net/ipv4/fib_semantics.c +++ b/net/ipv4/fib_semantics.c @@ -504,6 +504,7 @@ void rtmsg_fib(int event, __be32 key, struct fib_alias *fa, int dst_len, u32 tb_id, const struct nl_info *info, unsigned int nlm_flags) { + struct fib_rt_info fri; struct sk_buff *skb; u32 seq = info->nlh ?
info->nlh->nlmsg_seq : 0; int err = -ENOBUFS; @@ -512,9 +513,13 @@ void rtmsg_fib(int event, __be32 key, struct fib_alias *fa, if (!skb) goto errout; - err = fib_dump_info(skb, info->portid, seq, event, tb_id, - fa->fa_type, key, dst_len, - fa->fa_tos, fa->fa_info, nlm_flags); + fri.fi = fa->fa_info; + fri.tb_id = tb_id; + fri.dst = key; + fri.dst_len = dst_len; + fri.tos = fa->fa_tos; + fri.type = fa->fa_type; + err = fib_dump_info(skb, info->portid, seq, event, &fri, nlm_flags); if (err < 0) { /* -EMSGSIZE implies BUG in fib_nlmsg_size() */ WARN_ON(err == -EMSGSIZE); @@ -1725,10 +1730,11 @@ static int fib_add_multipath(struct sk_buff *skb, struct fib_info *fi) #endif int fib_dump_info(struct sk_buff *skb, u32 portid, u32 seq, int event, - u32 tb_id, u8 type, __be32 dst, int dst_len, u8 tos, - struct fib_info *fi, unsigned int flags) + struct fib_rt_info *fri, unsigned int flags) { - unsigned int nhs = fib_info_num_path(fi); + unsigned int nhs = fib_info_num_path(fri->fi); + struct fib_info *fi = fri->fi; + u32 tb_id = fri->tb_id; struct nlmsghdr *nlh; struct rtmsg *rtm; @@ -1738,22 +1744,22 @@ int fib_dump_info(struct sk_buff *skb, u32 portid, u32 seq, int event, rtm = nlmsg_data(nlh); rtm->rtm_family = AF_INET; - rtm->rtm_dst_len = dst_len; + rtm->rtm_dst_len = fri->dst_len; rtm->rtm_src_len = 0; - rtm->rtm_tos = tos; + rtm->rtm_tos = fri->tos; if (tb_id < 256) rtm->rtm_table = tb_id; else rtm->rtm_table = RT_TABLE_COMPAT; if (nla_put_u32(skb, RTA_TABLE, tb_id)) goto nla_put_failure; - rtm->rtm_type = type; + rtm->rtm_type = fri->type; rtm->rtm_flags = fi->fib_flags; rtm->rtm_scope = fi->fib_scope; rtm->rtm_protocol = fi->fib_protocol; if (rtm->rtm_dst_len && - nla_put_in_addr(skb, RTA_DST, dst)) + nla_put_in_addr(skb, RTA_DST, fri->dst)) goto nla_put_failure; if (fi->fib_priority && nla_put_u32(skb, RTA_PRIORITY, fi->fib_priority)) diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c index 39f56d68ec19..75af3f8ae50e 100644 --- a/net/ipv4/fib_trie.c +++ b/net/ipv4/fib_trie.c @@ -2194,14 +2194,18 @@ static int fn_trie_dump_leaf(struct key_vector *l, struct fib_table *tb, if (filter->dump_routes) { if (!s_fa) { + struct fib_rt_info fri; + + fri.fi = fi; + fri.tb_id = tb->tb_id; + fri.dst = xkey; + fri.dst_len = KEYLENGTH - fa->fa_slen; + fri.tos = fa->fa_tos; + fri.type = fa->fa_type; err = fib_dump_info(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, - RTM_NEWROUTE, - tb->tb_id, fa->fa_type, - xkey, - KEYLENGTH - fa->fa_slen, - fa->fa_tos, fi, flags); + RTM_NEWROUTE, &fri, flags); if (err < 0) goto stop; } diff --git a/net/ipv4/route.c b/net/ipv4/route.c index 87e979f2b74a..167a7357d12a 100644 --- a/net/ipv4/route.c +++ b/net/ipv4/route.c @@ -3223,16 +3223,22 @@ static int inet_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh, skb_reset_mac_header(skb); if (rtm->rtm_flags & RTM_F_FIB_MATCH) { + struct fib_rt_info fri; + if (!res.fi) { err = fib_props[res.type].error; if (!err) err = -EHOSTUNREACH; goto errout_rcu; } + fri.fi = res.fi; + fri.tb_id = table_id; + fri.dst = res.prefix; + fri.dst_len = res.prefixlen; + fri.tos = fl4.flowi4_tos; + fri.type = rt->rt_type; err = fib_dump_info(skb, NETLINK_CB(in_skb).portid, - nlh->nlmsg_seq, RTM_NEWROUTE, table_id, - rt->rt_type, res.prefix, res.prefixlen, - fl4.flowi4_tos, res.fi, 0); + nlh->nlmsg_seq, RTM_NEWROUTE, &fri, 0); } else { err = rt_fill_info(net, dst, src, rt, table_id, &fl4, skb, NETLINK_CB(in_skb).portid, -- cgit v1.2.3-71-gd317 From 90b93f1b31f86dcde2fa3c57f1ae33d28d87a1f8 Mon Sep 17 00:00:00 2001 From: Ido 
Schimmel Date: Tue, 14 Jan 2020 13:23:11 +0200 Subject: ipv4: Add "offload" and "trap" indications to routes When performing L3 offload, routes and nexthops are usually programmed into two different tables in the underlying device. Therefore, the fact that a nexthop resides in hardware does not necessarily mean that all the associated routes also reside in hardware and vice-versa. While the kernel can signal to user space the presence of a nexthop in hardware (via 'RTNH_F_OFFLOAD'), it does not have a corresponding flag for routes. In addition, the fact that a route resides in hardware does not necessarily mean that the traffic is offloaded. For example, unreachable routes (i.e., 'RTN_UNREACHABLE') are programmed to trap packets to the CPU so that the kernel will be able to generate the appropriate ICMP error packet. This patch adds "offload" and "trap" indications to IPv4 routes, so that users will have better visibility into the offload process. 'struct fib_alias' is extended with two new fields that indicate whether the route resides in hardware and whether it is offloading traffic from the kernel or trapping packets to it. Note that the new fields are added in the 6-byte hole and therefore the struct still fits in a single cache line [1]. Capable drivers are expected to invoke fib_alias_hw_flags_set() with the route's key in order to set the flags. The indications are dumped to user space via new flags (i.e., 'RTM_F_OFFLOAD' and 'RTM_F_TRAP') in the 'rtm_flags' field in the ancillary header. v2: * Make use of 'struct fib_rt_info' in fib_alias_hw_flags_set() [1] struct fib_alias { struct hlist_node fa_list; /* 0 16 */ struct fib_info * fa_info; /* 16 8 */ u8 fa_tos; /* 24 1 */ u8 fa_type; /* 25 1 */ u8 fa_state; /* 26 1 */ u8 fa_slen; /* 27 1 */ u32 tb_id; /* 28 4 */ s16 fa_default; /* 32 2 */ u8 offload:1; /* 34: 0 1 */ u8 trap:1; /* 34: 1 1 */ u8 unused:6; /* 34: 2 1 */ /* XXX 5 bytes hole, try to pack */ struct callback_head rcu __attribute__((__aligned__(8))); /* 40 16 */ /* size: 56, cachelines: 1, members: 12 */ /* sum members: 50, holes: 1, sum holes: 5 */ /* sum bitfield members: 8 bits (1 bytes) */ /* forced alignments: 1, forced holes: 1, sum forced holes: 5 */ /* last cacheline: 56 bytes */ } __attribute__((__aligned__(8))); Signed-off-by: Ido Schimmel Reviewed-by: David Ahern Reviewed-by: Jiri Pirko Signed-off-by: David S.
Miller --- include/net/ip_fib.h | 4 ++++ include/uapi/linux/rtnetlink.h | 2 ++ net/ipv4/fib_lookup.h | 3 +++ net/ipv4/fib_semantics.c | 7 ++++++ net/ipv4/fib_trie.c | 52 ++++++++++++++++++++++++++++++++++++++++++ net/ipv4/route.c | 19 +++++++++++++++ 6 files changed, 87 insertions(+) (limited to 'include') diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h index 0c071c820e33..6a1ae49809de 100644 --- a/include/net/ip_fib.h +++ b/include/net/ip_fib.h @@ -211,6 +211,9 @@ struct fib_rt_info { int dst_len; u8 tos; u8 type; + u8 offload:1, + trap:1, + unused:6; }; struct fib_entry_notifier_info { @@ -473,6 +476,7 @@ int fib_nh_common_init(struct fib_nh_common *nhc, struct nlattr *fc_encap, void fib_nh_common_release(struct fib_nh_common *nhc); /* Exported by fib_trie.c */ +void fib_alias_hw_flags_set(struct net *net, const struct fib_rt_info *fri); void fib_trie_init(void); struct fib_table *fib_trie_table(u32 id, struct fib_table *alias); diff --git a/include/uapi/linux/rtnetlink.h b/include/uapi/linux/rtnetlink.h index 1418a8362bb7..cd43321d20dd 100644 --- a/include/uapi/linux/rtnetlink.h +++ b/include/uapi/linux/rtnetlink.h @@ -309,6 +309,8 @@ enum rt_scope_t { #define RTM_F_PREFIX 0x800 /* Prefix addresses */ #define RTM_F_LOOKUP_TABLE 0x1000 /* set rtm_table to FIB lookup result */ #define RTM_F_FIB_MATCH 0x2000 /* return full fib lookup match */ +#define RTM_F_OFFLOAD 0x4000 /* route is offloaded */ +#define RTM_F_TRAP 0x8000 /* route is trapping packets */ /* Reserved table identifiers */ diff --git a/net/ipv4/fib_lookup.h b/net/ipv4/fib_lookup.h index a4b829358bfa..c092e9a55790 100644 --- a/net/ipv4/fib_lookup.h +++ b/net/ipv4/fib_lookup.h @@ -16,6 +16,9 @@ struct fib_alias { u8 fa_slen; u32 tb_id; s16 fa_default; + u8 offload:1, + trap:1, + unused:6; struct rcu_head rcu; }; diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c index 3ed1349be428..a803cdd9400a 100644 --- a/net/ipv4/fib_semantics.c +++ b/net/ipv4/fib_semantics.c @@ -519,6 +519,8 @@ void rtmsg_fib(int event, __be32 key, struct fib_alias *fa, fri.dst_len = dst_len; fri.tos = fa->fa_tos; fri.type = fa->fa_type; + fri.offload = fa->offload; + fri.trap = fa->trap; err = fib_dump_info(skb, info->portid, seq, event, &fri, nlm_flags); if (err < 0) { /* -EMSGSIZE implies BUG in fib_nlmsg_size() */ @@ -1801,6 +1803,11 @@ int fib_dump_info(struct sk_buff *skb, u32 portid, u32 seq, int event, goto nla_put_failure; } + if (fri->offload) + rtm->rtm_flags |= RTM_F_OFFLOAD; + if (fri->trap) + rtm->rtm_flags |= RTM_F_TRAP; + nlmsg_end(skb, nlh); return 0; diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c index 75af3f8ae50e..6ce1f2bbffd0 100644 --- a/net/ipv4/fib_trie.c +++ b/net/ipv4/fib_trie.c @@ -1012,6 +1012,52 @@ static struct fib_alias *fib_find_alias(struct hlist_head *fah, u8 slen, return NULL; } +static struct fib_alias * +fib_find_matching_alias(struct net *net, const struct fib_rt_info *fri) +{ + u8 slen = KEYLENGTH - fri->dst_len; + struct key_vector *l, *tp; + struct fib_table *tb; + struct fib_alias *fa; + struct trie *t; + + tb = fib_get_table(net, fri->tb_id); + if (!tb) + return NULL; + + t = (struct trie *)tb->tb_data; + l = fib_find_node(t, &tp, be32_to_cpu(fri->dst)); + if (!l) + return NULL; + + hlist_for_each_entry_rcu(fa, &l->leaf, fa_list) { + if (fa->fa_slen == slen && fa->tb_id == fri->tb_id && + fa->fa_tos == fri->tos && fa->fa_info == fri->fi && + fa->fa_type == fri->type) + return fa; + } + + return NULL; +} + +void fib_alias_hw_flags_set(struct net *net, const struct fib_rt_info *fri) +{ 
+ struct fib_alias *fa_match; + + rcu_read_lock(); + + fa_match = fib_find_matching_alias(net, fri); + if (!fa_match) + goto out; + + fa_match->offload = fri->offload; + fa_match->trap = fri->trap; + +out: + rcu_read_unlock(); +} +EXPORT_SYMBOL_GPL(fib_alias_hw_flags_set); + static void trie_rebalance(struct trie *t, struct key_vector *tn) { while (!IS_TRIE(tn)) @@ -1220,6 +1266,8 @@ int fib_table_insert(struct net *net, struct fib_table *tb, new_fa->fa_slen = fa->fa_slen; new_fa->tb_id = tb->tb_id; new_fa->fa_default = -1; + new_fa->offload = 0; + new_fa->trap = 0; hlist_replace_rcu(&fa->fa_list, &new_fa->fa_list); @@ -1278,6 +1326,8 @@ int fib_table_insert(struct net *net, struct fib_table *tb, new_fa->fa_slen = slen; new_fa->tb_id = tb->tb_id; new_fa->fa_default = -1; + new_fa->offload = 0; + new_fa->trap = 0; /* Insert new entry to the list. */ err = fib_insert_alias(t, tp, l, new_fa, fa, key); @@ -2202,6 +2252,8 @@ static int fn_trie_dump_leaf(struct key_vector *l, struct fib_table *tb, fri.dst_len = KEYLENGTH - fa->fa_slen; fri.tos = fa->fa_tos; fri.type = fa->fa_type; + fri.offload = fa->offload; + fri.trap = fa->trap; err = fib_dump_info(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, diff --git a/net/ipv4/route.c b/net/ipv4/route.c index 167a7357d12a..2010888e68ca 100644 --- a/net/ipv4/route.c +++ b/net/ipv4/route.c @@ -3237,6 +3237,25 @@ static int inet_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh, fri.dst_len = res.prefixlen; fri.tos = fl4.flowi4_tos; fri.type = rt->rt_type; + fri.offload = 0; + fri.trap = 0; + if (res.fa_head) { + struct fib_alias *fa; + + hlist_for_each_entry_rcu(fa, res.fa_head, fa_list) { + u8 slen = 32 - fri.dst_len; + + if (fa->fa_slen == slen && + fa->tb_id == fri.tb_id && + fa->fa_tos == fri.tos && + fa->fa_info == res.fi && + fa->fa_type == fri.type) { + fri.offload = fa->offload; + fri.trap = fa->trap; + break; + } + } + } err = fib_dump_info(skb, NETLINK_CB(in_skb).portid, nlh->nlmsg_seq, RTM_NEWROUTE, &fri, 0); } else { -- cgit v1.2.3-71-gd317 From bb3c4ab93e44784c1e958bdbba7824bba40f23cd Mon Sep 17 00:00:00 2001 From: Ido Schimmel Date: Tue, 14 Jan 2020 13:23:12 +0200 Subject: ipv6: Add "offload" and "trap" indications to routes In a similar fashion to the previous patch, add "offload" and "trap" indications to IPv6 routes. This is done by using two unused bits in 'struct fib6_info' to hold these indications. Capable drivers are expected to set these when processing the various in-kernel route notifications. Signed-off-by: Ido Schimmel Reviewed-by: Jiri Pirko Reviewed-by: David Ahern Acked-by: Roopa Prabhu Signed-off-by: David S. Miller
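[Editorial sketch: neither patch includes a driver-side caller, so the following illustrates how a capable offload driver might report the new indications after programming a route. The function names and the fib_entry_notifier_info plumbing are assumptions, and error handling is omitted.]

#include <net/ip_fib.h>
#include <net/ip6_fib.h>

/* IPv4: rebuild the route's key in a fib_rt_info and push the flags. */
static void example_report_fib4(struct net *net,
				const struct fib_entry_notifier_info *fen,
				bool trapping)
{
	struct fib_rt_info fri = {
		.fi = fen->fi,
		.tb_id = fen->tb_id,
		.dst = cpu_to_be32(fen->dst),
		.dst_len = fen->dst_len,
		.tos = fen->tos,
		.type = fen->type,
		.offload = !trapping,	/* simplification: one or the other */
		.trap = trapping,
	};

	fib_alias_hw_flags_set(net, &fri);
}

/* IPv6: the flags live directly on the fib6_info. */
static void example_report_fib6(struct fib6_info *f6i, bool trapping)
{
	fib6_info_hw_flags_set(f6i, !trapping, trapping);
}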
--- include/net/ip6_fib.h | 11 ++++++++++- net/ipv6/route.c | 7 +++++++ 2 files changed, 17 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h index b579faea41e9..fd60a8ac02ee 100644 --- a/include/net/ip6_fib.h +++ b/include/net/ip6_fib.h @@ -192,7 +192,9 @@ struct fib6_info { dst_nopolicy:1, dst_host:1, fib6_destroying:1, - unused:3; + offload:1, + trap:1, + unused:1; struct rcu_head rcu; struct nexthop *nh; @@ -329,6 +331,13 @@ static inline void fib6_info_release(struct fib6_info *f6i) call_rcu(&f6i->rcu, fib6_info_destroy_rcu); } +static inline void fib6_info_hw_flags_set(struct fib6_info *f6i, bool offload, + bool trap) +{ + f6i->offload = offload; + f6i->trap = trap; +} + enum fib6_walk_state { #ifdef CONFIG_IPV6_SUBTREES FWS_S, diff --git a/net/ipv6/route.c b/net/ipv6/route.c index 0253b702afb7..4fbdc60b4e07 100644 --- a/net/ipv6/route.c +++ b/net/ipv6/route.c @@ -5576,6 +5576,13 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb, expires -= jiffies; } + if (!dst) { + if (rt->offload) + rtm->rtm_flags |= RTM_F_OFFLOAD; + if (rt->trap) + rtm->rtm_flags |= RTM_F_TRAP; + } + if (rtnl_put_cacheinfo(skb, dst, 0, expires, dst ? dst->error : 0) < 0) goto nla_put_failure; -- cgit v1.2.3-71-gd317 From 8dcea187088bce5d2f1149294ad109f022653547 Mon Sep 17 00:00:00 2001 From: Nikolay Aleksandrov Date: Tue, 14 Jan 2020 19:56:09 +0200 Subject: net: bridge: vlan: add rtm definitions and dump support This patch adds vlan rtm definitions: - NEWVLAN: to be used for creating vlans, setting options and notifications - DELVLAN: to be used for deleting vlans - GETVLAN: used for dumping vlan information Dumping vlans is added now with basic information (vid and flags); a single dump can span multiple messages. We use nlmsg_parse() to validate the header length in order to be able to extend the message with filtering attributes later. Signed-off-by: Nikolay Aleksandrov Signed-off-by: David S. Miller --- include/uapi/linux/if_bridge.h | 28 ++++++++ include/uapi/linux/rtnetlink.h | 7 ++ net/bridge/br_netlink.c | 2 + net/bridge/br_private.h | 14 ++++ net/bridge/br_vlan.c | 148 +++++++++++++++++++++++++++++++++++++++++ security/selinux/nlmsgtab.c | 5 +- 6 files changed, 203 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index 4a58e3d7de46..4da04f77d9ee 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -165,6 +165,34 @@ struct bridge_stp_xstats { __u64 tx_tcn; }; +/* Bridge vlan RTM header */ +struct br_vlan_msg { + __u8 family; + __u8 reserved1; + __u16 reserved2; + __u32 ifindex; +}; + +/* Bridge vlan RTM attributes + * [BRIDGE_VLANDB_ENTRY] = { + * [BRIDGE_VLANDB_ENTRY_INFO] + * ...
+ * } + */ +enum { + BRIDGE_VLANDB_UNSPEC, + BRIDGE_VLANDB_ENTRY, + __BRIDGE_VLANDB_MAX, +}; +#define BRIDGE_VLANDB_MAX (__BRIDGE_VLANDB_MAX - 1) + +enum { + BRIDGE_VLANDB_ENTRY_UNSPEC, + BRIDGE_VLANDB_ENTRY_INFO, + __BRIDGE_VLANDB_ENTRY_MAX, +}; +#define BRIDGE_VLANDB_ENTRY_MAX (__BRIDGE_VLANDB_ENTRY_MAX - 1) + /* Bridge multicast database attributes * [MDBA_MDB] = { * [MDBA_MDB_ENTRY] = { diff --git a/include/uapi/linux/rtnetlink.h b/include/uapi/linux/rtnetlink.h index cd43321d20dd..b333cff14071 100644 --- a/include/uapi/linux/rtnetlink.h +++ b/include/uapi/linux/rtnetlink.h @@ -171,6 +171,13 @@ enum { RTM_GETLINKPROP, #define RTM_GETLINKPROP RTM_GETLINKPROP + RTM_NEWVLAN = 112, +#define RTM_NEWNVLAN RTM_NEWVLAN + RTM_DELVLAN, +#define RTM_DELVLAN RTM_DELVLAN + RTM_GETVLAN, +#define RTM_GETVLAN RTM_GETVLAN + __RTM_MAX, #define RTM_MAX (((__RTM_MAX + 3) & ~3) - 1) }; diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c index 40942cece51a..75a7ecf95d7f 100644 --- a/net/bridge/br_netlink.c +++ b/net/bridge/br_netlink.c @@ -1657,6 +1657,7 @@ int __init br_netlink_init(void) int err; br_mdb_init(); + br_vlan_rtnl_init(); rtnl_af_register(&br_af_ops); err = rtnl_link_register(&br_link_ops); @@ -1674,6 +1675,7 @@ out_af: void br_netlink_fini(void) { br_mdb_uninit(); + br_vlan_rtnl_uninit(); rtnl_af_unregister(&br_af_ops); rtnl_link_unregister(&br_link_ops); } diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index a7dddc5d7790..1c00411ae938 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -958,6 +958,8 @@ void br_vlan_get_stats(const struct net_bridge_vlan *v, void br_vlan_port_event(struct net_bridge_port *p, unsigned long event); int br_vlan_bridge_event(struct net_device *dev, unsigned long event, void *ptr); +void br_vlan_rtnl_init(void); +void br_vlan_rtnl_uninit(void); static inline struct net_bridge_vlan_group *br_vlan_group( const struct net_bridge *br) @@ -1009,6 +1011,10 @@ static inline u16 br_get_pvid(const struct net_bridge_vlan_group *vg) return vg->pvid; } +static inline u16 br_vlan_flags(const struct net_bridge_vlan *v, u16 pvid) +{ + return v->vid == pvid ? 
v->flags | BRIDGE_VLAN_INFO_PVID : v->flags; +} #else static inline bool br_allowed_ingress(const struct net_bridge *br, struct net_bridge_vlan_group *vg, @@ -1152,6 +1158,14 @@ static inline int br_vlan_bridge_event(struct net_device *dev, { return 0; } + +static inline void br_vlan_rtnl_init(void) +{ +} + +static inline void br_vlan_rtnl_uninit(void) +{ +} #endif struct nf_br_ops { diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c index bb98984cd27d..5f2ac4f244f5 100644 --- a/net/bridge/br_vlan.c +++ b/net/bridge/br_vlan.c @@ -1505,3 +1505,151 @@ void br_vlan_port_event(struct net_bridge_port *p, unsigned long event) break; } } + +static bool br_vlan_fill_vids(struct sk_buff *skb, u16 vid, u16 flags) +{ + struct bridge_vlan_info info; + struct nlattr *nest; + + nest = nla_nest_start(skb, BRIDGE_VLANDB_ENTRY); + if (!nest) + return false; + + memset(&info, 0, sizeof(info)); + info.vid = vid; + if (flags & BRIDGE_VLAN_INFO_UNTAGGED) + info.flags |= BRIDGE_VLAN_INFO_UNTAGGED; + if (flags & BRIDGE_VLAN_INFO_PVID) + info.flags |= BRIDGE_VLAN_INFO_PVID; + + if (nla_put(skb, BRIDGE_VLANDB_ENTRY_INFO, sizeof(info), &info)) + goto out_err; + + nla_nest_end(skb, nest); + + return true; + +out_err: + nla_nest_cancel(skb, nest); + return false; +} + +static int br_vlan_dump_dev(const struct net_device *dev, + struct sk_buff *skb, + struct netlink_callback *cb) +{ + struct net_bridge_vlan_group *vg; + int idx = 0, s_idx = cb->args[1]; + struct nlmsghdr *nlh = NULL; + struct net_bridge_vlan *v; + struct net_bridge_port *p; + struct br_vlan_msg *bvm; + struct net_bridge *br; + int err = 0; + u16 pvid; + + if (!netif_is_bridge_master(dev) && !netif_is_bridge_port(dev)) + return -EINVAL; + + if (netif_is_bridge_master(dev)) { + br = netdev_priv(dev); + vg = br_vlan_group_rcu(br); + p = NULL; + } else { + p = br_port_get_rcu(dev); + if (WARN_ON(!p)) + return -EINVAL; + vg = nbp_vlan_group_rcu(p); + br = p->br; + } + + if (!vg) + return 0; + + nlh = nlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, + RTM_NEWVLAN, sizeof(*bvm), NLM_F_MULTI); + if (!nlh) + return -EMSGSIZE; + bvm = nlmsg_data(nlh); + memset(bvm, 0, sizeof(*bvm)); + bvm->family = PF_BRIDGE; + bvm->ifindex = dev->ifindex; + pvid = br_get_pvid(vg); + + list_for_each_entry_rcu(v, &vg->vlan_list, vlist) { + if (!br_vlan_should_use(v)) + continue; + if (idx < s_idx) + goto skip; + if (!br_vlan_fill_vids(skb, v->vid, br_vlan_flags(v, pvid))) { + err = -EMSGSIZE; + break; + } +skip: + idx++; + } + if (err) + cb->args[1] = idx; + else + cb->args[1] = 0; + nlmsg_end(skb, nlh); + + return err; +} + +static int br_vlan_rtm_dump(struct sk_buff *skb, struct netlink_callback *cb) +{ + int idx = 0, err = 0, s_idx = cb->args[0]; + struct net *net = sock_net(skb->sk); + struct br_vlan_msg *bvm; + struct net_device *dev; + + err = nlmsg_parse(cb->nlh, sizeof(*bvm), NULL, 0, NULL, cb->extack); + if (err < 0) + return err; + + bvm = nlmsg_data(cb->nlh); + + rcu_read_lock(); + if (bvm->ifindex) { + dev = dev_get_by_index_rcu(net, bvm->ifindex); + if (!dev) { + err = -ENODEV; + goto out_err; + } + err = br_vlan_dump_dev(dev, skb, cb); + if (err && err != -EMSGSIZE) + goto out_err; + } else { + for_each_netdev_rcu(net, dev) { + if (idx < s_idx) + goto skip; + + err = br_vlan_dump_dev(dev, skb, cb); + if (err == -EMSGSIZE) + break; +skip: + idx++; + } + } + cb->args[0] = idx; + rcu_read_unlock(); + + return skb->len; + +out_err: + rcu_read_unlock(); + + return err; +} + +void br_vlan_rtnl_init(void) +{ + rtnl_register_module(THIS_MODULE, 
PF_BRIDGE, RTM_GETVLAN, NULL, + br_vlan_rtm_dump, 0); +} + +void br_vlan_rtnl_uninit(void) +{ + rtnl_unregister(PF_BRIDGE, RTM_GETVLAN); +} diff --git a/security/selinux/nlmsgtab.c b/security/selinux/nlmsgtab.c index c97fdae8f71b..b69231918686 100644 --- a/security/selinux/nlmsgtab.c +++ b/security/selinux/nlmsgtab.c @@ -85,6 +85,9 @@ static const struct nlmsg_perm nlmsg_route_perms[] = { RTM_GETNEXTHOP, NETLINK_ROUTE_SOCKET__NLMSG_READ }, { RTM_NEWLINKPROP, NETLINK_ROUTE_SOCKET__NLMSG_WRITE }, { RTM_DELLINKPROP, NETLINK_ROUTE_SOCKET__NLMSG_WRITE }, + { RTM_NEWVLAN, NETLINK_ROUTE_SOCKET__NLMSG_WRITE }, + { RTM_DELVLAN, NETLINK_ROUTE_SOCKET__NLMSG_WRITE }, + { RTM_GETVLAN, NETLINK_ROUTE_SOCKET__NLMSG_READ }, }; static const struct nlmsg_perm nlmsg_tcpdiag_perms[] = @@ -168,7 +171,7 @@ int selinux_nlmsg_lookup(u16 sclass, u16 nlmsg_type, u32 *perm) * structures at the top of this file with the new mappings * before updating the BUILD_BUG_ON() macro! */ - BUILD_BUG_ON(RTM_MAX != (RTM_NEWLINKPROP + 3)); + BUILD_BUG_ON(RTM_MAX != (RTM_NEWVLAN + 3)); err = nlmsg_perm(nlmsg_type, perm, nlmsg_route_perms, sizeof(nlmsg_route_perms)); break; -- cgit v1.2.3-71-gd317 From 0ab558795184f87b532363e262357160d25f5d64 Mon Sep 17 00:00:00 2001 From: Nikolay Aleksandrov Date: Tue, 14 Jan 2020 19:56:12 +0200 Subject: net: bridge: vlan: add rtm range support Add a new vlandb nl attribute - BRIDGE_VLANDB_ENTRY_RANGE - which causes RTM_NEWVLAN/DELVLAN to act on a range. Dumps now automatically compress similar vlans into ranges. This will also be used when per-vlan options are introduced: when vlans' options match, they will be put into a single range which is encapsulated in one netlink attribute. We need to run checks similar to those br_process_vlan_info() does because these ranges will be used for options setting and they'll be able to skip br_process_vlan_info(). Signed-off-by: Nikolay Aleksandrov Signed-off-by: David S. Miller
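[Editorial sketch: not part of the patch. A userspace dump listener might decode one BRIDGE_VLANDB_ENTRY roughly as follows, assuming libmnl; a compressed range carries the first vid in BRIDGE_VLANDB_ENTRY_INFO and the last vid in BRIDGE_VLANDB_ENTRY_RANGE.]

#include <stdio.h>
#include <stdint.h>
#include <libmnl/libmnl.h>
#include <linux/if_bridge.h>

/* Decode a single nested BRIDGE_VLANDB_ENTRY attribute. */
static void example_print_vlan_entry(const struct nlattr *entry)
{
	const struct bridge_vlan_info *vinfo = NULL;
	uint16_t vid_end = 0;
	struct nlattr *attr;

	mnl_attr_for_each_nested(attr, entry) {
		switch (mnl_attr_get_type(attr)) {
		case BRIDGE_VLANDB_ENTRY_INFO:
			vinfo = mnl_attr_get_payload(attr);
			break;
		case BRIDGE_VLANDB_ENTRY_RANGE:
			vid_end = mnl_attr_get_u16(attr);
			break;
		}
	}

	if (!vinfo)
		return;
	if (vid_end)
		printf("vlan %u-%u flags 0x%x\n", vinfo->vid, vid_end, vinfo->flags);
	else
		printf("vlan %u flags 0x%x\n", vinfo->vid, vinfo->flags);
}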
--- include/uapi/linux/if_bridge.h | 1 + net/bridge/br_vlan.c | 86 +++++++++++++++++++++++++++++++++++------- 2 files changed, 73 insertions(+), 14 deletions(-) (limited to 'include') diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index 4da04f77d9ee..ac38f0b674b8 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -189,6 +189,7 @@ enum { enum { BRIDGE_VLANDB_ENTRY_UNSPEC, BRIDGE_VLANDB_ENTRY_INFO, + BRIDGE_VLANDB_ENTRY_RANGE, __BRIDGE_VLANDB_ENTRY_MAX, }; #define BRIDGE_VLANDB_ENTRY_MAX (__BRIDGE_VLANDB_ENTRY_MAX - 1) diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c index 89d5fa75c575..9d64a86f2cbd 100644 --- a/net/bridge/br_vlan.c +++ b/net/bridge/br_vlan.c @@ -1506,7 +1506,8 @@ void br_vlan_port_event(struct net_bridge_port *p, unsigned long event) } } -static bool br_vlan_fill_vids(struct sk_buff *skb, u16 vid, u16 flags) +static bool br_vlan_fill_vids(struct sk_buff *skb, u16 vid, u16 vid_range, + u16 flags) { struct bridge_vlan_info info; struct nlattr *nest; @@ -1525,6 +1526,11 @@ static bool br_vlan_fill_vids(struct sk_buff *skb, u16 vid, u16 flags) if (nla_put(skb, BRIDGE_VLANDB_ENTRY_INFO, sizeof(info), &info)) goto out_err; + if (vid_range && vid < vid_range && + !(flags & BRIDGE_VLAN_INFO_PVID) && + nla_put_u16(skb, BRIDGE_VLANDB_ENTRY_RANGE, vid_range)) + goto out_err; + nla_nest_end(skb, nest); return true; @@ -1534,14 +1540,22 @@ out_err: return false; } +/* check if v_curr can enter a range ending in range_end */ +static bool br_vlan_can_enter_range(const struct net_bridge_vlan *v_curr, + const struct net_bridge_vlan *range_end) +{ + return v_curr->vid - range_end->vid == 1 && + range_end->flags == v_curr->flags; +} + static int br_vlan_dump_dev(const struct net_device *dev, struct sk_buff *skb, struct netlink_callback *cb) { + struct net_bridge_vlan *v, *range_start = NULL, *range_end = NULL; struct net_bridge_vlan_group *vg; int idx = 0, s_idx = cb->args[1]; struct nlmsghdr *nlh = NULL; - struct net_bridge_vlan *v; struct net_bridge_port *p; struct br_vlan_msg *bvm; struct net_bridge *br; @@ -1576,22 +1590,49 @@ static int br_vlan_dump_dev(const struct net_device *dev, bvm->ifindex = dev->ifindex; pvid = br_get_pvid(vg); + /* idx must stay at range's beginning until it is filled in */ list_for_each_entry_rcu(v, &vg->vlan_list, vlist) { if (!br_vlan_should_use(v)) continue; - if (idx < s_idx) - goto skip; - if (!br_vlan_fill_vids(skb, v->vid, br_vlan_flags(v, pvid))) { - err = -EMSGSIZE; - break; + if (idx < s_idx) { + idx++; + continue; } -skip: - idx++; + + if (!range_start) { + range_start = v; + range_end = v; + continue; + } + + if (v->vid == pvid || !br_vlan_can_enter_range(v, range_end)) { + u16 flags = br_vlan_flags(range_start, pvid); + + if (!br_vlan_fill_vids(skb, range_start->vid, + range_end->vid, flags)) { + err = -EMSGSIZE; + break; + } + /* advance number of filled vlans */ + idx += range_end->vid - range_start->vid + 1; + + range_start = v; + } + range_end = v; } - if (err) - cb->args[1] = idx; - else - cb->args[1] = 0; + + /* err will be 0 and range_start will be set in 3 cases here: + * - first vlan (range_start == range_end) + * - last vlan (range_start == range_end, not in range) + * - last vlan range (range_start != range_end, in range) + */ + if (!err && range_start && + !br_vlan_fill_vids(skb, range_start->vid, range_end->vid, + br_vlan_flags(range_start, pvid))) + err = -EMSGSIZE; + + cb->args[1] = err ?
idx : 0; + nlmsg_end(skb, nlh); return err; @@ -1646,13 +1687,14 @@ out_err: static const struct nla_policy br_vlan_db_policy[BRIDGE_VLANDB_ENTRY_MAX + 1] = { [BRIDGE_VLANDB_ENTRY_INFO] = { .type = NLA_EXACT_LEN, .len = sizeof(struct bridge_vlan_info) }, + [BRIDGE_VLANDB_ENTRY_RANGE] = { .type = NLA_U16 }, }; static int br_vlan_rtm_process_one(struct net_device *dev, const struct nlattr *attr, int cmd, struct netlink_ext_ack *extack) { - struct bridge_vlan_info *vinfo, *vinfo_last = NULL; + struct bridge_vlan_info *vinfo, vrange_end, *vinfo_last = NULL; struct nlattr *tb[BRIDGE_VLANDB_ENTRY_MAX + 1]; struct net_bridge_vlan_group *vg; struct net_bridge_port *p = NULL; @@ -1683,6 +1725,7 @@ static int br_vlan_rtm_process_one(struct net_device *dev, NL_SET_ERR_MSG_MOD(extack, "Missing vlan entry info"); return -EINVAL; } + memset(&vrange_end, 0, sizeof(vrange_end)); vinfo = nla_data(tb[BRIDGE_VLANDB_ENTRY_INFO]); if (vinfo->flags & (BRIDGE_VLAN_INFO_RANGE_BEGIN | @@ -1693,6 +1736,21 @@ static int br_vlan_rtm_process_one(struct net_device *dev, if (!br_vlan_valid_id(vinfo->vid, extack)) return -EINVAL; + if (tb[BRIDGE_VLANDB_ENTRY_RANGE]) { + vrange_end.vid = nla_get_u16(tb[BRIDGE_VLANDB_ENTRY_RANGE]); + /* validate user-provided flags without RANGE_BEGIN */ + vrange_end.flags = BRIDGE_VLAN_INFO_RANGE_END | vinfo->flags; + vinfo->flags |= BRIDGE_VLAN_INFO_RANGE_BEGIN; + + /* vinfo_last is the range start, vinfo the range end */ + vinfo_last = vinfo; + vinfo = &vrange_end; + + if (!br_vlan_valid_id(vinfo->vid, extack) || + !br_vlan_valid_range(vinfo, vinfo_last, extack)) + return -EINVAL; + } + switch (cmd) { case RTM_NEWVLAN: cmdmap = RTM_SETLINK; -- cgit v1.2.3-71-gd317 From cf5bddb95cbe5340e486e84d46cf2f0bb324078c Mon Sep 17 00:00:00 2001 From: Nikolay Aleksandrov Date: Tue, 14 Jan 2020 19:56:13 +0200 Subject: net: bridge: vlan: add rtnetlink group and notify support Add a new rtnetlink group for bridge vlan notifications - RTNLGRP_BRVLAN and add support for sending vlan notifications (both single and ranges). No functional changes intended, the notification support will be used by later patches. Signed-off-by: Nikolay Aleksandrov Signed-off-by: David S. 
Miller --- include/uapi/linux/rtnetlink.h | 2 ++ net/bridge/br_private.h | 11 ++++++ net/bridge/br_vlan.c | 79 ++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 92 insertions(+) (limited to 'include') diff --git a/include/uapi/linux/rtnetlink.h b/include/uapi/linux/rtnetlink.h index b333cff14071..4a8c5b745157 100644 --- a/include/uapi/linux/rtnetlink.h +++ b/include/uapi/linux/rtnetlink.h @@ -730,6 +730,8 @@ enum rtnetlink_groups { #define RTNLGRP_IPV6_MROUTE_R RTNLGRP_IPV6_MROUTE_R RTNLGRP_NEXTHOP, #define RTNLGRP_NEXTHOP RTNLGRP_NEXTHOP + RTNLGRP_BRVLAN, +#define RTNLGRP_BRVLAN RTNLGRP_BRVLAN __RTNLGRP_MAX }; #define RTNLGRP_MAX (__RTNLGRP_MAX - 1) diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index ee3871dea68f..ba162c8197da 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -960,6 +960,10 @@ int br_vlan_bridge_event(struct net_device *dev, unsigned long event, void *ptr); void br_vlan_rtnl_init(void); void br_vlan_rtnl_uninit(void); +void br_vlan_notify(const struct net_bridge *br, + const struct net_bridge_port *p, + u16 vid, u16 vid_range, + int cmd); static inline struct net_bridge_vlan_group *br_vlan_group( const struct net_bridge *br) @@ -1166,6 +1170,13 @@ static inline void br_vlan_rtnl_init(void) static inline void br_vlan_rtnl_uninit(void) { } + +static inline void br_vlan_notify(const struct net_bridge *br, + const struct net_bridge_port *p, + u16 vid, u16 vid_range, + int cmd) +{ +} #endif struct nf_br_ops { diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c index 9d64a86f2cbd..5d52a2604547 100644 --- a/net/bridge/br_vlan.c +++ b/net/bridge/br_vlan.c @@ -1540,6 +1540,85 @@ out_err: return false; } +static size_t rtnl_vlan_nlmsg_size(void) +{ + return NLMSG_ALIGN(sizeof(struct br_vlan_msg)) + + nla_total_size(0) /* BRIDGE_VLANDB_ENTRY */ + + nla_total_size(sizeof(u16)) /* BRIDGE_VLANDB_ENTRY_RANGE */ + + nla_total_size(sizeof(struct bridge_vlan_info)); /* BRIDGE_VLANDB_ENTRY_INFO */ +} + +void br_vlan_notify(const struct net_bridge *br, + const struct net_bridge_port *p, + u16 vid, u16 vid_range, + int cmd) +{ + struct net_bridge_vlan_group *vg; + struct net_bridge_vlan *v; + struct br_vlan_msg *bvm; + struct nlmsghdr *nlh; + struct sk_buff *skb; + int err = -ENOBUFS; + struct net *net; + u16 flags = 0; + int ifindex; + + /* right now notifications are done only with rtnl held */ + ASSERT_RTNL(); + + if (p) { + ifindex = p->dev->ifindex; + vg = nbp_vlan_group(p); + net = dev_net(p->dev); + } else { + ifindex = br->dev->ifindex; + vg = br_vlan_group(br); + net = dev_net(br->dev); + } + + skb = nlmsg_new(rtnl_vlan_nlmsg_size(), GFP_KERNEL); + if (!skb) + goto out_err; + + err = -EMSGSIZE; + nlh = nlmsg_put(skb, 0, 0, cmd, sizeof(*bvm), 0); + if (!nlh) + goto out_err; + bvm = nlmsg_data(nlh); + memset(bvm, 0, sizeof(*bvm)); + bvm->family = AF_BRIDGE; + bvm->ifindex = ifindex; + + switch (cmd) { + case RTM_NEWVLAN: + /* need to find the vlan due to flags/options */ + v = br_vlan_find(vg, vid); + if (!v || !br_vlan_should_use(v)) + goto out_kfree; + + flags = v->flags; + if (br_get_pvid(vg) == v->vid) + flags |= BRIDGE_VLAN_INFO_PVID; + break; + case RTM_DELVLAN: + break; + default: + goto out_kfree; + } + + if (!br_vlan_fill_vids(skb, vid, vid_range, flags)) + goto out_err; + + nlmsg_end(skb, nlh); + rtnl_notify(skb, net, 0, RTNLGRP_BRVLAN, NULL, GFP_KERNEL); + return; + +out_err: + rtnl_set_sk_err(net, RTNLGRP_BRVLAN, err); +out_kfree: + kfree_skb(skb); +} + /* check if v_curr can enter a range ending in range_end */ static 
bool br_vlan_can_enter_range(const struct net_bridge_vlan *v_curr, const struct net_bridge_vlan *range_end) -- cgit v1.2.3-71-gd317 From 8482941f09067da42f9c3362e15bfb3f3c19d610 Mon Sep 17 00:00:00 2001 From: Yonghong Song Date: Tue, 14 Jan 2020 19:50:02 -0800 Subject: bpf: Add bpf_send_signal_thread() helper Commit 8b401f9ed244 ("bpf: implement bpf_send_signal() helper") added the helper bpf_send_signal(), which permits a bpf program to send a signal to the current process. The signal may be delivered to any thread in the process. We found a use case where sending the signal to the current thread is preferable. - A bpf program will collect the stack trace and then send a signal to the user application. - The user application will add some thread-specific information to the just-collected stack trace for later analysis. If bpf_send_signal() is used, the user application will need to check whether the thread receiving the signal matches the thread collecting the stack by checking the thread id. If not, it will need to send the signal to another thread through pthread_kill(). This patch proposes a new helper, bpf_send_signal_thread(), which sends the signal to the thread corresponding to the current kernel task. This way, user space is guaranteed that the bpf program execution context and the user space signal handling context are the same thread. Signed-off-by: Yonghong Song Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/20200115035002.602336-1-yhs@fb.com --- include/uapi/linux/bpf.h | 19 +++++++++++++++++-- kernel/trace/bpf_trace.c | 27 ++++++++++++++++++++++++--- tools/include/uapi/linux/bpf.h | 19 +++++++++++++++++-- 3 files changed, 58 insertions(+), 7 deletions(-) (limited to 'include') diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 52966e758fe5..a88837d4dd18 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -2714,7 +2714,8 @@ union bpf_attr { * * int bpf_send_signal(u32 sig) * Description - * Send signal *sig* to the current task. + * Send signal *sig* to the process of the current task. + * The signal may be delivered to any of this process's threads. * Return * 0 on success or successfully queued. * @@ -2850,6 +2851,19 @@ union bpf_attr { * Return * 0 on success, or a negative error in case of failure. * + * int bpf_send_signal_thread(u32 sig) + * Description + * Send signal *sig* to the thread corresponding to the current task. + * Return + * 0 on success or successfully queued. + * + * **-EBUSY** if work queue under nmi is full. + * + * **-EINVAL** if *sig* is invalid. + * + * **-EPERM** if no permission to send the *sig*. + * + * **-EAGAIN** if bpf program can try again.
*/ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -2968,7 +2982,8 @@ union bpf_attr { FN(probe_read_kernel), \ FN(probe_read_user_str), \ FN(probe_read_kernel_str), \ - FN(tcp_send_ack), + FN(tcp_send_ack), \ + FN(send_signal_thread), /* integer value in 'imm' field of BPF_CALL instruction selects which helper * function eBPF program intends to call diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index e5ef4ae9edb5..19e793aa441a 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -703,6 +703,7 @@ struct send_signal_irq_work { struct irq_work irq_work; struct task_struct *task; u32 sig; + enum pid_type type; }; static DEFINE_PER_CPU(struct send_signal_irq_work, send_signal_work); @@ -712,10 +713,10 @@ static void do_bpf_send_signal(struct irq_work *entry) struct send_signal_irq_work *work; work = container_of(entry, struct send_signal_irq_work, irq_work); - group_send_sig_info(work->sig, SEND_SIG_PRIV, work->task, PIDTYPE_TGID); + group_send_sig_info(work->sig, SEND_SIG_PRIV, work->task, work->type); } -BPF_CALL_1(bpf_send_signal, u32, sig) +static int bpf_send_signal_common(u32 sig, enum pid_type type) { struct send_signal_irq_work *work = NULL; @@ -748,11 +749,17 @@ BPF_CALL_1(bpf_send_signal, u32, sig) */ work->task = current; work->sig = sig; + work->type = type; irq_work_queue(&work->irq_work); return 0; } - return group_send_sig_info(sig, SEND_SIG_PRIV, current, PIDTYPE_TGID); + return group_send_sig_info(sig, SEND_SIG_PRIV, current, type); +} + +BPF_CALL_1(bpf_send_signal, u32, sig) +{ + return bpf_send_signal_common(sig, PIDTYPE_TGID); } static const struct bpf_func_proto bpf_send_signal_proto = { @@ -762,6 +769,18 @@ static const struct bpf_func_proto bpf_send_signal_proto = { .arg1_type = ARG_ANYTHING, }; +BPF_CALL_1(bpf_send_signal_thread, u32, sig) +{ + return bpf_send_signal_common(sig, PIDTYPE_PID); +} + +static const struct bpf_func_proto bpf_send_signal_thread_proto = { + .func = bpf_send_signal_thread, + .gpl_only = false, + .ret_type = RET_INTEGER, + .arg1_type = ARG_ANYTHING, +}; + static const struct bpf_func_proto * tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) { @@ -822,6 +841,8 @@ tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) #endif case BPF_FUNC_send_signal: return &bpf_send_signal_proto; + case BPF_FUNC_send_signal_thread: + return &bpf_send_signal_thread_proto; default: return NULL; } diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 52966e758fe5..a88837d4dd18 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -2714,7 +2714,8 @@ union bpf_attr { * * int bpf_send_signal(u32 sig) * Description - * Send signal *sig* to the current task. + * Send signal *sig* to the process of the current task. + * The signal may be delivered to any of this process's threads. * Return * 0 on success or successfully queued. * @@ -2850,6 +2851,19 @@ union bpf_attr { * Return * 0 on success, or a negative error in case of failure. * + * int bpf_send_signal_thread(u32 sig) + * Description + * Send signal *sig* to the thread corresponding to the current task. + * Return + * 0 on success or successfully queued. + * + * **-EBUSY** if work queue under nmi is full. + * + * **-EINVAL** if *sig* is invalid. + * + * **-EPERM** if no permission to send the *sig*. + * + * **-EAGAIN** if bpf program can try again. 
*/ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -2968,7 +2982,8 @@ union bpf_attr { FN(probe_read_kernel), \ FN(probe_read_user_str), \ FN(probe_read_kernel_str), \ - FN(tcp_send_ack), + FN(tcp_send_ack), \ + FN(send_signal_thread), /* integer value in 'imm' field of BPF_CALL instruction selects which helper * function eBPF program intends to call -- cgit v1.2.3-71-gd317 From 600a87490ff9823d065fc15e86c709e707033ecc Mon Sep 17 00:00:00 2001 From: Alain Michaud Date: Tue, 7 Jan 2020 00:43:17 +0000 Subject: Bluetooth: Implementation of MGMT_OP_SET_BLOCKED_KEYS. An MGMT command is added to receive the list of blocked keys from user-space. The list is used to: 1) Block keys from being distributed by the device during the key distribution phase of SMP. 2) Filter out any keys that were previously saved so they are no longer used. Signed-off-by: Alain Michaud Signed-off-by: Marcel Holtmann --- include/net/bluetooth/hci_core.h | 10 +++++ include/net/bluetooth/mgmt.h | 17 ++++++++ net/bluetooth/hci_core.c | 85 ++++++++++++++++++++++++++++++++++++---- net/bluetooth/hci_debugfs.c | 17 ++++++++ net/bluetooth/mgmt.c | 76 +++++++++++++++++++++++++++++++++++ net/bluetooth/smp.c | 18 +++++++++ 6 files changed, 215 insertions(+), 8 deletions(-) (limited to 'include') diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h index faebe3859931..89ecf0a80aa1 100644 --- a/include/net/bluetooth/hci_core.h +++ b/include/net/bluetooth/hci_core.h @@ -118,6 +118,13 @@ struct bt_uuid { u8 svc_hint; }; +struct blocked_key { + struct list_head list; + struct rcu_head rcu; + u8 type; + u8 val[16]; +}; + struct smp_csrk { bdaddr_t bdaddr; u8 bdaddr_type; @@ -397,6 +404,7 @@ struct hci_dev { struct list_head le_conn_params; struct list_head pend_le_conns; struct list_head pend_le_reports; + struct list_head blocked_keys; struct hci_dev_stats stat; @@ -1123,6 +1131,8 @@ struct smp_irk *hci_find_irk_by_addr(struct hci_dev *hdev, bdaddr_t *bdaddr, struct smp_irk *hci_add_irk(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 addr_type, u8 val[16], bdaddr_t *rpa); void hci_remove_irk(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 addr_type); +bool hci_is_blocked_key(struct hci_dev *hdev, u8 type, u8 val[16]); +void hci_blocked_keys_clear(struct hci_dev *hdev); void hci_smp_irks_clear(struct hci_dev *hdev); bool hci_bdaddr_is_paired(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 type); diff --git a/include/net/bluetooth/mgmt.h b/include/net/bluetooth/mgmt.h index 9cee7ddc6741..a90666af05bd 100644 --- a/include/net/bluetooth/mgmt.h +++ b/include/net/bluetooth/mgmt.h @@ -654,6 +654,23 @@ struct mgmt_cp_set_phy_confguration { } __packed; #define MGMT_SET_PHY_CONFIGURATION_SIZE 4 +#define MGMT_OP_SET_BLOCKED_KEYS 0x0046 + +#define HCI_BLOCKED_KEY_TYPE_LINKKEY 0x00 +#define HCI_BLOCKED_KEY_TYPE_LTK 0x01 +#define HCI_BLOCKED_KEY_TYPE_IRK 0x02 + +struct mgmt_blocked_key_info { + __u8 type; + __u8 val[16]; +} __packed; + +struct mgmt_cp_set_blocked_keys { + __le16 key_count; + struct mgmt_blocked_key_info keys[0]; +} __packed; +#define MGMT_OP_SET_BLOCKED_KEYS_SIZE 2 + #define MGMT_EV_CMD_COMPLETE 0x0001 struct mgmt_ev_cmd_complete { __le16 opcode; diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c index 9e19d5a3aac8..f0298db26dc3 100644 --- a/net/bluetooth/hci_core.c +++ b/net/bluetooth/hci_core.c @@ -2311,6 +2311,33 @@ void hci_smp_irks_clear(struct hci_dev *hdev) } } +void hci_blocked_keys_clear(struct hci_dev *hdev) +{ + struct blocked_key *b; + + list_for_each_entry_rcu(b, &hdev->blocked_keys, list) {
list_del_rcu(&b->list); + kfree_rcu(b, rcu); + } +} + +bool hci_is_blocked_key(struct hci_dev *hdev, u8 type, u8 val[16]) +{ + bool blocked = false; + struct blocked_key *b; + + rcu_read_lock(); + list_for_each_entry(b, &hdev->blocked_keys, list) { + if (b->type == type && !memcmp(b->val, val, sizeof(b->val))) { + blocked = true; + break; + } + } + + rcu_read_unlock(); + return blocked; +} + struct link_key *hci_find_link_key(struct hci_dev *hdev, bdaddr_t *bdaddr) { struct link_key *k; @@ -2319,6 +2346,16 @@ struct link_key *hci_find_link_key(struct hci_dev *hdev, bdaddr_t *bdaddr) list_for_each_entry_rcu(k, &hdev->link_keys, list) { if (bacmp(bdaddr, &k->bdaddr) == 0) { rcu_read_unlock(); + + if (hci_is_blocked_key(hdev, + HCI_BLOCKED_KEY_TYPE_LINKKEY, + k->val)) { + bt_dev_warn_ratelimited(hdev, + "Link key blocked for %pMR", + &k->bdaddr); + return NULL; + } + return k; } } @@ -2387,6 +2424,15 @@ struct smp_ltk *hci_find_ltk(struct hci_dev *hdev, bdaddr_t *bdaddr, if (smp_ltk_is_sc(k) || ltk_role(k->type) == role) { rcu_read_unlock(); + + if (hci_is_blocked_key(hdev, HCI_BLOCKED_KEY_TYPE_LTK, + k->val)) { + bt_dev_warn_ratelimited(hdev, + "LTK blocked for %pMR", + &k->bdaddr); + return NULL; + } + return k; } } @@ -2397,31 +2443,42 @@ struct smp_ltk *hci_find_ltk(struct hci_dev *hdev, bdaddr_t *bdaddr, struct smp_irk *hci_find_irk_by_rpa(struct hci_dev *hdev, bdaddr_t *rpa) { + struct smp_irk *irk_to_return = NULL; struct smp_irk *irk; rcu_read_lock(); list_for_each_entry_rcu(irk, &hdev->identity_resolving_keys, list) { if (!bacmp(&irk->rpa, rpa)) { - rcu_read_unlock(); - return irk; + irk_to_return = irk; + goto done; } } list_for_each_entry_rcu(irk, &hdev->identity_resolving_keys, list) { if (smp_irk_matches(hdev, irk->val, rpa)) { bacpy(&irk->rpa, rpa); - rcu_read_unlock(); - return irk; + irk_to_return = irk; + goto done; } } + +done: + if (irk_to_return && hci_is_blocked_key(hdev, HCI_BLOCKED_KEY_TYPE_IRK, + irk_to_return->val)) { + bt_dev_warn_ratelimited(hdev, "Identity key blocked for %pMR", + &irk_to_return->bdaddr); + irk_to_return = NULL; + } + rcu_read_unlock(); - return NULL; + return irk_to_return; } struct smp_irk *hci_find_irk_by_addr(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 addr_type) { + struct smp_irk *irk_to_return = NULL; struct smp_irk *irk; /* Identity Address must be public or static random */ @@ -2432,13 +2489,23 @@ struct smp_irk *hci_find_irk_by_addr(struct hci_dev *hdev, bdaddr_t *bdaddr, list_for_each_entry_rcu(irk, &hdev->identity_resolving_keys, list) { if (addr_type == irk->addr_type && bacmp(bdaddr, &irk->bdaddr) == 0) { - rcu_read_unlock(); - return irk; + irk_to_return = irk; + goto done; } } + +done: + + if (irk_to_return && hci_is_blocked_key(hdev, HCI_BLOCKED_KEY_TYPE_IRK, + irk_to_return->val)) { + bt_dev_warn_ratelimited(hdev, "Identity key blocked for %pMR", + &irk_to_return->bdaddr); + irk_to_return = NULL; + } + rcu_read_unlock(); - return NULL; + return irk_to_return; } struct link_key *hci_add_link_key(struct hci_dev *hdev, struct hci_conn *conn, @@ -3244,6 +3311,7 @@ struct hci_dev *hci_alloc_dev(void) INIT_LIST_HEAD(&hdev->pend_le_reports); INIT_LIST_HEAD(&hdev->conn_hash.list); INIT_LIST_HEAD(&hdev->adv_instances); + INIT_LIST_HEAD(&hdev->blocked_keys); INIT_WORK(&hdev->rx_work, hci_rx_work); INIT_WORK(&hdev->cmd_work, hci_cmd_work); @@ -3443,6 +3511,7 @@ void hci_unregister_dev(struct hci_dev *hdev) hci_bdaddr_list_clear(&hdev->le_resolv_list); hci_conn_params_clear_all(hdev); hci_discovery_filter_clear(hdev); + 
hci_blocked_keys_clear(hdev); hci_dev_unlock(hdev); hci_dev_put(hdev); diff --git a/net/bluetooth/hci_debugfs.c b/net/bluetooth/hci_debugfs.c index 402e2cc54044..1c8100bc4e04 100644 --- a/net/bluetooth/hci_debugfs.c +++ b/net/bluetooth/hci_debugfs.c @@ -152,6 +152,21 @@ static int blacklist_show(struct seq_file *f, void *p) DEFINE_SHOW_ATTRIBUTE(blacklist); +static int blocked_keys_show(struct seq_file *f, void *p) +{ + struct hci_dev *hdev = f->private; + struct blocked_key *key; + + rcu_read_lock(); + list_for_each_entry_rcu(key, &hdev->blocked_keys, list) + seq_printf(f, "%u %*phN\n", key->type, 16, key->val); + rcu_read_unlock(); + + return 0; +} + +DEFINE_SHOW_ATTRIBUTE(blocked_keys); + static int uuids_show(struct seq_file *f, void *p) { struct hci_dev *hdev = f->private; @@ -308,6 +323,8 @@ void hci_debugfs_create_common(struct hci_dev *hdev) &device_list_fops); debugfs_create_file("blacklist", 0444, hdev->debugfs, hdev, &blacklist_fops); + debugfs_create_file("blocked_keys", 0444, hdev->debugfs, hdev, + &blocked_keys_fops); debugfs_create_file("uuids", 0444, hdev->debugfs, hdev, &uuids_fops); debugfs_create_file("remote_oob", 0400, hdev->debugfs, hdev, &remote_oob_fops); diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c index acb7c6d5643f..339c762eb6fd 100644 --- a/net/bluetooth/mgmt.c +++ b/net/bluetooth/mgmt.c @@ -106,6 +106,7 @@ static const u16 mgmt_commands[] = { MGMT_OP_START_LIMITED_DISCOVERY, MGMT_OP_READ_EXT_INFO, MGMT_OP_SET_APPEARANCE, + MGMT_OP_SET_BLOCKED_KEYS, }; static const u16 mgmt_events[] = { @@ -2341,6 +2342,14 @@ static int load_link_keys(struct sock *sk, struct hci_dev *hdev, void *data, for (i = 0; i < key_count; i++) { struct mgmt_link_key_info *key = &cp->keys[i]; + if (hci_is_blocked_key(hdev, + HCI_BLOCKED_KEY_TYPE_LINKKEY, + key->val)) { + bt_dev_warn(hdev, "Skipping blocked link key for %pMR", + &key->addr.bdaddr); + continue; + } + /* Always ignore debug keys and require a new pairing if * the user wants to use them. 
*/ @@ -3531,6 +3540,55 @@ unlock: return err; } +static int set_blocked_keys(struct sock *sk, struct hci_dev *hdev, void *data, + u16 len) +{ + int err = MGMT_STATUS_SUCCESS; + struct mgmt_cp_set_blocked_keys *keys = data; + const u16 max_key_count = ((U16_MAX - sizeof(*keys)) / + sizeof(struct mgmt_blocked_key_info)); + u16 key_count, expected_len; + int i; + + BT_DBG("request for %s", hdev->name); + + key_count = __le16_to_cpu(keys->key_count); + if (key_count > max_key_count) { + bt_dev_err(hdev, "too big key_count value %u", key_count); + return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_BLOCKED_KEYS, + MGMT_STATUS_INVALID_PARAMS); + } + + expected_len = struct_size(keys, keys, key_count); + if (expected_len != len) { + bt_dev_err(hdev, "expected %u bytes, got %u bytes", + expected_len, len); + return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_BLOCKED_KEYS, + MGMT_STATUS_INVALID_PARAMS); + } + + hci_dev_lock(hdev); + + hci_blocked_keys_clear(hdev); + + for (i = 0; i < keys->key_count; ++i) { + struct blocked_key *b = kzalloc(sizeof(*b), GFP_KERNEL); + + if (!b) { + err = MGMT_STATUS_NO_RESOURCES; + break; + } + + b->type = keys->keys[i].type; + memcpy(b->val, keys->keys[i].val, sizeof(b->val)); + list_add_rcu(&b->list, &hdev->blocked_keys); + } + hci_dev_unlock(hdev); + + return mgmt_cmd_complete(sk, hdev->id, MGMT_OP_SET_BLOCKED_KEYS, + err, NULL, 0); +} + static void read_local_oob_data_complete(struct hci_dev *hdev, u8 status, u16 opcode, struct sk_buff *skb) { @@ -5051,6 +5109,14 @@ static int load_irks(struct sock *sk, struct hci_dev *hdev, void *cp_data, for (i = 0; i < irk_count; i++) { struct mgmt_irk_info *irk = &cp->irks[i]; + if (hci_is_blocked_key(hdev, + HCI_BLOCKED_KEY_TYPE_IRK, + irk->val)) { + bt_dev_warn(hdev, "Skipping blocked IRK for %pMR", + &irk->addr.bdaddr); + continue; + } + hci_add_irk(hdev, &irk->addr.bdaddr, le_addr_type(irk->addr.type), irk->val, BDADDR_ANY); @@ -5134,6 +5200,14 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev, struct mgmt_ltk_info *key = &cp->keys[i]; u8 type, authenticated; + if (hci_is_blocked_key(hdev, + HCI_BLOCKED_KEY_TYPE_LTK, + key->val)) { + bt_dev_warn(hdev, "Skipping blocked LTK for %pMR", + &key->addr.bdaddr); + continue; + } + switch (key->type) { case MGMT_LTK_UNAUTHENTICATED: authenticated = 0x00; @@ -6914,6 +6988,8 @@ static const struct hci_mgmt_handler mgmt_handlers[] = { { set_appearance, MGMT_SET_APPEARANCE_SIZE }, { get_phy_configuration, MGMT_GET_PHY_CONFIGURATION_SIZE }, { set_phy_configuration, MGMT_SET_PHY_CONFIGURATION_SIZE }, + { set_blocked_keys, MGMT_OP_SET_BLOCKED_KEYS_SIZE, + HCI_MGMT_VAR_LEN }, }; void mgmt_index_added(struct hci_dev *hdev) diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c index 6b42be4b5861..4ece170c518e 100644 --- a/net/bluetooth/smp.c +++ b/net/bluetooth/smp.c @@ -2453,6 +2453,15 @@ static int smp_cmd_encrypt_info(struct l2cap_conn *conn, struct sk_buff *skb) if (skb->len < sizeof(*rp)) return SMP_INVALID_PARAMS; + /* Pairing is aborted if any blocked keys are distributed */ + if (hci_is_blocked_key(conn->hcon->hdev, HCI_BLOCKED_KEY_TYPE_LTK, + rp->ltk)) { + bt_dev_warn_ratelimited(conn->hcon->hdev, + "LTK blocked for %pMR", + &conn->hcon->dst); + return SMP_INVALID_PARAMS; + } + SMP_ALLOW_CMD(smp, SMP_CMD_MASTER_IDENT); skb_pull(skb, sizeof(*rp)); @@ -2509,6 +2518,15 @@ static int smp_cmd_ident_info(struct l2cap_conn *conn, struct sk_buff *skb) if (skb->len < sizeof(*info)) return SMP_INVALID_PARAMS; + /* Pairing is aborted if any blocked keys are distributed */ + 
if (hci_is_blocked_key(conn->hcon->hdev, HCI_BLOCKED_KEY_TYPE_IRK, + info->irk)) { + bt_dev_warn_ratelimited(conn->hcon->hdev, + "Identity key blocked for %pMR", + &conn->hcon->dst); + return SMP_INVALID_PARAMS; + } + SMP_ALLOW_CMD(smp, SMP_CMD_IDENT_ADDR_INFO); skb_pull(skb, sizeof(*info)); -- cgit v1.2.3-71-gd317 From 4de0fc599eb936d37542f819e931ba3fd8e435ca Mon Sep 17 00:00:00 2001 From: Luiz Augusto von Dentz Date: Wed, 15 Jan 2020 13:02:11 -0800 Subject: Bluetooth: Add definitions for CIS connections This adds the HCI definitions for handling CIS connections along with ISO data packets. Signed-off-by: Luiz Augusto von Dentz Signed-off-by: Marcel Holtmann --- include/net/bluetooth/hci.h | 159 +++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 158 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h index 07b6ecedc6ce..6293bdd7d862 100644 --- a/include/net/bluetooth/hci.h +++ b/include/net/bluetooth/hci.h @@ -27,6 +27,7 @@ #define HCI_MAX_ACL_SIZE 1024 #define HCI_MAX_SCO_SIZE 255 +#define HCI_MAX_ISO_SIZE 251 #define HCI_MAX_EVENT_SIZE 260 #define HCI_MAX_FRAME_SIZE (HCI_MAX_ACL_SIZE + 4) @@ -303,6 +304,7 @@ enum { #define HCI_ACLDATA_PKT 0x02 #define HCI_SCODATA_PKT 0x03 #define HCI_EVENT_PKT 0x04 +#define HCI_ISODATA_PKT 0x05 #define HCI_DIAG_PKT 0xf0 #define HCI_VENDOR_PKT 0xff @@ -352,6 +354,15 @@ enum { #define ACL_ACTIVE_BCAST 0x04 #define ACL_PICO_BCAST 0x08 +/* ISO PB flags */ +#define ISO_START 0x00 +#define ISO_CONT 0x01 +#define ISO_SINGLE 0x02 +#define ISO_END 0x03 + +/* ISO TS flags */ +#define ISO_TS 0x01 + /* Baseband links */ #define SCO_LINK 0x00 #define ACL_LINK 0x01 @@ -359,6 +370,7 @@ enum { /* Low Energy links do not have defined link type. Use invented one */ #define LE_LINK 0x80 #define AMP_LINK 0x81 +#define ISO_LINK 0x82 #define INVALID_LINK 0xff /* LMP features */ @@ -440,6 +452,8 @@ enum { #define HCI_LE_PHY_2M 0x01 #define HCI_LE_PHY_CODED 0x08 #define HCI_LE_CHAN_SEL_ALG2 0x40 +#define HCI_LE_CIS_MASTER 0x10 +#define HCI_LE_CIS_SLAVE 0x20 /* Connection modes */ #define HCI_CM_ACTIVE 0x0000 @@ -1718,6 +1732,86 @@ struct hci_cp_le_set_adv_set_rand_addr { bdaddr_t bdaddr; } __packed; +#define HCI_OP_LE_READ_BUFFER_SIZE_V2 0x2060 +struct hci_rp_le_read_buffer_size_v2 { + __u8 status; + __le16 acl_mtu; + __u8 acl_max_pkt; + __le16 iso_mtu; + __u8 iso_max_pkt; +} __packed; + +#define HCI_OP_LE_READ_ISO_TX_SYNC 0x2061 +struct hci_cp_le_read_iso_tx_sync { + __le16 handle; +} __packed; + +struct hci_rp_le_read_iso_tx_sync { + __u8 status; + __le16 handle; + __le16 seq; + __le32 imestamp; + __u8 offset[3]; +} __packed; + +#define HCI_OP_LE_SET_CIG_PARAMS 0x2062 +struct hci_cis_params { + __u8 cis_id; + __le16 m_sdu; + __le16 s_sdu; + __u8 m_phy; + __u8 s_phy; + __u8 m_rtn; + __u8 s_rtn; +} __packed; + +struct hci_cp_le_set_cig_params { + __u8 cig_id; + __u8 m_interval[3]; + __u8 s_interval[3]; + __u8 sca; + __u8 packing; + __u8 framing; + __le16 m_latency; + __le16 s_latency; + __u8 num_cis; + struct hci_cis_params cis[0]; +} __packed; + +struct hci_rp_le_set_cig_params { + __u8 status; + __u8 cig_id; + __u8 num_handles; + __le16 handle[0]; +} __packed; + +#define HCI_OP_LE_CREATE_CIS 0x2064 +struct hci_cis { + __le16 cis_handle; + __le16 acl_handle; +} __packed; + +struct hci_cp_le_create_cis { + __u8 num_cis; + struct hci_cis cis[0]; +} __packed; + +#define HCI_OP_LE_REMOVE_CIG 0x2065 +struct hci_cp_le_remove_cig { + __u8 cig_id; +} __packed; + +#define HCI_OP_LE_ACCEPT_CIS 0x2066
+struct hci_cp_le_accept_cis { + __le16 handle; +} __packed; + +#define HCI_OP_LE_REJECT_CIS 0x2067 +struct hci_cp_le_reject_cis { + __le16 handle; + __u8 reason; +} __packed; + /* ---- HCI Events ---- */ #define HCI_EV_INQUIRY_COMPLETE 0x01 @@ -2189,7 +2283,7 @@ struct hci_ev_le_direct_adv_info { #define HCI_EV_LE_PHY_UPDATE_COMPLETE 0x0c struct hci_ev_le_phy_update_complete { __u8 status; - __u16 handle; + __le16 handle; __u8 tx_phy; __u8 rx_phy; } __packed; @@ -2234,6 +2328,34 @@ struct hci_evt_le_ext_adv_set_term { __u8 num_evts; } __packed; +#define HCI_EVT_LE_CIS_ESTABLISHED 0x19 +struct hci_evt_le_cis_established { + __u8 status; + __le16 handle; + __u8 cig_sync_delay[3]; + __u8 cis_sync_delay[3]; + __u8 m_latency[3]; + __u8 s_latency[3]; + __u8 m_phy; + __u8 s_phy; + __u8 nse; + __u8 m_bn; + __u8 s_bn; + __u8 m_ft; + __u8 s_ft; + __le16 m_mtu; + __le16 s_mtu; + __le16 interval; +} __packed; + +#define HCI_EVT_LE_CIS_REQ 0x1a +struct hci_evt_le_cis_req { + __le16 acl_handle; + __le16 cis_handle; + __u8 cig_id; + __u8 cis_id; +} __packed; + #define HCI_EV_VENDOR 0xff /* Internal events generated by Bluetooth stack */ @@ -2262,6 +2384,7 @@ struct hci_ev_si_security { #define HCI_EVENT_HDR_SIZE 2 #define HCI_ACL_HDR_SIZE 4 #define HCI_SCO_HDR_SIZE 3 +#define HCI_ISO_HDR_SIZE 4 struct hci_command_hdr { __le16 opcode; /* OCF & OGF */ @@ -2283,6 +2406,30 @@ struct hci_sco_hdr { __u8 dlen; } __packed; +struct hci_iso_hdr { + __le16 handle; + __le16 dlen; + __u8 data[0]; +} __packed; + +/* ISO data packet status flags */ +#define HCI_ISO_STATUS_VALID 0x00 +#define HCI_ISO_STATUS_INVALID 0x01 +#define HCI_ISO_STATUS_NOP 0x02 + +#define HCI_ISO_DATA_HDR_SIZE 4 +struct hci_iso_data_hdr { + __le16 sn; + __le16 slen; +}; + +#define HCI_ISO_TS_DATA_HDR_SIZE 8 +struct hci_iso_ts_data_hdr { + __le32 ts; + __le16 sn; + __le16 slen; +}; + static inline struct hci_event_hdr *hci_event_hdr(const struct sk_buff *skb) { return (struct hci_event_hdr *) skb->data; @@ -2308,4 +2455,14 @@ static inline struct hci_sco_hdr *hci_sco_hdr(const struct sk_buff *skb) #define hci_handle(h) (h & 0x0fff) #define hci_flags(h) (h >> 12) +/* ISO handle and flags pack/unpack */ +#define hci_iso_flags_pb(f) (f & 0x0003) +#define hci_iso_flags_ts(f) ((f >> 2) & 0x0001) +#define hci_iso_flags_pack(pb, ts) ((pb & 0x03) | ((ts & 0x01) << 2)) + +/* ISO data length and flags pack/unpack */ +#define hci_iso_data_len_pack(h, f) ((__u16) ((h) | ((f) << 14))) +#define hci_iso_data_len(h) ((h) & 0x3fff) +#define hci_iso_data_flags(h) ((h) >> 14) + #endif /* __HCI_H */ -- cgit v1.2.3-71-gd317 From f9a619db7c137b7c2dec0414d8deb8ec762ae8f9 Mon Sep 17 00:00:00 2001 From: Luiz Augusto von Dentz Date: Wed, 15 Jan 2020 13:02:17 -0800 Subject: Bluetooth: monitor: Add support for ISO packets This enables passing ISO packets to the monitor socket. 
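For illustration, here is a minimal standalone sketch of how a monitor consumer might unpack the ISO header fields defined in the CIS patch above. It is not kernel or BlueZ code: the struct and function names are hypothetical, and it assumes the two little-endian header words have already been converted to host order; the bit masks mirror the hci_handle()/hci_flags() and hci_iso_* macros quoted above.

#include <stdint.h>

/* Hypothetical decoder mirroring the hci_iso_* macros above. */
struct iso_hdr_fields {
	uint16_t conn_handle;  /* low 12 bits of the handle word */
	uint8_t  pb_flag;      /* ISO_START/ISO_CONT/ISO_SINGLE/ISO_END */
	uint8_t  ts_flag;      /* set when a timestamped data header follows */
	uint16_t data_len;     /* low 14 bits of the dlen word */
};

static void iso_hdr_decode(uint16_t handle, uint16_t dlen,
			   struct iso_hdr_fields *out)
{
	uint16_t flags = handle >> 12;          /* hci_flags() */

	out->conn_handle = handle & 0x0fff;     /* hci_handle() */
	out->pb_flag = flags & 0x0003;          /* hci_iso_flags_pb() */
	out->ts_flag = (flags >> 2) & 0x0001;   /* hci_iso_flags_ts() */
	out->data_len = dlen & 0x3fff;          /* hci_iso_data_len() */
}

When ts_flag is set, the payload is preceded by the 8-byte hci_iso_ts_data_hdr (timestamp, sequence number, SDU length) rather than the 4-byte hci_iso_data_hdr, per the HCI_ISO_TS_DATA_HDR_SIZE definition above.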
Signed-off-by: Luiz Augusto von Dentz Signed-off-by: Marcel Holtmann --- include/net/bluetooth/hci_mon.h | 2 ++ net/bluetooth/hci_sock.c | 6 ++++++ 2 files changed, 8 insertions(+) (limited to 'include') diff --git a/include/net/bluetooth/hci_mon.h b/include/net/bluetooth/hci_mon.h index 240786b04a46..2d5fcda1bcd0 100644 --- a/include/net/bluetooth/hci_mon.h +++ b/include/net/bluetooth/hci_mon.h @@ -49,6 +49,8 @@ struct hci_mon_hdr { #define HCI_MON_CTRL_CLOSE 15 #define HCI_MON_CTRL_COMMAND 16 #define HCI_MON_CTRL_EVENT 17 +#define HCI_MON_ISO_TX_PKT 18 +#define HCI_MON_ISO_RX_PKT 19 struct hci_mon_new_index { __u8 type; diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c index 5d0ed28c0d3a..3ae508674ef7 100644 --- a/net/bluetooth/hci_sock.c +++ b/net/bluetooth/hci_sock.c @@ -324,6 +324,12 @@ void hci_send_to_monitor(struct hci_dev *hdev, struct sk_buff *skb) else opcode = cpu_to_le16(HCI_MON_SCO_TX_PKT); break; + case HCI_ISODATA_PKT: + if (bt_cb(skb)->incoming) + opcode = cpu_to_le16(HCI_MON_ISO_RX_PKT); + else + opcode = cpu_to_le16(HCI_MON_ISO_TX_PKT); + break; case HCI_DIAG_PKT: opcode = cpu_to_le16(HCI_MON_VENDOR_DIAG); break; -- cgit v1.2.3-71-gd317 From cb4d03ab499d4c040f4ab6fd4389d2b49f42b5a5 Mon Sep 17 00:00:00 2001 From: Brian Vazquez Date: Wed, 15 Jan 2020 10:43:01 -0800 Subject: bpf: Add generic support for lookup batch op This commit introduces generic support for the bpf_map_lookup_batch operation. This implementation can be used by almost all the bpf maps since its core implementation relies on the existing map_get_next_key and map_lookup_elem. The bpf syscall subcommand introduced is: BPF_MAP_LOOKUP_BATCH The UAPI attribute is: struct { /* struct used by BPF_MAP_*_BATCH commands */ __aligned_u64 in_batch; /* start batch, * NULL to start from beginning */ __aligned_u64 out_batch; /* output: next start batch */ __aligned_u64 keys; __aligned_u64 values; __u32 count; /* input/output: * input: # of key/value * elements * output: # of filled elements */ __u32 map_fd; __u64 elem_flags; __u64 flags; } batch; in_batch/out_batch are opaque values used to communicate between user and kernel space; in_batch/out_batch must be of key_size length. To start iterating from the beginning, in_batch must be NULL; count is the # of key/value elements to retrieve. Note that the 'keys' buffer must be a buffer of key_size * count size and the 'values' buffer must be value_size * count, where value_size must be aligned to 8 bytes by userspace if it's dealing with percpu maps. 'count' will contain the number of keys/values successfully retrieved. Note that 'count' is an input/output variable and it can contain a lower value after a call. If there are no more entries to retrieve, ENOENT will be returned. If the error is ENOENT, count might be > 0 in case it copied some values but there were no more entries to retrieve. Note that if the return code is an error and not -EFAULT, count indicates the number of elements successfully processed.
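As a usage illustration of the iteration loop described above, here is a minimal user-space sketch. It assumes uapi headers that already define BPF_MAP_LOOKUP_BATCH, a map whose key_size is 4 and value_size is 8, and dump_map() itself is a hypothetical name, not part of any library.

#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

/* Sketch: iterate all entries of map_fd in chunks of 128 elements. */
static int dump_map(int map_fd)
{
	__u32 keys[128];
	__u64 values[128];
	__u32 in_batch = 0, out_batch = 0;	/* key_size (4) bytes each */
	int started = 0, err;

	for (;;) {
		union bpf_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.batch.map_fd = map_fd;
		/* NULL in_batch on the first call starts from the beginning */
		attr.batch.in_batch = started ?
				      (__u64)(unsigned long)&in_batch : 0;
		attr.batch.out_batch = (__u64)(unsigned long)&out_batch;
		attr.batch.keys = (__u64)(unsigned long)keys;
		attr.batch.values = (__u64)(unsigned long)values;
		attr.batch.count = 128;	/* in: requested, out: filled */

		err = syscall(__NR_bpf, BPF_MAP_LOOKUP_BATCH,
			      &attr, sizeof(attr));
		/* attr.batch.count entries are valid here, even on ENOENT */
		if (err && errno != ENOENT)
			return -1;
		if (err)
			return 0;	/* ENOENT: iteration is complete */
		in_batch = out_batch;	/* resume from the returned cursor */
		started = 1;
	}
}

Note how out_batch is fed back as in_batch on the next call, exactly the opaque-cursor semantics the commit message describes.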
Suggested-by: Stanislav Fomichev Signed-off-by: Brian Vazquez Signed-off-by: Yonghong Song Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/20200115184308.162644-3-brianvv@google.com --- include/linux/bpf.h | 5 ++ include/uapi/linux/bpf.h | 18 ++++++ kernel/bpf/syscall.c | 160 +++++++++++++++++++++++++++++++++++++++++++++-- 3 files changed, 179 insertions(+), 4 deletions(-) (limited to 'include') diff --git a/include/linux/bpf.h b/include/linux/bpf.h index aed2bc39d72b..807744ecaa5a 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -44,6 +44,8 @@ struct bpf_map_ops { int (*map_get_next_key)(struct bpf_map *map, void *key, void *next_key); void (*map_release_uref)(struct bpf_map *map); void *(*map_lookup_elem_sys_only)(struct bpf_map *map, void *key); + int (*map_lookup_batch)(struct bpf_map *map, const union bpf_attr *attr, + union bpf_attr __user *uattr); /* funcs callable from userspace and from eBPF programs */ void *(*map_lookup_elem)(struct bpf_map *map, void *key); @@ -982,6 +984,9 @@ void *bpf_map_area_alloc(u64 size, int numa_node); void *bpf_map_area_mmapable_alloc(u64 size, int numa_node); void bpf_map_area_free(void *base); void bpf_map_init_from_attr(struct bpf_map *map, union bpf_attr *attr); +int generic_map_lookup_batch(struct bpf_map *map, + const union bpf_attr *attr, + union bpf_attr __user *uattr); extern int sysctl_unprivileged_bpf_disabled; diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index a88837d4dd18..0721b2c6bb8c 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -107,6 +107,7 @@ enum bpf_cmd { BPF_MAP_LOOKUP_AND_DELETE_ELEM, BPF_MAP_FREEZE, BPF_BTF_GET_NEXT_ID, + BPF_MAP_LOOKUP_BATCH, }; enum bpf_map_type { @@ -420,6 +421,23 @@ union bpf_attr { __u64 flags; }; + struct { /* struct used by BPF_MAP_*_BATCH commands */ + __aligned_u64 in_batch; /* start batch, + * NULL to start from beginning + */ + __aligned_u64 out_batch; /* output: next start batch */ + __aligned_u64 keys; + __aligned_u64 values; + __u32 count; /* input/output: + * input: # of key/value + * elements + * output: # of filled elements + */ + __u32 map_fd; + __u64 elem_flags; + __u64 flags; + } batch; + struct { /* anonymous struct used by BPF_PROG_LOAD command */ __u32 prog_type; /* one of enum bpf_prog_type */ __u32 insn_cnt; diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 08b0b6e40454..d604ddbb1afb 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -219,10 +219,8 @@ static int bpf_map_copy_value(struct bpf_map *map, void *key, void *value, void *ptr; int err; - if (bpf_map_is_dev_bound(map)) { - err = bpf_map_offload_lookup_elem(map, key, value); - return err; - } + if (bpf_map_is_dev_bound(map)) + return bpf_map_offload_lookup_elem(map, key, value); preempt_disable(); this_cpu_inc(bpf_prog_active); @@ -1220,6 +1218,109 @@ err_put: return err; } +#define MAP_LOOKUP_RETRIES 3 + +int generic_map_lookup_batch(struct bpf_map *map, + const union bpf_attr *attr, + union bpf_attr __user *uattr) +{ + void __user *uobatch = u64_to_user_ptr(attr->batch.out_batch); + void __user *ubatch = u64_to_user_ptr(attr->batch.in_batch); + void __user *values = u64_to_user_ptr(attr->batch.values); + void __user *keys = u64_to_user_ptr(attr->batch.keys); + void *buf, *buf_prevkey, *prev_key, *key, *value; + int err, retry = MAP_LOOKUP_RETRIES; + u32 value_size, cp, max_count; + bool first_key = false; + + if (attr->batch.elem_flags & ~BPF_F_LOCK) + return -EINVAL; + + if ((attr->batch.elem_flags & BPF_F_LOCK) && + 
!map_value_has_spin_lock(map)) + return -EINVAL; + + value_size = bpf_map_value_size(map); + + max_count = attr->batch.count; + if (!max_count) + return 0; + + if (put_user(0, &uattr->batch.count)) + return -EFAULT; + + buf_prevkey = kmalloc(map->key_size, GFP_USER | __GFP_NOWARN); + if (!buf_prevkey) + return -ENOMEM; + + buf = kmalloc(map->key_size + value_size, GFP_USER | __GFP_NOWARN); + if (!buf) { + kvfree(buf_prevkey); + return -ENOMEM; + } + + err = -EFAULT; + first_key = false; + prev_key = NULL; + if (ubatch && copy_from_user(buf_prevkey, ubatch, map->key_size)) + goto free_buf; + key = buf; + value = key + map->key_size; + if (ubatch) + prev_key = buf_prevkey; + + for (cp = 0; cp < max_count;) { + rcu_read_lock(); + err = map->ops->map_get_next_key(map, prev_key, key); + rcu_read_unlock(); + if (err) + break; + err = bpf_map_copy_value(map, key, value, + attr->batch.elem_flags); + + if (err == -ENOENT) { + if (retry) { + retry--; + continue; + } + err = -EINTR; + break; + } + + if (err) + goto free_buf; + + if (copy_to_user(keys + cp * map->key_size, key, + map->key_size)) { + err = -EFAULT; + goto free_buf; + } + if (copy_to_user(values + cp * value_size, value, value_size)) { + err = -EFAULT; + goto free_buf; + } + + if (!prev_key) + prev_key = buf_prevkey; + + swap(prev_key, key); + retry = MAP_LOOKUP_RETRIES; + cp++; + } + + if (err == -EFAULT) + goto free_buf; + + if ((copy_to_user(&uattr->batch.count, &cp, sizeof(cp)) || + (cp && copy_to_user(uobatch, prev_key, map->key_size)))) + err = -EFAULT; + +free_buf: + kfree(buf_prevkey); + kfree(buf); + return err; +} + #define BPF_MAP_LOOKUP_AND_DELETE_ELEM_LAST_FIELD value static int map_lookup_and_delete_elem(union bpf_attr *attr) @@ -3076,6 +3177,54 @@ out: return err; } +#define BPF_MAP_BATCH_LAST_FIELD batch.flags + +#define BPF_DO_BATCH(fn) \ + do { \ + if (!fn) { \ + err = -ENOTSUPP; \ + goto err_put; \ + } \ + err = fn(map, attr, uattr); \ + } while (0) + +static int bpf_map_do_batch(const union bpf_attr *attr, + union bpf_attr __user *uattr, + int cmd) +{ + struct bpf_map *map; + int err, ufd; + struct fd f; + + if (CHECK_ATTR(BPF_MAP_BATCH)) + return -EINVAL; + + ufd = attr->batch.map_fd; + f = fdget(ufd); + map = __bpf_map_get(f); + if (IS_ERR(map)) + return PTR_ERR(map); + + if (cmd == BPF_MAP_LOOKUP_BATCH && + !(map_get_sys_perms(map, f) & FMODE_CAN_READ)) { + err = -EPERM; + goto err_put; + } + + if (cmd != BPF_MAP_LOOKUP_BATCH && + !(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) { + err = -EPERM; + goto err_put; + } + + if (cmd == BPF_MAP_LOOKUP_BATCH) + BPF_DO_BATCH(map->ops->map_lookup_batch); + +err_put: + fdput(f); + return err; +} + SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, size) { union bpf_attr attr = {}; @@ -3173,6 +3322,9 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz case BPF_MAP_LOOKUP_AND_DELETE_ELEM: err = map_lookup_and_delete_elem(&attr); break; + case BPF_MAP_LOOKUP_BATCH: + err = bpf_map_do_batch(&attr, uattr, BPF_MAP_LOOKUP_BATCH); + break; default: err = -EINVAL; break; -- cgit v1.2.3-71-gd317 From aa2e93b8e58e18442edfb2427446732415bc215e Mon Sep 17 00:00:00 2001 From: Brian Vazquez Date: Wed, 15 Jan 2020 10:43:02 -0800 Subject: bpf: Add generic support for update and delete batch ops This commit adds generic support for update and delete batch ops that can be used for almost all the bpf maps. 
These commands share the same UAPI attr that the lookup and lookup_and_delete batch ops use, and the syscall commands are: BPF_MAP_UPDATE_BATCH BPF_MAP_DELETE_BATCH The main difference between the update/delete and lookup batch ops is that for update/delete the keys/values must be specified by userspace, and because of that neither in_batch nor out_batch is used. Suggested-by: Stanislav Fomichev Signed-off-by: Brian Vazquez Signed-off-by: Yonghong Song Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/20200115184308.162644-4-brianvv@google.com --- include/linux/bpf.h | 10 +++++ include/uapi/linux/bpf.h | 2 + kernel/bpf/syscall.c | 115 +++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 127 insertions(+) (limited to 'include') diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 807744ecaa5a..05466ad6cf1c 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -46,6 +46,10 @@ struct bpf_map_ops { void *(*map_lookup_elem_sys_only)(struct bpf_map *map, void *key); int (*map_lookup_batch)(struct bpf_map *map, const union bpf_attr *attr, union bpf_attr __user *uattr); + int (*map_update_batch)(struct bpf_map *map, const union bpf_attr *attr, + union bpf_attr __user *uattr); + int (*map_delete_batch)(struct bpf_map *map, const union bpf_attr *attr, + union bpf_attr __user *uattr); /* funcs callable from userspace and from eBPF programs */ void *(*map_lookup_elem)(struct bpf_map *map, void *key); @@ -987,6 +991,12 @@ void bpf_map_init_from_attr(struct bpf_map *map, union bpf_attr *attr); int generic_map_lookup_batch(struct bpf_map *map, const union bpf_attr *attr, union bpf_attr __user *uattr); +int generic_map_update_batch(struct bpf_map *map, + const union bpf_attr *attr, + union bpf_attr __user *uattr); +int generic_map_delete_batch(struct bpf_map *map, + const union bpf_attr *attr, + union bpf_attr __user *uattr); extern int sysctl_unprivileged_bpf_disabled; diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 0721b2c6bb8c..d5320b0be459 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -108,6 +108,8 @@ enum bpf_cmd { BPF_MAP_FREEZE, BPF_BTF_GET_NEXT_ID, BPF_MAP_LOOKUP_BATCH, + BPF_MAP_UPDATE_BATCH, + BPF_MAP_DELETE_BATCH, }; enum bpf_map_type { diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index d604ddbb1afb..ce8244d1ba99 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -1218,6 +1218,111 @@ err_put: return err; } +int generic_map_delete_batch(struct bpf_map *map, + const union bpf_attr *attr, + union bpf_attr __user *uattr) +{ + void __user *keys = u64_to_user_ptr(attr->batch.keys); + u32 cp, max_count; + int err = 0; + void *key; + + if (attr->batch.elem_flags & ~BPF_F_LOCK) + return -EINVAL; + + if ((attr->batch.elem_flags & BPF_F_LOCK) && + !map_value_has_spin_lock(map)) { + return -EINVAL; + } + + max_count = attr->batch.count; + if (!max_count) + return 0; + + for (cp = 0; cp < max_count; cp++) { + key = __bpf_copy_key(keys + cp * map->key_size, map->key_size); + if (IS_ERR(key)) { + err = PTR_ERR(key); + break; + } + + if (bpf_map_is_dev_bound(map)) { + err = bpf_map_offload_delete_elem(map, key); + break; + } + + preempt_disable(); + __this_cpu_inc(bpf_prog_active); + rcu_read_lock(); + err = map->ops->map_delete_elem(map, key); + rcu_read_unlock(); + __this_cpu_dec(bpf_prog_active); + preempt_enable(); + maybe_wait_bpf_programs(map); + if (err) + break; + } + if (copy_to_user(&uattr->batch.count, &cp, sizeof(cp))) + err = -EFAULT; + return err; +} + +int generic_map_update_batch(struct
bpf_map *map, + const union bpf_attr *attr, + union bpf_attr __user *uattr) +{ + void __user *values = u64_to_user_ptr(attr->batch.values); + void __user *keys = u64_to_user_ptr(attr->batch.keys); + u32 value_size, cp, max_count; + int ufd = attr->map_fd; + void *key, *value; + struct fd f; + int err = 0; + + f = fdget(ufd); + if (attr->batch.elem_flags & ~BPF_F_LOCK) + return -EINVAL; + + if ((attr->batch.elem_flags & BPF_F_LOCK) && + !map_value_has_spin_lock(map)) { + return -EINVAL; + } + + value_size = bpf_map_value_size(map); + + max_count = attr->batch.count; + if (!max_count) + return 0; + + value = kmalloc(value_size, GFP_USER | __GFP_NOWARN); + if (!value) + return -ENOMEM; + + for (cp = 0; cp < max_count; cp++) { + key = __bpf_copy_key(keys + cp * map->key_size, map->key_size); + if (IS_ERR(key)) { + err = PTR_ERR(key); + break; + } + err = -EFAULT; + if (copy_from_user(value, values + cp * value_size, value_size)) + break; + + err = bpf_map_update_value(map, f, key, value, + attr->batch.elem_flags); + + if (err) + break; + } + + if (copy_to_user(&uattr->batch.count, &cp, sizeof(cp))) + err = -EFAULT; + + kfree(value); + kfree(key); + return err; +} + #define MAP_LOOKUP_RETRIES 3 int generic_map_lookup_batch(struct bpf_map *map, @@ -3219,6 +3324,10 @@ static int bpf_map_do_batch(const union bpf_attr *attr, if (cmd == BPF_MAP_LOOKUP_BATCH) BPF_DO_BATCH(map->ops->map_lookup_batch); + else if (cmd == BPF_MAP_UPDATE_BATCH) + BPF_DO_BATCH(map->ops->map_update_batch); + else + BPF_DO_BATCH(map->ops->map_delete_batch); err_put: fdput(f); @@ -3325,6 +3434,12 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz case BPF_MAP_LOOKUP_BATCH: err = bpf_map_do_batch(&attr, uattr, BPF_MAP_LOOKUP_BATCH); break; + case BPF_MAP_UPDATE_BATCH: + err = bpf_map_do_batch(&attr, uattr, BPF_MAP_UPDATE_BATCH); + break; + case BPF_MAP_DELETE_BATCH: + err = bpf_map_do_batch(&attr, uattr, BPF_MAP_DELETE_BATCH); + break; default: err = -EINVAL; break; -- cgit v1.2.3-71-gd317 From 057996380a42bb64ccc04383cfa9c0ace4ea11f0 Mon Sep 17 00:00:00 2001 From: Yonghong Song Date: Wed, 15 Jan 2020 10:43:04 -0800 Subject: bpf: Add batch ops to all htab bpf map htab can't use the generic batch support due to some problematic behaviours inherent to the data structure: while iterating the bpf map, a concurrent program might delete the next entry that the batch was about to use, and in that case there's no easy way to retrieve the next entry. The issue has been discussed multiple times (see [1] and [2]). The only way the hash map can be traversed without the problem exposed above is by making sure that entire buckets are traversed at a time. This commit implements those strict requirements for htab; the implementation follows the same interaction as the generic support, with some exceptions: - If the keys/values buffers are not big enough to traverse a bucket, ENOSPC will be returned. - out_batch contains the value of the next bucket in the iteration, not the next key, but this is transparent for the user since the user should never use out_batch for anything other than bpf batch syscalls. This commit implements BPF_MAP_LOOKUP_BATCH and adds support for the new command BPF_MAP_LOOKUP_AND_DELETE_BATCH. Note that for the update/delete batch ops it is possible to use the generic implementations.
[1] https://lore.kernel.org/bpf/20190724165803.87470-1-brianvv@google.com/ [2] https://lore.kernel.org/bpf/20190906225434.3635421-1-yhs@fb.com/ Signed-off-by: Yonghong Song Signed-off-by: Brian Vazquez Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/20200115184308.162644-6-brianvv@google.com --- include/linux/bpf.h | 3 + include/uapi/linux/bpf.h | 1 + kernel/bpf/hashtab.c | 264 +++++++++++++++++++++++++++++++++++++++++++++++ kernel/bpf/syscall.c | 9 +- 4 files changed, 276 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 05466ad6cf1c..3517e32149a4 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -46,6 +46,9 @@ struct bpf_map_ops { void *(*map_lookup_elem_sys_only)(struct bpf_map *map, void *key); int (*map_lookup_batch)(struct bpf_map *map, const union bpf_attr *attr, union bpf_attr __user *uattr); + int (*map_lookup_and_delete_batch)(struct bpf_map *map, + const union bpf_attr *attr, + union bpf_attr __user *uattr); int (*map_update_batch)(struct bpf_map *map, const union bpf_attr *attr, union bpf_attr __user *uattr); int (*map_delete_batch)(struct bpf_map *map, const union bpf_attr *attr, diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index d5320b0be459..033d90a2282d 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -108,6 +108,7 @@ enum bpf_cmd { BPF_MAP_FREEZE, BPF_BTF_GET_NEXT_ID, BPF_MAP_LOOKUP_BATCH, + BPF_MAP_LOOKUP_AND_DELETE_BATCH, BPF_MAP_UPDATE_BATCH, BPF_MAP_DELETE_BATCH, }; diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c index 22066a62c8c9..2d182c4ee9d9 100644 --- a/kernel/bpf/hashtab.c +++ b/kernel/bpf/hashtab.c @@ -17,6 +17,16 @@ (BPF_F_NO_PREALLOC | BPF_F_NO_COMMON_LRU | BPF_F_NUMA_NODE | \ BPF_F_ACCESS_MASK | BPF_F_ZERO_SEED) +#define BATCH_OPS(_name) \ + .map_lookup_batch = \ + _name##_map_lookup_batch, \ + .map_lookup_and_delete_batch = \ + _name##_map_lookup_and_delete_batch, \ + .map_update_batch = \ + generic_map_update_batch, \ + .map_delete_batch = \ + generic_map_delete_batch + struct bucket { struct hlist_nulls_head head; raw_spinlock_t lock; @@ -1232,6 +1242,256 @@ static void htab_map_seq_show_elem(struct bpf_map *map, void *key, rcu_read_unlock(); } +static int +__htab_map_lookup_and_delete_batch(struct bpf_map *map, + const union bpf_attr *attr, + union bpf_attr __user *uattr, + bool do_delete, bool is_lru_map, + bool is_percpu) +{ + struct bpf_htab *htab = container_of(map, struct bpf_htab, map); + u32 bucket_cnt, total, key_size, value_size, roundup_key_size; + void *keys = NULL, *values = NULL, *value, *dst_key, *dst_val; + void __user *uvalues = u64_to_user_ptr(attr->batch.values); + void __user *ukeys = u64_to_user_ptr(attr->batch.keys); + void *ubatch = u64_to_user_ptr(attr->batch.in_batch); + u32 batch, max_count, size, bucket_size; + u64 elem_map_flags, map_flags; + struct hlist_nulls_head *head; + struct hlist_nulls_node *n; + unsigned long flags; + struct htab_elem *l; + struct bucket *b; + int ret = 0; + + elem_map_flags = attr->batch.elem_flags; + if ((elem_map_flags & ~BPF_F_LOCK) || + ((elem_map_flags & BPF_F_LOCK) && !map_value_has_spin_lock(map))) + return -EINVAL; + + map_flags = attr->batch.flags; + if (map_flags) + return -EINVAL; + + max_count = attr->batch.count; + if (!max_count) + return 0; + + if (put_user(0, &uattr->batch.count)) + return -EFAULT; + + batch = 0; + if (ubatch && copy_from_user(&batch, ubatch, sizeof(batch))) + return -EFAULT; + + if (batch >= htab->n_buckets) + return 
-ENOENT; + + key_size = htab->map.key_size; + roundup_key_size = round_up(htab->map.key_size, 8); + value_size = htab->map.value_size; + size = round_up(value_size, 8); + if (is_percpu) + value_size = size * num_possible_cpus(); + total = 0; + /* while experimenting with hash tables with sizes ranging from 10 to + * 1000, it was observed that a bucket can have upto 5 entries. + */ + bucket_size = 5; + +alloc: + /* We cannot do copy_from_user or copy_to_user inside + * the rcu_read_lock. Allocate enough space here. + */ + keys = kvmalloc(key_size * bucket_size, GFP_USER | __GFP_NOWARN); + values = kvmalloc(value_size * bucket_size, GFP_USER | __GFP_NOWARN); + if (!keys || !values) { + ret = -ENOMEM; + goto after_loop; + } + +again: + preempt_disable(); + this_cpu_inc(bpf_prog_active); + rcu_read_lock(); +again_nocopy: + dst_key = keys; + dst_val = values; + b = &htab->buckets[batch]; + head = &b->head; + raw_spin_lock_irqsave(&b->lock, flags); + + bucket_cnt = 0; + hlist_nulls_for_each_entry_rcu(l, n, head, hash_node) + bucket_cnt++; + + if (bucket_cnt > (max_count - total)) { + if (total == 0) + ret = -ENOSPC; + raw_spin_unlock_irqrestore(&b->lock, flags); + rcu_read_unlock(); + this_cpu_dec(bpf_prog_active); + preempt_enable(); + goto after_loop; + } + + if (bucket_cnt > bucket_size) { + bucket_size = bucket_cnt; + raw_spin_unlock_irqrestore(&b->lock, flags); + rcu_read_unlock(); + this_cpu_dec(bpf_prog_active); + preempt_enable(); + kvfree(keys); + kvfree(values); + goto alloc; + } + + hlist_nulls_for_each_entry_safe(l, n, head, hash_node) { + memcpy(dst_key, l->key, key_size); + + if (is_percpu) { + int off = 0, cpu; + void __percpu *pptr; + + pptr = htab_elem_get_ptr(l, map->key_size); + for_each_possible_cpu(cpu) { + bpf_long_memcpy(dst_val + off, + per_cpu_ptr(pptr, cpu), size); + off += size; + } + } else { + value = l->key + roundup_key_size; + if (elem_map_flags & BPF_F_LOCK) + copy_map_value_locked(map, dst_val, value, + true); + else + copy_map_value(map, dst_val, value); + check_and_init_map_lock(map, dst_val); + } + if (do_delete) { + hlist_nulls_del_rcu(&l->hash_node); + if (is_lru_map) + bpf_lru_push_free(&htab->lru, &l->lru_node); + else + free_htab_elem(htab, l); + } + dst_key += key_size; + dst_val += value_size; + } + + raw_spin_unlock_irqrestore(&b->lock, flags); + /* If we are not copying data, we can go to next bucket and avoid + * unlocking the rcu. 
+ */ + if (!bucket_cnt && (batch + 1 < htab->n_buckets)) { + batch++; + goto again_nocopy; + } + + rcu_read_unlock(); + this_cpu_dec(bpf_prog_active); + preempt_enable(); + if (bucket_cnt && (copy_to_user(ukeys + total * key_size, keys, + key_size * bucket_cnt) || + copy_to_user(uvalues + total * value_size, values, + value_size * bucket_cnt))) { + ret = -EFAULT; + goto after_loop; + } + + total += bucket_cnt; + batch++; + if (batch >= htab->n_buckets) { + ret = -ENOENT; + goto after_loop; + } + goto again; + +after_loop: + if (ret == -EFAULT) + goto out; + + /* copy # of entries and next batch */ + ubatch = u64_to_user_ptr(attr->batch.out_batch); + if (copy_to_user(ubatch, &batch, sizeof(batch)) || + put_user(total, &uattr->batch.count)) + ret = -EFAULT; + +out: + kvfree(keys); + kvfree(values); + return ret; +} + +static int +htab_percpu_map_lookup_batch(struct bpf_map *map, const union bpf_attr *attr, + union bpf_attr __user *uattr) +{ + return __htab_map_lookup_and_delete_batch(map, attr, uattr, false, + false, true); +} + +static int +htab_percpu_map_lookup_and_delete_batch(struct bpf_map *map, + const union bpf_attr *attr, + union bpf_attr __user *uattr) +{ + return __htab_map_lookup_and_delete_batch(map, attr, uattr, true, + false, true); +} + +static int +htab_map_lookup_batch(struct bpf_map *map, const union bpf_attr *attr, + union bpf_attr __user *uattr) +{ + return __htab_map_lookup_and_delete_batch(map, attr, uattr, false, + false, false); +} + +static int +htab_map_lookup_and_delete_batch(struct bpf_map *map, + const union bpf_attr *attr, + union bpf_attr __user *uattr) +{ + return __htab_map_lookup_and_delete_batch(map, attr, uattr, true, + false, false); +} + +static int +htab_lru_percpu_map_lookup_batch(struct bpf_map *map, + const union bpf_attr *attr, + union bpf_attr __user *uattr) +{ + return __htab_map_lookup_and_delete_batch(map, attr, uattr, false, + true, true); +} + +static int +htab_lru_percpu_map_lookup_and_delete_batch(struct bpf_map *map, + const union bpf_attr *attr, + union bpf_attr __user *uattr) +{ + return __htab_map_lookup_and_delete_batch(map, attr, uattr, true, + true, true); +} + +static int +htab_lru_map_lookup_batch(struct bpf_map *map, const union bpf_attr *attr, + union bpf_attr __user *uattr) +{ + return __htab_map_lookup_and_delete_batch(map, attr, uattr, false, + true, false); +} + +static int +htab_lru_map_lookup_and_delete_batch(struct bpf_map *map, + const union bpf_attr *attr, + union bpf_attr __user *uattr) +{ + return __htab_map_lookup_and_delete_batch(map, attr, uattr, true, + true, false); +} + const struct bpf_map_ops htab_map_ops = { .map_alloc_check = htab_map_alloc_check, .map_alloc = htab_map_alloc, @@ -1242,6 +1502,7 @@ const struct bpf_map_ops htab_map_ops = { .map_delete_elem = htab_map_delete_elem, .map_gen_lookup = htab_map_gen_lookup, .map_seq_show_elem = htab_map_seq_show_elem, + BATCH_OPS(htab), }; const struct bpf_map_ops htab_lru_map_ops = { @@ -1255,6 +1516,7 @@ const struct bpf_map_ops htab_lru_map_ops = { .map_delete_elem = htab_lru_map_delete_elem, .map_gen_lookup = htab_lru_map_gen_lookup, .map_seq_show_elem = htab_map_seq_show_elem, + BATCH_OPS(htab_lru), }; /* Called from eBPF program */ @@ -1368,6 +1630,7 @@ const struct bpf_map_ops htab_percpu_map_ops = { .map_update_elem = htab_percpu_map_update_elem, .map_delete_elem = htab_map_delete_elem, .map_seq_show_elem = htab_percpu_map_seq_show_elem, + BATCH_OPS(htab_percpu), }; const struct bpf_map_ops htab_lru_percpu_map_ops = { @@ -1379,6 +1642,7 @@ const struct 
bpf_map_ops htab_lru_percpu_map_ops = { .map_update_elem = htab_lru_percpu_map_update_elem, .map_delete_elem = htab_lru_map_delete_elem, .map_seq_show_elem = htab_percpu_map_seq_show_elem, + BATCH_OPS(htab_lru_percpu), }; static int fd_htab_map_alloc_check(union bpf_attr *attr) diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index ce8244d1ba99..0d94d361379c 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -3310,7 +3310,8 @@ static int bpf_map_do_batch(const union bpf_attr *attr, if (IS_ERR(map)) return PTR_ERR(map); - if (cmd == BPF_MAP_LOOKUP_BATCH && + if ((cmd == BPF_MAP_LOOKUP_BATCH || + cmd == BPF_MAP_LOOKUP_AND_DELETE_BATCH) && !(map_get_sys_perms(map, f) & FMODE_CAN_READ)) { err = -EPERM; goto err_put; @@ -3324,6 +3325,8 @@ static int bpf_map_do_batch(const union bpf_attr *attr, if (cmd == BPF_MAP_LOOKUP_BATCH) BPF_DO_BATCH(map->ops->map_lookup_batch); + else if (cmd == BPF_MAP_LOOKUP_AND_DELETE_BATCH) + BPF_DO_BATCH(map->ops->map_lookup_and_delete_batch); else if (cmd == BPF_MAP_UPDATE_BATCH) BPF_DO_BATCH(map->ops->map_update_batch); else @@ -3434,6 +3437,10 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz case BPF_MAP_LOOKUP_BATCH: err = bpf_map_do_batch(&attr, uattr, BPF_MAP_LOOKUP_BATCH); break; + case BPF_MAP_LOOKUP_AND_DELETE_BATCH: + err = bpf_map_do_batch(&attr, uattr, + BPF_MAP_LOOKUP_AND_DELETE_BATCH); + break; case BPF_MAP_UPDATE_BATCH: err = bpf_map_do_batch(&attr, uattr, BPF_MAP_UPDATE_BATCH); break; -- cgit v1.2.3-71-gd317 From c320e527e1548305f31d95ec405140b04aed25f5 Mon Sep 17 00:00:00 2001 From: Moni Shoua Date: Wed, 15 Jan 2020 14:43:31 +0200 Subject: IB: Allow calls to ib_umem_get from kernel ULPs So far the assumption was that ib_umem_get() and ib_umem_odp_get() are called from flows that start in UVERBS and therefore have a user context. This assumption restricts flows that are initiated by ULPs and need the service that ib_umem_get() provides. This patch changes ib_umem_get() and ib_umem_odp_get() to take the IB device directly, relying on the fact that both UVERBS and ULPs set that field correctly.
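To illustrate the new calling convention, here is a hedged sketch of a kernel ULP pinning a buffer directly against the ib_device; only the ib_umem_get() signature comes from this patch, and the surrounding helper is hypothetical.

#include <rdma/ib_umem.h>
#include <rdma/ib_verbs.h>

/* Hypothetical ULP helper: no uverbs udata is needed any more,
 * the ib_device is passed directly. */
static struct ib_umem *ulp_pin_buffer(struct ib_device *ibdev,
				      unsigned long addr, size_t len)
{
	struct ib_umem *umem;

	umem = ib_umem_get(ibdev, addr, len, IB_ACCESS_LOCAL_WRITE);
	if (IS_ERR(umem))
		return umem;	/* ERR_PTR, e.g. -EINVAL on addr+len overflow */

	/* ... map umem->sg_head.sgl into the HCA, and release the
	 * pinned region later with ib_umem_release() ... */
	return umem;
}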
Reviewed-by: Guy Levi Signed-off-by: Moni Shoua Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/umem.c | 27 +++++++++---------------- drivers/infiniband/core/umem_odp.c | 29 +++++++-------------------- drivers/infiniband/hw/bnxt_re/ib_verbs.c | 12 ++++++----- drivers/infiniband/hw/cxgb4/mem.c | 2 +- drivers/infiniband/hw/efa/efa_verbs.c | 2 +- drivers/infiniband/hw/hns/hns_roce_cq.c | 2 +- drivers/infiniband/hw/hns/hns_roce_db.c | 3 ++- drivers/infiniband/hw/hns/hns_roce_mr.c | 4 ++-- drivers/infiniband/hw/hns/hns_roce_qp.c | 2 +- drivers/infiniband/hw/hns/hns_roce_srq.c | 5 +++-- drivers/infiniband/hw/i40iw/i40iw_verbs.c | 2 +- drivers/infiniband/hw/mlx4/cq.c | 2 +- drivers/infiniband/hw/mlx4/doorbell.c | 3 ++- drivers/infiniband/hw/mlx4/mr.c | 8 ++++---- drivers/infiniband/hw/mlx4/qp.c | 5 +++-- drivers/infiniband/hw/mlx4/srq.c | 3 ++- drivers/infiniband/hw/mlx5/cq.c | 6 +++--- drivers/infiniband/hw/mlx5/devx.c | 2 +- drivers/infiniband/hw/mlx5/doorbell.c | 3 ++- drivers/infiniband/hw/mlx5/mr.c | 18 ++++++++--------- drivers/infiniband/hw/mlx5/odp.c | 2 +- drivers/infiniband/hw/mlx5/qp.c | 4 ++-- drivers/infiniband/hw/mlx5/srq.c | 2 +- drivers/infiniband/hw/mthca/mthca_provider.c | 2 +- drivers/infiniband/hw/ocrdma/ocrdma_verbs.c | 2 +- drivers/infiniband/hw/qedr/verbs.c | 9 ++++----- drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c | 2 +- drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c | 2 +- drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c | 7 ++++--- drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c | 2 +- drivers/infiniband/sw/rdmavt/mr.c | 2 +- drivers/infiniband/sw/rxe/rxe_mr.c | 2 +- include/rdma/ib_umem.h | 4 ++-- include/rdma/ib_umem_odp.h | 6 +++--- 34 files changed, 85 insertions(+), 103 deletions(-) (limited to 'include') diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c index 7a3b99597ead..146f98fbf22b 100644 --- a/drivers/infiniband/core/umem.c +++ b/drivers/infiniband/core/umem.c @@ -181,15 +181,14 @@ EXPORT_SYMBOL(ib_umem_find_best_pgsz); /** * ib_umem_get - Pin and DMA map userspace memory. * - * @udata: userspace context to pin memory for + * @device: IB device to connect UMEM * @addr: userspace virtual address to start at * @size: length of region to pin * @access: IB_ACCESS_xxx flags for memory being pinned */ -struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr, +struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr, size_t size, int access) { - struct ib_ucontext *context; struct ib_umem *umem; struct page **page_list; unsigned long lock_limit; @@ -201,14 +200,6 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr, struct scatterlist *sg; unsigned int gup_flags = FOLL_WRITE; - if (!udata) - return ERR_PTR(-EIO); - - context = container_of(udata, struct uverbs_attr_bundle, driver_udata) - ->context; - if (!context) - return ERR_PTR(-EIO); - /* * If the combination of the addr and size requested for this memory * region causes an integer overflow, return error. 
@@ -226,7 +217,7 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr, umem = kzalloc(sizeof(*umem), GFP_KERNEL); if (!umem) return ERR_PTR(-ENOMEM); - umem->ibdev = context->device; + umem->ibdev = device; umem->length = size; umem->address = addr; umem->writable = ib_access_writable(access); @@ -281,7 +272,7 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr, npages -= ret; sg = ib_umem_add_sg_table(sg, page_list, ret, - dma_get_max_seg_size(context->device->dma_device), + dma_get_max_seg_size(device->dma_device), &umem->sg_nents); up_read(&mm->mmap_sem); @@ -289,10 +280,10 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr, sg_mark_end(sg); - umem->nmap = ib_dma_map_sg(context->device, - umem->sg_head.sgl, - umem->sg_nents, - DMA_BIDIRECTIONAL); + umem->nmap = ib_dma_map_sg(device, + umem->sg_head.sgl, + umem->sg_nents, + DMA_BIDIRECTIONAL); if (!umem->nmap) { ret = -ENOMEM; @@ -303,7 +294,7 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr, goto out; umem_release: - __ib_umem_release(context->device, umem, 0); + __ib_umem_release(device, umem, 0); vma: atomic64_sub(ib_umem_num_pages(umem), &mm->pinned_vm); out: diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c index e42d44e501fd..dac3fd2ebc26 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -110,15 +110,12 @@ out_page_list: * They exist only to hold the per_mm reference to help the driver create * children umems. * - * @udata: udata from the syscall being used to create the umem + * @device: IB device to create UMEM * @access: ib_reg_mr access flags */ -struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct ib_udata *udata, +struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct ib_device *device, int access) { - struct ib_ucontext *context = - container_of(udata, struct uverbs_attr_bundle, driver_udata) - ->context; struct ib_umem *umem; struct ib_umem_odp *umem_odp; int ret; @@ -126,14 +123,11 @@ struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct ib_udata *udata, if (access & IB_ACCESS_HUGETLB) return ERR_PTR(-EINVAL); - if (!context) - return ERR_PTR(-EIO); - umem_odp = kzalloc(sizeof(*umem_odp), GFP_KERNEL); if (!umem_odp) return ERR_PTR(-ENOMEM); umem = &umem_odp->umem; - umem->ibdev = context->device; + umem->ibdev = device; umem->writable = ib_access_writable(access); umem->owning_mm = current->mm; umem_odp->is_implicit_odp = 1; @@ -201,7 +195,7 @@ EXPORT_SYMBOL(ib_umem_odp_alloc_child); /** * ib_umem_odp_get - Create a umem_odp for a userspace va * - * @udata: userspace context to pin memory for + * @device: IB device struct to get UMEM * @addr: userspace virtual address to start at * @size: length of region to pin * @access: IB_ACCESS_xxx flags for memory being pinned @@ -210,23 +204,14 @@ EXPORT_SYMBOL(ib_umem_odp_alloc_child); * pinning, instead, stores the mm for future page fault handling in * conjunction with MMU notifiers. 
*/ -struct ib_umem_odp *ib_umem_odp_get(struct ib_udata *udata, unsigned long addr, - size_t size, int access, +struct ib_umem_odp *ib_umem_odp_get(struct ib_device *device, + unsigned long addr, size_t size, int access, const struct mmu_interval_notifier_ops *ops) { struct ib_umem_odp *umem_odp; - struct ib_ucontext *context; struct mm_struct *mm; int ret; - if (!udata) - return ERR_PTR(-EIO); - - context = container_of(udata, struct uverbs_attr_bundle, driver_udata) - ->context; - if (!context) - return ERR_PTR(-EIO); - if (WARN_ON_ONCE(!(access & IB_ACCESS_ON_DEMAND))) return ERR_PTR(-EINVAL); @@ -234,7 +219,7 @@ struct ib_umem_odp *ib_umem_odp_get(struct ib_udata *udata, unsigned long addr, if (!umem_odp) return ERR_PTR(-ENOMEM); - umem_odp->umem.ibdev = context->device; + umem_odp->umem.ibdev = device; umem_odp->umem.length = size; umem_odp->umem.address = addr; umem_odp->umem.writable = ib_access_writable(access); diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c index ad5112a2325f..52b6a4d85460 100644 --- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c +++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c @@ -837,7 +837,8 @@ static int bnxt_re_init_user_qp(struct bnxt_re_dev *rdev, struct bnxt_re_pd *pd, bytes += (qplib_qp->sq.max_wqe * psn_sz); } bytes = PAGE_ALIGN(bytes); - umem = ib_umem_get(udata, ureq.qpsva, bytes, IB_ACCESS_LOCAL_WRITE); + umem = ib_umem_get(&rdev->ibdev, ureq.qpsva, bytes, + IB_ACCESS_LOCAL_WRITE); if (IS_ERR(umem)) return PTR_ERR(umem); @@ -850,7 +851,7 @@ static int bnxt_re_init_user_qp(struct bnxt_re_dev *rdev, struct bnxt_re_pd *pd, if (!qp->qplib_qp.srq) { bytes = (qplib_qp->rq.max_wqe * BNXT_QPLIB_MAX_RQE_ENTRY_SIZE); bytes = PAGE_ALIGN(bytes); - umem = ib_umem_get(udata, ureq.qprva, bytes, + umem = ib_umem_get(&rdev->ibdev, ureq.qprva, bytes, IB_ACCESS_LOCAL_WRITE); if (IS_ERR(umem)) goto rqfail; @@ -1304,7 +1305,8 @@ static int bnxt_re_init_user_srq(struct bnxt_re_dev *rdev, bytes = (qplib_srq->max_wqe * BNXT_QPLIB_MAX_RQE_ENTRY_SIZE); bytes = PAGE_ALIGN(bytes); - umem = ib_umem_get(udata, ureq.srqva, bytes, IB_ACCESS_LOCAL_WRITE); + umem = ib_umem_get(&rdev->ibdev, ureq.srqva, bytes, + IB_ACCESS_LOCAL_WRITE); if (IS_ERR(umem)) return PTR_ERR(umem); @@ -2545,7 +2547,7 @@ int bnxt_re_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr, goto fail; } - cq->umem = ib_umem_get(udata, req.cq_va, + cq->umem = ib_umem_get(&rdev->ibdev, req.cq_va, entries * sizeof(struct cq_base), IB_ACCESS_LOCAL_WRITE); if (IS_ERR(cq->umem)) { @@ -3514,7 +3516,7 @@ struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length, /* The fixed portion of the rkey is the same as the lkey */ mr->ib_mr.rkey = mr->qplib_mr.rkey; - umem = ib_umem_get(udata, start, length, mr_access_flags); + umem = ib_umem_get(&rdev->ibdev, start, length, mr_access_flags); if (IS_ERR(umem)) { dev_err(rdev_to_dev(rdev), "Failed to get umem"); rc = -EFAULT; diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c index fe3a7e8561df..962dc97a8ff2 100644 --- a/drivers/infiniband/hw/cxgb4/mem.c +++ b/drivers/infiniband/hw/cxgb4/mem.c @@ -543,7 +543,7 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, mhp->rhp = rhp; - mhp->umem = ib_umem_get(udata, start, length, acc); + mhp->umem = ib_umem_get(pd->device, start, length, acc); if (IS_ERR(mhp->umem)) goto err_free_skb; diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c index 50c22575aed6..5c35aa72f515 
100644 --- a/drivers/infiniband/hw/efa/efa_verbs.c +++ b/drivers/infiniband/hw/efa/efa_verbs.c @@ -1384,7 +1384,7 @@ struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length, goto err_out; } - mr->umem = ib_umem_get(udata, start, length, access_flags); + mr->umem = ib_umem_get(ibpd->device, start, length, access_flags); if (IS_ERR(mr->umem)) { err = PTR_ERR(mr->umem); ibdev_dbg(&dev->ibdev, diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c index af1d8823b3f0..a2d1e5331bf1 100644 --- a/drivers/infiniband/hw/hns/hns_roce_cq.c +++ b/drivers/infiniband/hw/hns/hns_roce_cq.c @@ -163,7 +163,7 @@ static int get_cq_umem(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq, u32 npages; int ret; - *umem = ib_umem_get(udata, ucmd.buf_addr, buf->size, + *umem = ib_umem_get(&hr_dev->ib_dev, ucmd.buf_addr, buf->size, IB_ACCESS_LOCAL_WRITE); if (IS_ERR(*umem)) return PTR_ERR(*umem); diff --git a/drivers/infiniband/hw/hns/hns_roce_db.c b/drivers/infiniband/hw/hns/hns_roce_db.c index 10af6958ab69..bff6abdccfb0 100644 --- a/drivers/infiniband/hw/hns/hns_roce_db.c +++ b/drivers/infiniband/hw/hns/hns_roce_db.c @@ -31,7 +31,8 @@ int hns_roce_db_map_user(struct hns_roce_ucontext *context, refcount_set(&page->refcount, 1); page->user_virt = page_addr; - page->umem = ib_umem_get(udata, page_addr, PAGE_SIZE, 0); + page->umem = ib_umem_get(context->ibucontext.device, page_addr, + PAGE_SIZE, 0); if (IS_ERR(page->umem)) { ret = PTR_ERR(page->umem); kfree(page); diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c index 9ad19170c3f9..3ff610549c74 100644 --- a/drivers/infiniband/hw/hns/hns_roce_mr.c +++ b/drivers/infiniband/hw/hns/hns_roce_mr.c @@ -1145,7 +1145,7 @@ struct ib_mr *hns_roce_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, if (!mr) return ERR_PTR(-ENOMEM); - mr->umem = ib_umem_get(udata, start, length, access_flags); + mr->umem = ib_umem_get(pd->device, start, length, access_flags); if (IS_ERR(mr->umem)) { ret = PTR_ERR(mr->umem); goto err_free; @@ -1230,7 +1230,7 @@ static int rereg_mr_trans(struct ib_mr *ibmr, int flags, } ib_umem_release(mr->umem); - mr->umem = ib_umem_get(udata, start, length, mr_access_flags); + mr->umem = ib_umem_get(ibmr->device, start, length, mr_access_flags); if (IS_ERR(mr->umem)) { ret = PTR_ERR(mr->umem); mr->umem = NULL; diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c index a6565b674801..eb2ee6a581aa 100644 --- a/drivers/infiniband/hw/hns/hns_roce_qp.c +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c @@ -744,7 +744,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev, goto err_alloc_rq_inline_buf; } - hr_qp->umem = ib_umem_get(udata, ucmd.buf_addr, + hr_qp->umem = ib_umem_get(ib_pd->device, ucmd.buf_addr, hr_qp->buff_size, 0); if (IS_ERR(hr_qp->umem)) { dev_err(dev, "ib_umem_get error for create qp\n"); diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c index 7113ebfdb4f0..c6d5f06f9cde 100644 --- a/drivers/infiniband/hw/hns/hns_roce_srq.c +++ b/drivers/infiniband/hw/hns/hns_roce_srq.c @@ -186,7 +186,8 @@ static int create_user_srq(struct hns_roce_srq *srq, struct ib_udata *udata, if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd))) return -EFAULT; - srq->umem = ib_umem_get(udata, ucmd.buf_addr, srq_buf_size, 0); + srq->umem = + ib_umem_get(srq->ibsrq.device, ucmd.buf_addr, srq_buf_size, 0); if (IS_ERR(srq->umem)) return PTR_ERR(srq->umem); @@ -205,7 +206,7 @@ static int 
create_user_srq(struct hns_roce_srq *srq, struct ib_udata *udata, goto err_user_srq_mtt; /* config index queue BA */ - srq->idx_que.umem = ib_umem_get(udata, ucmd.que_addr, + srq->idx_que.umem = ib_umem_get(srq->ibsrq.device, ucmd.que_addr, srq->idx_que.buf_size, 0); if (IS_ERR(srq->idx_que.umem)) { dev_err(hr_dev->dev, "ib_umem_get error for index queue\n"); diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c index dbd96d029d8b..96488fb443eb 100644 --- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c +++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c @@ -1761,7 +1761,7 @@ static struct ib_mr *i40iw_reg_user_mr(struct ib_pd *pd, if (length > I40IW_MAX_MR_SIZE) return ERR_PTR(-EINVAL); - region = ib_umem_get(udata, start, length, acc); + region = ib_umem_get(pd->device, start, length, acc); if (IS_ERR(region)) return (struct ib_mr *)region; diff --git a/drivers/infiniband/hw/mlx4/cq.c b/drivers/infiniband/hw/mlx4/cq.c index 306b21281fa2..a57033d4b0e5 100644 --- a/drivers/infiniband/hw/mlx4/cq.c +++ b/drivers/infiniband/hw/mlx4/cq.c @@ -144,7 +144,7 @@ static int mlx4_ib_get_cq_umem(struct mlx4_ib_dev *dev, struct ib_udata *udata, int shift; int n; - *umem = ib_umem_get(udata, buf_addr, cqe * cqe_size, + *umem = ib_umem_get(&dev->ib_dev, buf_addr, cqe * cqe_size, IB_ACCESS_LOCAL_WRITE); if (IS_ERR(*umem)) return PTR_ERR(*umem); diff --git a/drivers/infiniband/hw/mlx4/doorbell.c b/drivers/infiniband/hw/mlx4/doorbell.c index 714f9df5bf39..d41f03ccb0e1 100644 --- a/drivers/infiniband/hw/mlx4/doorbell.c +++ b/drivers/infiniband/hw/mlx4/doorbell.c @@ -64,7 +64,8 @@ int mlx4_ib_db_map_user(struct ib_udata *udata, unsigned long virt, page->user_virt = (virt & PAGE_MASK); page->refcnt = 0; - page->umem = ib_umem_get(udata, virt & PAGE_MASK, PAGE_SIZE, 0); + page->umem = ib_umem_get(context->ibucontext.device, virt & PAGE_MASK, + PAGE_SIZE, 0); if (IS_ERR(page->umem)) { err = PTR_ERR(page->umem); kfree(page); diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c index dfa17bcdcdbc..b0121c90c561 100644 --- a/drivers/infiniband/hw/mlx4/mr.c +++ b/drivers/infiniband/hw/mlx4/mr.c @@ -367,7 +367,7 @@ end: return block_shift; } -static struct ib_umem *mlx4_get_umem_mr(struct ib_udata *udata, u64 start, +static struct ib_umem *mlx4_get_umem_mr(struct ib_device *device, u64 start, u64 length, int access_flags) { /* @@ -398,7 +398,7 @@ static struct ib_umem *mlx4_get_umem_mr(struct ib_udata *udata, u64 start, up_read(¤t->mm->mmap_sem); } - return ib_umem_get(udata, start, length, access_flags); + return ib_umem_get(device, start, length, access_flags); } struct ib_mr *mlx4_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, @@ -415,7 +415,7 @@ struct ib_mr *mlx4_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, if (!mr) return ERR_PTR(-ENOMEM); - mr->umem = mlx4_get_umem_mr(udata, start, length, access_flags); + mr->umem = mlx4_get_umem_mr(pd->device, start, length, access_flags); if (IS_ERR(mr->umem)) { err = PTR_ERR(mr->umem); goto err_free; @@ -504,7 +504,7 @@ int mlx4_ib_rereg_user_mr(struct ib_mr *mr, int flags, mlx4_mr_rereg_mem_cleanup(dev->dev, &mmr->mmr); ib_umem_release(mmr->umem); - mmr->umem = mlx4_get_umem_mr(udata, start, length, + mmr->umem = mlx4_get_umem_mr(mr->device, start, length, mr_access_flags); if (IS_ERR(mmr->umem)) { err = PTR_ERR(mmr->umem); diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c index 85f57b76e446..cb23cdb9389a 100644 --- a/drivers/infiniband/hw/mlx4/qp.c +++ 
b/drivers/infiniband/hw/mlx4/qp.c @@ -916,7 +916,7 @@ static int create_rq(struct ib_pd *pd, struct ib_qp_init_attr *init_attr, qp->buf_size = (qp->rq.wqe_cnt << qp->rq.wqe_shift) + (qp->sq.wqe_cnt << qp->sq.wqe_shift); - qp->umem = ib_umem_get(udata, wq.buf_addr, qp->buf_size, 0); + qp->umem = ib_umem_get(pd->device, wq.buf_addr, qp->buf_size, 0); if (IS_ERR(qp->umem)) { err = PTR_ERR(qp->umem); goto err; @@ -1110,7 +1110,8 @@ static int create_qp_common(struct ib_pd *pd, struct ib_qp_init_attr *init_attr, if (err) goto err; - qp->umem = ib_umem_get(udata, ucmd.buf_addr, qp->buf_size, 0); + qp->umem = + ib_umem_get(pd->device, ucmd.buf_addr, qp->buf_size, 0); if (IS_ERR(qp->umem)) { err = PTR_ERR(qp->umem); goto err; diff --git a/drivers/infiniband/hw/mlx4/srq.c b/drivers/infiniband/hw/mlx4/srq.c index 8dcf6e3d9ae2..8f9d5035142d 100644 --- a/drivers/infiniband/hw/mlx4/srq.c +++ b/drivers/infiniband/hw/mlx4/srq.c @@ -110,7 +110,8 @@ int mlx4_ib_create_srq(struct ib_srq *ib_srq, if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd))) return -EFAULT; - srq->umem = ib_umem_get(udata, ucmd.buf_addr, buf_size, 0); + srq->umem = + ib_umem_get(ib_srq->device, ucmd.buf_addr, buf_size, 0); if (IS_ERR(srq->umem)) return PTR_ERR(srq->umem); diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c index dd8d24ee8e1d..367a71bc5f4b 100644 --- a/drivers/infiniband/hw/mlx5/cq.c +++ b/drivers/infiniband/hw/mlx5/cq.c @@ -708,8 +708,8 @@ static int create_cq_user(struct mlx5_ib_dev *dev, struct ib_udata *udata, *cqe_size = ucmd.cqe_size; cq->buf.umem = - ib_umem_get(udata, ucmd.buf_addr, entries * ucmd.cqe_size, - IB_ACCESS_LOCAL_WRITE); + ib_umem_get(&dev->ib_dev, ucmd.buf_addr, + entries * ucmd.cqe_size, IB_ACCESS_LOCAL_WRITE); if (IS_ERR(cq->buf.umem)) { err = PTR_ERR(cq->buf.umem); return err; @@ -1108,7 +1108,7 @@ static int resize_user(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *cq, if (ucmd.cqe_size && SIZE_MAX / ucmd.cqe_size <= entries - 1) return -EINVAL; - umem = ib_umem_get(udata, ucmd.buf_addr, + umem = ib_umem_get(&dev->ib_dev, ucmd.buf_addr, (size_t)ucmd.cqe_size * entries, IB_ACCESS_LOCAL_WRITE); if (IS_ERR(umem)) { diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c index 9d0a18cf9e5e..685b8ed96b4e 100644 --- a/drivers/infiniband/hw/mlx5/devx.c +++ b/drivers/infiniband/hw/mlx5/devx.c @@ -2134,7 +2134,7 @@ static int devx_umem_get(struct mlx5_ib_dev *dev, struct ib_ucontext *ucontext, if (err) return err; - obj->umem = ib_umem_get(&attrs->driver_udata, addr, size, access); + obj->umem = ib_umem_get(&dev->ib_dev, addr, size, access); if (IS_ERR(obj->umem)) return PTR_ERR(obj->umem); diff --git a/drivers/infiniband/hw/mlx5/doorbell.c b/drivers/infiniband/hw/mlx5/doorbell.c index 12737c509aa2..61475b571531 100644 --- a/drivers/infiniband/hw/mlx5/doorbell.c +++ b/drivers/infiniband/hw/mlx5/doorbell.c @@ -64,7 +64,8 @@ int mlx5_ib_db_map_user(struct mlx5_ib_ucontext *context, page->user_virt = (virt & PAGE_MASK); page->refcnt = 0; - page->umem = ib_umem_get(udata, virt & PAGE_MASK, PAGE_SIZE, 0); + page->umem = ib_umem_get(context->ibucontext.device, virt & PAGE_MASK, + PAGE_SIZE, 0); if (IS_ERR(page->umem)) { err = PTR_ERR(page->umem); kfree(page); diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index ea8bfc3e2d8d..f79bb44b94fe 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -737,10 +737,9 @@ static int mr_cache_max_order(struct mlx5_ib_dev *dev) return MLX5_MAX_UMR_SHIFT; } 
-static int mr_umem_get(struct mlx5_ib_dev *dev, struct ib_udata *udata, - u64 start, u64 length, int access_flags, - struct ib_umem **umem, int *npages, int *page_shift, - int *ncont, int *order) +static int mr_umem_get(struct mlx5_ib_dev *dev, u64 start, u64 length, + int access_flags, struct ib_umem **umem, int *npages, + int *page_shift, int *ncont, int *order) { struct ib_umem *u; @@ -749,7 +748,7 @@ static int mr_umem_get(struct mlx5_ib_dev *dev, struct ib_udata *udata, if (access_flags & IB_ACCESS_ON_DEMAND) { struct ib_umem_odp *odp; - odp = ib_umem_odp_get(udata, start, length, access_flags, + odp = ib_umem_odp_get(&dev->ib_dev, start, length, access_flags, &mlx5_mn_ops); if (IS_ERR(odp)) { mlx5_ib_dbg(dev, "umem get failed (%ld)\n", @@ -765,7 +764,7 @@ static int mr_umem_get(struct mlx5_ib_dev *dev, struct ib_udata *udata, if (order) *order = ilog2(roundup_pow_of_two(*ncont)); } else { - u = ib_umem_get(udata, start, length, access_flags); + u = ib_umem_get(&dev->ib_dev, start, length, access_flags); if (IS_ERR(u)) { mlx5_ib_dbg(dev, "umem get failed (%ld)\n", PTR_ERR(u)); return PTR_ERR(u); @@ -1257,7 +1256,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, return &mr->ibmr; } - err = mr_umem_get(dev, udata, start, length, access_flags, &umem, + err = mr_umem_get(dev, start, length, access_flags, &umem, &npages, &page_shift, &ncont, &order); if (err < 0) @@ -1424,9 +1423,8 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start, flags |= IB_MR_REREG_TRANS; ib_umem_release(mr->umem); mr->umem = NULL; - err = mr_umem_get(dev, udata, addr, len, access_flags, - &mr->umem, &npages, &page_shift, &ncont, - &order); + err = mr_umem_get(dev, addr, len, access_flags, &mr->umem, + &npages, &page_shift, &ncont, &order); if (err) goto err; } diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index f924250f80c2..3b3ceb5acdd3 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -497,7 +497,7 @@ struct mlx5_ib_mr *mlx5_ib_alloc_implicit_mr(struct mlx5_ib_pd *pd, struct mlx5_ib_mr *imr; int err; - umem_odp = ib_umem_odp_alloc_implicit(udata, access_flags); + umem_odp = ib_umem_odp_alloc_implicit(&dev->ib_dev, access_flags); if (IS_ERR(umem_odp)) return ERR_CAST(umem_odp); diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c index 7e51870e9e01..7f0bde313560 100644 --- a/drivers/infiniband/hw/mlx5/qp.c +++ b/drivers/infiniband/hw/mlx5/qp.c @@ -749,7 +749,7 @@ static int mlx5_ib_umem_get(struct mlx5_ib_dev *dev, struct ib_udata *udata, { int err; - *umem = ib_umem_get(udata, addr, size, 0); + *umem = ib_umem_get(&dev->ib_dev, addr, size, 0); if (IS_ERR(*umem)) { mlx5_ib_dbg(dev, "umem_get failed\n"); return PTR_ERR(*umem); @@ -806,7 +806,7 @@ static int create_user_rq(struct mlx5_ib_dev *dev, struct ib_pd *pd, if (!ucmd->buf_addr) return -EINVAL; - rwq->umem = ib_umem_get(udata, ucmd->buf_addr, rwq->buf_size, 0); + rwq->umem = ib_umem_get(&dev->ib_dev, ucmd->buf_addr, rwq->buf_size, 0); if (IS_ERR(rwq->umem)) { mlx5_ib_dbg(dev, "umem_get failed\n"); err = PTR_ERR(rwq->umem); diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c index 62939df3c692..b1a8a9175040 100644 --- a/drivers/infiniband/hw/mlx5/srq.c +++ b/drivers/infiniband/hw/mlx5/srq.c @@ -80,7 +80,7 @@ static int create_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq, srq->wq_sig = !!(ucmd.flags & MLX5_SRQ_FLAG_SIGNATURE); - srq->umem = ib_umem_get(udata, ucmd.buf_addr, 
buf_size, 0); + srq->umem = ib_umem_get(pd->device, ucmd.buf_addr, buf_size, 0); if (IS_ERR(srq->umem)) { mlx5_ib_dbg(dev, "failed umem get, size %d\n", buf_size); err = PTR_ERR(srq->umem); diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c index 33002530fee7..ac19d57803b5 100644 --- a/drivers/infiniband/hw/mthca/mthca_provider.c +++ b/drivers/infiniband/hw/mthca/mthca_provider.c @@ -880,7 +880,7 @@ static struct ib_mr *mthca_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, if (!mr) return ERR_PTR(-ENOMEM); - mr->umem = ib_umem_get(udata, start, length, acc); + mr->umem = ib_umem_get(pd->device, start, length, acc); if (IS_ERR(mr->umem)) { err = PTR_ERR(mr->umem); goto err; diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c index 9bc1ca6f6f9e..d47ea675734b 100644 --- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c +++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c @@ -869,7 +869,7 @@ struct ib_mr *ocrdma_reg_user_mr(struct ib_pd *ibpd, u64 start, u64 len, mr = kzalloc(sizeof(*mr), GFP_KERNEL); if (!mr) return ERR_PTR(status); - mr->umem = ib_umem_get(udata, start, len, acc); + mr->umem = ib_umem_get(ibpd->device, start, len, acc); if (IS_ERR(mr->umem)) { status = -EFAULT; goto umem_err; diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c index 4cd292966aa9..920f35e28cfc 100644 --- a/drivers/infiniband/hw/qedr/verbs.c +++ b/drivers/infiniband/hw/qedr/verbs.c @@ -772,7 +772,7 @@ static inline int qedr_init_user_queue(struct ib_udata *udata, q->buf_addr = buf_addr; q->buf_len = buf_len; - q->umem = ib_umem_get(udata, q->buf_addr, q->buf_len, access); + q->umem = ib_umem_get(&dev->ibdev, q->buf_addr, q->buf_len, access); if (IS_ERR(q->umem)) { DP_ERR(dev, "create user queue: failed ib_umem_get, got %ld\n", PTR_ERR(q->umem)); @@ -1415,9 +1415,8 @@ static int qedr_init_srq_user_params(struct ib_udata *udata, if (rc) return rc; - srq->prod_umem = - ib_umem_get(udata, ureq->prod_pair_addr, - sizeof(struct rdma_srq_producers), access); + srq->prod_umem = ib_umem_get(srq->ibsrq.device, ureq->prod_pair_addr, + sizeof(struct rdma_srq_producers), access); if (IS_ERR(srq->prod_umem)) { qedr_free_pbl(srq->dev, &srq->usrq.pbl_info, srq->usrq.pbl_tbl); ib_umem_release(srq->usrq.umem); @@ -2839,7 +2838,7 @@ struct ib_mr *qedr_reg_user_mr(struct ib_pd *ibpd, u64 start, u64 len, mr->type = QEDR_MR_USER; - mr->umem = ib_umem_get(udata, start, len, acc); + mr->umem = ib_umem_get(ibpd->device, start, len, acc); if (IS_ERR(mr->umem)) { rc = -EFAULT; goto err0; diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c index a26a4fd86bf4..4f6cc0de7ef9 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c @@ -135,7 +135,7 @@ int pvrdma_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr, goto err_cq; } - cq->umem = ib_umem_get(udata, ucmd.buf_addr, ucmd.buf_size, + cq->umem = ib_umem_get(ibdev, ucmd.buf_addr, ucmd.buf_size, IB_ACCESS_LOCAL_WRITE); if (IS_ERR(cq->umem)) { ret = PTR_ERR(cq->umem); diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c index c61e665ff261..b039f1f00e05 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c @@ -126,7 +126,7 @@ struct ib_mr *pvrdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, return ERR_PTR(-EINVAL); } - umem = 
ib_umem_get(udata, start, length, access_flags); + umem = ib_umem_get(pd->device, start, length, access_flags); if (IS_ERR(umem)) { dev_warn(&dev->pdev->dev, "could not get umem for mem region\n"); diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c index f15809c28f67..9de1281f9a3b 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c @@ -276,8 +276,9 @@ struct ib_qp *pvrdma_create_qp(struct ib_pd *pd, if (!is_srq) { /* set qp->sq.wqe_cnt, shift, buf_size.. */ - qp->rumem = ib_umem_get(udata, ucmd.rbuf_addr, - ucmd.rbuf_size, 0); + qp->rumem = + ib_umem_get(pd->device, ucmd.rbuf_addr, + ucmd.rbuf_size, 0); if (IS_ERR(qp->rumem)) { ret = PTR_ERR(qp->rumem); goto err_qp; @@ -288,7 +289,7 @@ struct ib_qp *pvrdma_create_qp(struct ib_pd *pd, qp->srq = to_vsrq(init_attr->srq); } - qp->sumem = ib_umem_get(udata, ucmd.sbuf_addr, + qp->sumem = ib_umem_get(pd->device, ucmd.sbuf_addr, ucmd.sbuf_size, 0); if (IS_ERR(qp->sumem)) { if (!is_srq) diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c index 98c8be71d91d..d330decfb80a 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c @@ -146,7 +146,7 @@ int pvrdma_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init_attr, goto err_srq; } - srq->umem = ib_umem_get(udata, ucmd.buf_addr, ucmd.buf_size, 0); + srq->umem = ib_umem_get(ibsrq->device, ucmd.buf_addr, ucmd.buf_size, 0); if (IS_ERR(srq->umem)) { ret = PTR_ERR(srq->umem); goto err_srq; diff --git a/drivers/infiniband/sw/rdmavt/mr.c b/drivers/infiniband/sw/rdmavt/mr.c index b9a76bf74857..72f6534fbb52 100644 --- a/drivers/infiniband/sw/rdmavt/mr.c +++ b/drivers/infiniband/sw/rdmavt/mr.c @@ -390,7 +390,7 @@ struct ib_mr *rvt_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, if (length == 0) return ERR_PTR(-EINVAL); - umem = ib_umem_get(udata, start, length, mr_access_flags); + umem = ib_umem_get(pd->device, start, length, mr_access_flags); if (IS_ERR(umem)) return (void *)umem; diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 35a2baf2f364..e83c7b518bfa 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -169,7 +169,7 @@ int rxe_mem_init_user(struct rxe_pd *pd, u64 start, void *vaddr; int err; - umem = ib_umem_get(udata, start, length, access); + umem = ib_umem_get(pd->ibpd.device, start, length, access); if (IS_ERR(umem)) { pr_warn("err %d from rxe_umem_get\n", (int)PTR_ERR(umem)); diff --git a/include/rdma/ib_umem.h b/include/rdma/ib_umem.h index 753f54e17e0a..e3518fd6b95b 100644 --- a/include/rdma/ib_umem.h +++ b/include/rdma/ib_umem.h @@ -69,7 +69,7 @@ static inline size_t ib_umem_num_pages(struct ib_umem *umem) #ifdef CONFIG_INFINIBAND_USER_MEM -struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr, +struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr, size_t size, int access); void ib_umem_release(struct ib_umem *umem); int ib_umem_page_count(struct ib_umem *umem); @@ -83,7 +83,7 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem, #include -static inline struct ib_umem *ib_umem_get(struct ib_udata *udata, +static inline struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr, size_t size, int access) { diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h index 81429acc8257..64314ff76612 100644 --- 
a/include/rdma/ib_umem_odp.h +++ b/include/rdma/ib_umem_odp.h @@ -114,9 +114,9 @@ static inline size_t ib_umem_odp_num_pages(struct ib_umem_odp *umem_odp) #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING struct ib_umem_odp * -ib_umem_odp_get(struct ib_udata *udata, unsigned long addr, size_t size, +ib_umem_odp_get(struct ib_device *device, unsigned long addr, size_t size, int access, const struct mmu_interval_notifier_ops *ops); -struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct ib_udata *udata, +struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct ib_device *device, int access); struct ib_umem_odp * ib_umem_odp_alloc_child(struct ib_umem_odp *root_umem, unsigned long addr, @@ -134,7 +134,7 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 start_offset, #else /* CONFIG_INFINIBAND_ON_DEMAND_PAGING */ static inline struct ib_umem_odp * -ib_umem_odp_get(struct ib_udata *udata, unsigned long addr, size_t size, +ib_umem_odp_get(struct ib_device *device, unsigned long addr, size_t size, int access, const struct mmu_interval_notifier_ops *ops) { return ERR_PTR(-EINVAL); -- cgit v1.2.3-71-gd317 From 33006bd4f37f7d2c3d1cf0268b4f327b5fdc2558 Mon Sep 17 00:00:00 2001 From: Moni Shoua Date: Wed, 15 Jan 2020 14:43:32 +0200 Subject: IB/core: Introduce ib_reg_user_mr Add ib_reg_user_mr() for kernel ULPs to register user MRs. The common use case is a userspace application that allocates memory for HCA access, while the responsibility to register that memory at the HCA lies with a kernel ULP, which acts as an agent for the userspace application. This function is intended to be used without a user context, so vendor drivers need to be aware that the reg_user_mr() device operation may be called with udata equal to NULL. Among all drivers, i40iw is the only one that relies on the presence of udata, so add a udata check for that driver.
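A minimal usage sketch, assuming the ULP runs while 'current' is the process that owns the buffer (see the kernel-doc added below); the helper name and access flags are illustrative:

/* Hypothetical ULP path: register a userspace buffer on its behalf. */
static struct ib_mr *ulp_register_user_buf(struct ib_pd *pd, u64 addr,
					   u64 length)
{
	int access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ;

	/* Runs with udata == NULL inside the driver's reg_user_mr(). */
	return ib_reg_user_mr(pd, addr, length, addr, access);
}

The returned pointer follows the usual ERR_PTR() convention and is released with ib_dereg_mr().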
Signed-off-by: Moni Shoua Reviewed-by: Guy Levi Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/verbs.c | 30 ++++++++++++++++++++++++++++++ drivers/infiniband/hw/efa/efa_verbs.c | 2 +- drivers/infiniband/hw/i40iw/i40iw_verbs.c | 3 +++ include/rdma/ib_verbs.h | 6 ++++++ 4 files changed, 40 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index dd765e176cdd..7a69e4bbe877 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -1990,6 +1990,36 @@ EXPORT_SYMBOL(ib_resize_cq); /* Memory regions */ +struct ib_mr *ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, + u64 virt_addr, int access_flags) +{ + struct ib_mr *mr; + + if (access_flags & IB_ACCESS_ON_DEMAND) { + if (!(pd->device->attrs.device_cap_flags & + IB_DEVICE_ON_DEMAND_PAGING)) { + pr_debug("ODP support not available\n"); + return ERR_PTR(-EINVAL); + } + } + + mr = pd->device->ops.reg_user_mr(pd, start, length, virt_addr, + access_flags, NULL); + + if (IS_ERR(mr)) + return mr; + + mr->device = pd->device; + mr->pd = pd; + mr->dm = NULL; + atomic_inc(&pd->usecnt); + mr->res.type = RDMA_RESTRACK_MR; + rdma_restrack_kadd(&mr->res); + + return mr; +} +EXPORT_SYMBOL(ib_reg_user_mr); + int ib_dereg_mr_user(struct ib_mr *mr, struct ib_udata *udata) { struct ib_pd *pd = mr->pd; diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c index 5c35aa72f515..4822f5fa12be 100644 --- a/drivers/infiniband/hw/efa/efa_verbs.c +++ b/drivers/infiniband/hw/efa/efa_verbs.c @@ -1358,7 +1358,7 @@ struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length, int inline_size; int err; - if (udata->inlen && + if (udata && udata->inlen && !ib_is_udata_cleared(udata, 0, sizeof(udata->inlen))) { ibdev_dbg(&dev->ibdev, "Incompatible ABI params, udata not cleared\n"); diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c index 96488fb443eb..c335de91508f 100644 --- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c +++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c @@ -1756,6 +1756,9 @@ static struct ib_mr *i40iw_reg_user_mr(struct ib_pd *pd, int ret; int pg_shift; + if (!udata) + return ERR_PTR(-EOPNOTSUPP); + if (iwdev->closing) return ERR_PTR(-ENODEV); diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 5608e14e3aad..170d5ec95b79 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -4153,6 +4153,12 @@ static inline void ib_dma_free_coherent(struct ib_device *dev, dma_free_coherent(dev->dma_device, size, cpu_addr, dma_handle); } +/* ib_reg_user_mr - register a memory region for virtual addresses from kernel + * space. This function should be called when 'current' is the owning MM. + */ +struct ib_mr *ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, + u64 virt_addr, int mr_access_flags); + /** * ib_dereg_mr_user - Deregisters a memory region and removes it from the * HCA translation table. -- cgit v1.2.3-71-gd317 From 87d8069f6b028793254ddd0a66df1d7b6d79b450 Mon Sep 17 00:00:00 2001 From: Moni Shoua Date: Wed, 15 Jan 2020 14:43:33 +0200 Subject: IB/core: Add interface to advise_mr for kernel users Allow ULPs to call advise_mr, so they can control ODP regions in the same way as user space applications. 
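A typical use is prefetching an ODP-backed range so the first RDMA access does not page-fault. A sketch assuming the MR was registered with IB_ACCESS_ON_DEMAND; the advice and flag constants come from the uverbs uapi, while the helper itself is illustrative:

/* Hypothetical: warm up an ODP range before issuing RDMA to it. */
static int ulp_prefetch_odp_range(struct ib_pd *pd, struct ib_mr *mr,
				  u64 addr, u32 length)
{
	struct ib_sge sg = {
		.addr	= addr,
		.length	= length,
		.lkey	= mr->lkey,
	};

	return ib_advise_mr(pd, IB_UVERBS_ADVISE_MR_ADVICE_PREFETCH_WRITE,
			    IB_UVERBS_ADVISE_MR_FLAG_FLUSH, &sg, 1);
}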
Signed-off-by: Moni Shoua Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/verbs.c | 11 +++++++++++ include/rdma/ib_verbs.h | 3 +++ 2 files changed, 14 insertions(+) (limited to 'include') diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index 7a69e4bbe877..d33bdc9b01cd 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -2020,6 +2020,17 @@ struct ib_mr *ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, } EXPORT_SYMBOL(ib_reg_user_mr); +int ib_advise_mr(struct ib_pd *pd, enum ib_uverbs_advise_mr_advice advice, + u32 flags, struct ib_sge *sg_list, u32 num_sge) +{ + if (!pd->device->ops.advise_mr) + return -EOPNOTSUPP; + + return pd->device->ops.advise_mr(pd, advice, flags, sg_list, num_sge, + NULL); +} +EXPORT_SYMBOL(ib_advise_mr); + int ib_dereg_mr_user(struct ib_mr *mr, struct ib_udata *udata) { struct ib_pd *pd = mr->pd; diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 170d5ec95b79..e2cc62217cc2 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -4159,6 +4159,9 @@ static inline void ib_dma_free_coherent(struct ib_device *dev, struct ib_mr *ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, u64 virt_addr, int mr_access_flags); +/* ib_advise_mr - give an advice about an address range in a memory region */ +int ib_advise_mr(struct ib_pd *pd, enum ib_uverbs_advise_mr_advice advice, + u32 flags, struct ib_sge *sg_list, u32 num_sge); /** * ib_dereg_mr_user - Deregisters a memory region and removes it from the * HCA translation table. -- cgit v1.2.3-71-gd317 From 4a7faaf4add3108522512c16c9ee08fb2ad4525e Mon Sep 17 00:00:00 2001 From: Jeremy Sowden Date: Wed, 1 Jan 2020 13:41:32 +0000 Subject: netfilter: nft_bitwise: correct uapi header comment. The comment documenting how bitwise expressions work includes a table which summarizes the mask and xor arguments combined to express the supported boolean operations. However, the row for OR: mask xor 0 x is incorrect. dreg = (sreg & 0) ^ x is not equivalent to: dreg = sreg | x What the code actually does is: dreg = (sreg & ~x) ^ x Update the documentation to match. Signed-off-by: Jeremy Sowden Signed-off-by: Pablo Neira Ayuso --- include/uapi/linux/netfilter/nf_tables.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'include') diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h index e237ecbdcd8a..dd4611767933 100644 --- a/include/uapi/linux/netfilter/nf_tables.h +++ b/include/uapi/linux/netfilter/nf_tables.h @@ -501,7 +501,7 @@ enum nft_immediate_attributes { * * mask xor * NOT: 1 1 - * OR: 0 x + * OR: ~x x * XOR: 1 x * AND: x 0 */ -- cgit v1.2.3-71-gd317 From 445db8d09659eb27bcd5920cb91d91686f0197d0 Mon Sep 17 00:00:00 2001 From: Pablo Neira Ayuso Date: Sun, 5 Jan 2020 22:00:57 +0100 Subject: netfilter: flowtable: remove dying bit, use teardown bit instead The dying bit removes the conntrack entry if the netdev that owns this flow is going down. Instead, use the teardown mechanism to push back the flow to conntrack to let the classic software path decide what to do with it. 
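The observable difference is in device-down cleanup: the flow is handed back to conntrack alive instead of having its ct entry deleted. A condensed sketch of that path after this change (mirroring the nf_flow_table_do_cleanup() hunk below; the standalone helper is only for illustration):

/* Flows owned by a vanishing device are torn down, not killed:
 * flow_offload_teardown() fixes up the conntrack state so the classic
 * software path takes over, and the ct entry itself survives.
 */
static void example_cleanup_flow(struct flow_offload *flow,
				 const struct net_device *dev)
{
	if (flow->tuplehash[0].tuple.iifidx == dev->ifindex ||
	    flow->tuplehash[1].tuple.iifidx == dev->ifindex)
		flow_offload_teardown(flow);
}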
Signed-off-by: Pablo Neira Ayuso --- include/net/netfilter/nf_flow_table.h | 16 +++++++++------- net/netfilter/nf_flow_table_core.c | 20 ++++++++++---------- net/netfilter/nf_flow_table_ip.c | 8 ++++---- net/netfilter/nf_flow_table_offload.c | 20 ++++++++++---------- 4 files changed, 33 insertions(+), 31 deletions(-) (limited to 'include') diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h index 4ad924d5f983..5a10e28c3e40 100644 --- a/include/net/netfilter/nf_flow_table.h +++ b/include/net/netfilter/nf_flow_table.h @@ -83,12 +83,14 @@ struct flow_offload_tuple_rhash { struct flow_offload_tuple tuple; }; -#define FLOW_OFFLOAD_SNAT 0x1 -#define FLOW_OFFLOAD_DNAT 0x2 -#define FLOW_OFFLOAD_TEARDOWN 0x8 -#define FLOW_OFFLOAD_HW 0x10 -#define FLOW_OFFLOAD_HW_DYING 0x20 -#define FLOW_OFFLOAD_HW_DEAD 0x40 +enum nf_flow_flags { + NF_FLOW_SNAT, + NF_FLOW_DNAT, + NF_FLOW_TEARDOWN, + NF_FLOW_HW, + NF_FLOW_HW_DYING, + NF_FLOW_HW_DEAD, +}; enum flow_offload_type { NF_FLOW_OFFLOAD_UNSPEC = 0, @@ -98,7 +100,7 @@ enum flow_offload_type { struct flow_offload { struct flow_offload_tuple_rhash tuplehash[FLOW_OFFLOAD_DIR_MAX]; struct nf_conn *ct; - u16 flags; + unsigned long flags; u16 type; u32 timeout; struct rcu_head rcu_head; diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c index a9ed93a9e007..9f134f44d139 100644 --- a/net/netfilter/nf_flow_table_core.c +++ b/net/netfilter/nf_flow_table_core.c @@ -61,9 +61,9 @@ struct flow_offload *flow_offload_alloc(struct nf_conn *ct) flow_offload_fill_dir(flow, FLOW_OFFLOAD_DIR_REPLY); if (ct->status & IPS_SRC_NAT) - flow->flags |= FLOW_OFFLOAD_SNAT; + __set_bit(NF_FLOW_SNAT, &flow->flags); if (ct->status & IPS_DST_NAT) - flow->flags |= FLOW_OFFLOAD_DNAT; + __set_bit(NF_FLOW_DNAT, &flow->flags); return flow; @@ -269,7 +269,7 @@ static void flow_offload_del(struct nf_flowtable *flow_table, if (nf_flow_has_expired(flow)) flow_offload_fixup_ct(flow->ct); - else if (flow->flags & FLOW_OFFLOAD_TEARDOWN) + else if (test_bit(NF_FLOW_TEARDOWN, &flow->flags)) flow_offload_fixup_ct_timeout(flow->ct); flow_offload_free(flow); @@ -277,7 +277,7 @@ static void flow_offload_del(struct nf_flowtable *flow_table, void flow_offload_teardown(struct flow_offload *flow) { - flow->flags |= FLOW_OFFLOAD_TEARDOWN; + set_bit(NF_FLOW_TEARDOWN, &flow->flags); flow_offload_fixup_ct_state(flow->ct); } @@ -298,7 +298,7 @@ flow_offload_lookup(struct nf_flowtable *flow_table, dir = tuplehash->tuple.dir; flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]); - if (flow->flags & FLOW_OFFLOAD_TEARDOWN) + if (test_bit(NF_FLOW_TEARDOWN, &flow->flags)) return NULL; if (unlikely(nf_ct_is_dying(flow->ct))) @@ -347,16 +347,16 @@ static void nf_flow_offload_gc_step(struct flow_offload *flow, void *data) struct nf_flowtable *flow_table = data; if (nf_flow_has_expired(flow) || nf_ct_is_dying(flow->ct) || - (flow->flags & FLOW_OFFLOAD_TEARDOWN)) { - if (flow->flags & FLOW_OFFLOAD_HW) { - if (!(flow->flags & FLOW_OFFLOAD_HW_DYING)) + test_bit(NF_FLOW_TEARDOWN, &flow->flags)) { + if (test_bit(NF_FLOW_HW, &flow->flags)) { + if (!test_bit(NF_FLOW_HW_DYING, &flow->flags)) nf_flow_offload_del(flow_table, flow); - else if (flow->flags & FLOW_OFFLOAD_HW_DEAD) + else if (test_bit(NF_FLOW_HW_DEAD, &flow->flags)) flow_offload_del(flow_table, flow); } else { flow_offload_del(flow_table, flow); } - } else if (flow->flags & FLOW_OFFLOAD_HW) {
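The hazard being closed is a lost update: "flow->flags |= FLAG" is a non-atomic read-modify-write, so the packet path and the offload workqueue can overwrite each other's bits. With the flags word widened to unsigned long, each update becomes a single atomic bitop. A minimal sketch of the convention (the example functions are illustrative, not from the patch):

/* Safe against concurrent setters: one atomic RMW per flag update. */
static void example_mark_teardown(struct flow_offload *flow)
{
	set_bit(NF_FLOW_TEARDOWN, &flow->flags);
}

static bool example_flow_torn_down(const struct flow_offload *flow)
{
	return test_bit(NF_FLOW_TEARDOWN, &flow->flags);
}

The non-atomic __set_bit() variant remains acceptable only while the flow is still private to its creator, which is why flow_offload_alloc() keeps using it below.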
Fixes: 59c466dd68e7 ("netfilter: nf_flow_table: add a new flow state for tearing down offloading") Signed-off-by: Pablo Neira Ayuso --- include/net/netfilter/nf_flow_table.h | 16 +++++++++------- net/netfilter/nf_flow_table_core.c | 20 ++++++++++---------- net/netfilter/nf_flow_table_ip.c | 8 ++++---- net/netfilter/nf_flow_table_offload.c | 20 ++++++++++---------- 4 files changed, 33 insertions(+), 31 deletions(-) (limited to 'include') diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h index 4ad924d5f983..5a10e28c3e40 100644 --- a/include/net/netfilter/nf_flow_table.h +++ b/include/net/netfilter/nf_flow_table.h @@ -83,12 +83,14 @@ struct flow_offload_tuple_rhash { struct flow_offload_tuple tuple; }; -#define FLOW_OFFLOAD_SNAT 0x1 -#define FLOW_OFFLOAD_DNAT 0x2 -#define FLOW_OFFLOAD_TEARDOWN 0x8 -#define FLOW_OFFLOAD_HW 0x10 -#define FLOW_OFFLOAD_HW_DYING 0x20 -#define FLOW_OFFLOAD_HW_DEAD 0x40 +enum nf_flow_flags { + NF_FLOW_SNAT, + NF_FLOW_DNAT, + NF_FLOW_TEARDOWN, + NF_FLOW_HW, + NF_FLOW_HW_DYING, + NF_FLOW_HW_DEAD, +}; enum flow_offload_type { NF_FLOW_OFFLOAD_UNSPEC = 0, @@ -98,7 +100,7 @@ enum flow_offload_type { struct flow_offload { struct flow_offload_tuple_rhash tuplehash[FLOW_OFFLOAD_DIR_MAX]; struct nf_conn *ct; - u16 flags; + unsigned long flags; u16 type; u32 timeout; struct rcu_head rcu_head; diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c index a9ed93a9e007..9f134f44d139 100644 --- a/net/netfilter/nf_flow_table_core.c +++ b/net/netfilter/nf_flow_table_core.c @@ -61,9 +61,9 @@ struct flow_offload *flow_offload_alloc(struct nf_conn *ct) flow_offload_fill_dir(flow, FLOW_OFFLOAD_DIR_REPLY); if (ct->status & IPS_SRC_NAT) - flow->flags |= FLOW_OFFLOAD_SNAT; + __set_bit(NF_FLOW_SNAT, &flow->flags); if (ct->status & IPS_DST_NAT) - flow->flags |= FLOW_OFFLOAD_DNAT; + __set_bit(NF_FLOW_DNAT, &flow->flags); return flow; @@ -269,7 +269,7 @@ static void flow_offload_del(struct nf_flowtable *flow_table, if (nf_flow_has_expired(flow)) flow_offload_fixup_ct(flow->ct); - else if (flow->flags & FLOW_OFFLOAD_TEARDOWN) + else if (test_bit(NF_FLOW_TEARDOWN, &flow->flags)) flow_offload_fixup_ct_timeout(flow->ct); flow_offload_free(flow); @@ -277,7 +277,7 @@ static void flow_offload_del(struct nf_flowtable *flow_table, void flow_offload_teardown(struct flow_offload *flow) { - flow->flags |= FLOW_OFFLOAD_TEARDOWN; + set_bit(NF_FLOW_TEARDOWN, &flow->flags); flow_offload_fixup_ct_state(flow->ct); } @@ -298,7 +298,7 @@ flow_offload_lookup(struct nf_flowtable *flow_table, dir = tuplehash->tuple.dir; flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]); - if (flow->flags & FLOW_OFFLOAD_TEARDOWN) + if (test_bit(NF_FLOW_TEARDOWN, &flow->flags)) return NULL; if (unlikely(nf_ct_is_dying(flow->ct))) @@ -347,16 +347,16 @@ static void nf_flow_offload_gc_step(struct flow_offload *flow, void *data) struct nf_flowtable *flow_table = data; if (nf_flow_has_expired(flow) || nf_ct_is_dying(flow->ct) || - (flow->flags & FLOW_OFFLOAD_TEARDOWN)) { - if (flow->flags & FLOW_OFFLOAD_HW) { - if (!(flow->flags & FLOW_OFFLOAD_HW_DYING)) + test_bit(NF_FLOW_TEARDOWN, &flow->flags)) { + if (test_bit(NF_FLOW_HW, &flow->flags)) { + if (!test_bit(NF_FLOW_HW_DYING, &flow->flags)) nf_flow_offload_del(flow_table, flow); - else if (flow->flags & FLOW_OFFLOAD_HW_DEAD) + else if (test_bit(NF_FLOW_HW_DEAD, &flow->flags)) flow_offload_del(flow_table, flow); } else { flow_offload_del(flow_table, flow); } - } else if (flow->flags & FLOW_OFFLOAD_HW) { 
+ } else if (test_bit(NF_FLOW_HW, &flow->flags)) { nf_flow_offload_stats(flow_table, flow); } } diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c index 7ea2ddc2aa93..f4ccb5f5008b 100644 --- a/net/netfilter/nf_flow_table_ip.c +++ b/net/netfilter/nf_flow_table_ip.c @@ -144,11 +144,11 @@ static int nf_flow_nat_ip(const struct flow_offload *flow, struct sk_buff *skb, { struct iphdr *iph = ip_hdr(skb); - if (flow->flags & FLOW_OFFLOAD_SNAT && + if (test_bit(NF_FLOW_SNAT, &flow->flags) && (nf_flow_snat_port(flow, skb, thoff, iph->protocol, dir) < 0 || nf_flow_snat_ip(flow, skb, iph, thoff, dir) < 0)) return -1; - if (flow->flags & FLOW_OFFLOAD_DNAT && + if (test_bit(NF_FLOW_DNAT, &flow->flags) && (nf_flow_dnat_port(flow, skb, thoff, iph->protocol, dir) < 0 || nf_flow_dnat_ip(flow, skb, iph, thoff, dir) < 0)) return -1; @@ -414,11 +414,11 @@ static int nf_flow_nat_ipv6(const struct flow_offload *flow, struct ipv6hdr *ip6h = ipv6_hdr(skb); unsigned int thoff = sizeof(*ip6h); - if (flow->flags & FLOW_OFFLOAD_SNAT && + if (test_bit(NF_FLOW_SNAT, &flow->flags) && (nf_flow_snat_port(flow, skb, thoff, ip6h->nexthdr, dir) < 0 || nf_flow_snat_ipv6(flow, skb, ip6h, thoff, dir) < 0)) return -1; - if (flow->flags & FLOW_OFFLOAD_DNAT && + if (test_bit(NF_FLOW_DNAT, &flow->flags) && (nf_flow_dnat_port(flow, skb, thoff, ip6h->nexthdr, dir) < 0 || nf_flow_dnat_ipv6(flow, skb, ip6h, thoff, dir) < 0)) return -1; diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c index d161623107a1..8a1fe391666e 100644 --- a/net/netfilter/nf_flow_table_offload.c +++ b/net/netfilter/nf_flow_table_offload.c @@ -450,16 +450,16 @@ int nf_flow_rule_route_ipv4(struct net *net, const struct flow_offload *flow, flow_offload_eth_dst(net, flow, dir, flow_rule) < 0) return -1; - if (flow->flags & FLOW_OFFLOAD_SNAT) { + if (test_bit(NF_FLOW_SNAT, &flow->flags)) { flow_offload_ipv4_snat(net, flow, dir, flow_rule); flow_offload_port_snat(net, flow, dir, flow_rule); } - if (flow->flags & FLOW_OFFLOAD_DNAT) { + if (test_bit(NF_FLOW_DNAT, &flow->flags)) { flow_offload_ipv4_dnat(net, flow, dir, flow_rule); flow_offload_port_dnat(net, flow, dir, flow_rule); } - if (flow->flags & FLOW_OFFLOAD_SNAT || - flow->flags & FLOW_OFFLOAD_DNAT) + if (test_bit(NF_FLOW_SNAT, &flow->flags) || + test_bit(NF_FLOW_DNAT, &flow->flags)) flow_offload_ipv4_checksum(net, flow, flow_rule); flow_offload_redirect(flow, dir, flow_rule); @@ -476,11 +476,11 @@ int nf_flow_rule_route_ipv6(struct net *net, const struct flow_offload *flow, flow_offload_eth_dst(net, flow, dir, flow_rule) < 0) return -1; - if (flow->flags & FLOW_OFFLOAD_SNAT) { + if (test_bit(NF_FLOW_SNAT, &flow->flags)) { flow_offload_ipv6_snat(net, flow, dir, flow_rule); flow_offload_port_snat(net, flow, dir, flow_rule); } - if (flow->flags & FLOW_OFFLOAD_DNAT) { + if (test_bit(NF_FLOW_DNAT, &flow->flags)) { flow_offload_ipv6_dnat(net, flow, dir, flow_rule); flow_offload_port_dnat(net, flow, dir, flow_rule); } @@ -636,7 +636,7 @@ static void flow_offload_tuple_del(struct flow_offload_work *offload, list_for_each_entry(block_cb, &flowtable->flow_block.cb_list, list) block_cb->cb(TC_SETUP_CLSFLOWER, &cls_flow, block_cb->cb_priv); - offload->flow->flags |= FLOW_OFFLOAD_HW_DEAD; + set_bit(NF_FLOW_HW_DEAD, &offload->flow->flags); } static int flow_offload_rule_add(struct flow_offload_work *offload, @@ -723,7 +723,7 @@ static void flow_offload_work_handler(struct work_struct *work) case FLOW_CLS_REPLACE: ret = flow_offload_work_add(offload); if (ret 
< 0) - offload->flow->flags &= ~FLOW_OFFLOAD_HW; + __clear_bit(NF_FLOW_HW, &offload->flow->flags); break; case FLOW_CLS_DESTROY: flow_offload_work_del(offload); @@ -776,7 +776,7 @@ void nf_flow_offload_add(struct nf_flowtable *flowtable, if (!offload) return; - flow->flags |= FLOW_OFFLOAD_HW; + __set_bit(NF_FLOW_HW, &flow->flags); flow_offload_queue_work(offload); } @@ -789,7 +789,7 @@ void nf_flow_offload_del(struct nf_flowtable *flowtable, if (!offload) return; - flow->flags |= FLOW_OFFLOAD_HW_DYING; + set_bit(NF_FLOW_HW_DYING, &flow->flags); flow_offload_queue_work(offload); } -- cgit v1.2.3-71-gd317 From a5449cdcaac5c78d62b8bea8f79158071f23da01 Mon Sep 17 00:00:00 2001 From: Pablo Neira Ayuso Date: Tue, 7 Jan 2020 09:56:27 +0100 Subject: netfilter: flowtable: add nf_flowtable_hw_offload() helper function This function checks for the NF_FLOWTABLE_HW_OFFLOAD flag, meaning that the flowtable hardware offload is enabled. Signed-off-by: Pablo Neira Ayuso --- include/net/netfilter/nf_flow_table.h | 5 +++++ net/netfilter/nf_flow_table_core.c | 2 +- net/netfilter/nf_flow_table_offload.c | 4 ++-- 3 files changed, 8 insertions(+), 3 deletions(-) (limited to 'include') diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h index 5a10e28c3e40..9ee1eaeaab04 100644 --- a/include/net/netfilter/nf_flow_table.h +++ b/include/net/netfilter/nf_flow_table.h @@ -47,6 +47,11 @@ struct nf_flowtable { possible_net_t net; }; +static inline bool nf_flowtable_hw_offload(struct nf_flowtable *flowtable) +{ + return flowtable->flags & NF_FLOWTABLE_HW_OFFLOAD; +} + enum flow_offload_tuple_dir { FLOW_OFFLOAD_DIR_ORIGINAL = IP_CT_DIR_ORIGINAL, FLOW_OFFLOAD_DIR_REPLY = IP_CT_DIR_REPLY, diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c index 9f134f44d139..e919bafd68d1 100644 --- a/net/netfilter/nf_flow_table_core.c +++ b/net/netfilter/nf_flow_table_core.c @@ -243,7 +243,7 @@ int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow) return err; } - if (flow_table->flags & NF_FLOWTABLE_HW_OFFLOAD) + if (nf_flowtable_hw_offload(flow_table)) nf_flow_offload_add(flow_table, flow); return 0; diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c index 8a1fe391666e..b4c79fbb2d82 100644 --- a/net/netfilter/nf_flow_table_offload.c +++ b/net/netfilter/nf_flow_table_offload.c @@ -812,7 +812,7 @@ void nf_flow_offload_stats(struct nf_flowtable *flowtable, void nf_flow_table_offload_flush(struct nf_flowtable *flowtable) { - if (flowtable->flags & NF_FLOWTABLE_HW_OFFLOAD) + if (nf_flowtable_hw_offload(flowtable)) flush_work(&nf_flow_offload_work); } @@ -849,7 +849,7 @@ int nf_flow_table_offload_setup(struct nf_flowtable *flowtable, struct flow_block_offload bo = {}; int err; - if (!(flowtable->flags & NF_FLOWTABLE_HW_OFFLOAD)) + if (!nf_flowtable_hw_offload(flowtable)) return 0; if (!dev->netdev_ops->ndo_setup_tc) -- cgit v1.2.3-71-gd317 From f698fe40829b21088d323c8b0a7c626571528fc6 Mon Sep 17 00:00:00 2001 From: Pablo Neira Ayuso Date: Mon, 6 Jan 2020 12:56:47 +0100 Subject: netfilter: flowtable: refresh flow if hardware offload fails If nf_flow_offload_add() fails to add the flow to hardware, then the NF_FLOW_HW_REFRESH flag bit is set and the flow remains in the flowtable software path. If flowtable hardware offload is enabled, this patch enqueues a new request to offload this flow to hardware. 
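The retry is driven from the packet path rather than from the workqueue. Condensing the hunks below into one control flow (nf_flow_offload_refresh() is verbatim from the patch; the wrapper function is illustrative):

static bool nf_flow_offload_refresh(struct nf_flowtable *flow_table,
				    struct flow_offload *flow)
{
	return nf_flowtable_hw_offload(flow_table) &&
	       test_and_clear_bit(NF_FLOW_HW_REFRESH, &flow->flags);
}

/* Per-packet check in the IPv4/IPv6 hooks, before fast-path forwarding. */
static void example_maybe_requeue(struct nf_flowtable *flow_table,
				  struct flow_offload *flow)
{
	if (unlikely(nf_flow_offload_refresh(flow_table, flow)))
		nf_flow_offload_add(flow_table, flow);
}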
Signed-off-by: Pablo Neira Ayuso --- include/net/netfilter/nf_flow_table.h | 1 + net/netfilter/nf_flow_table_core.c | 4 +++- net/netfilter/nf_flow_table_ip.c | 13 +++++++++++++ net/netfilter/nf_flow_table_offload.c | 14 +++++--------- 4 files changed, 22 insertions(+), 10 deletions(-) (limited to 'include') diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h index 9ee1eaeaab04..e0f709d9d547 100644 --- a/include/net/netfilter/nf_flow_table.h +++ b/include/net/netfilter/nf_flow_table.h @@ -95,6 +95,7 @@ enum nf_flow_flags { NF_FLOW_HW, NF_FLOW_HW_DYING, NF_FLOW_HW_DEAD, + NF_FLOW_HW_REFRESH, }; enum flow_offload_type { diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c index e919bafd68d1..7e91989a1b55 100644 --- a/net/netfilter/nf_flow_table_core.c +++ b/net/netfilter/nf_flow_table_core.c @@ -243,8 +243,10 @@ int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow) return err; } - if (nf_flowtable_hw_offload(flow_table)) + if (nf_flowtable_hw_offload(flow_table)) { + __set_bit(NF_FLOW_HW, &flow->flags); nf_flow_offload_add(flow_table, flow); + } return 0; } diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c index f4ccb5f5008b..9e563fd3da0f 100644 --- a/net/netfilter/nf_flow_table_ip.c +++ b/net/netfilter/nf_flow_table_ip.c @@ -232,6 +232,13 @@ static unsigned int nf_flow_xmit_xfrm(struct sk_buff *skb, return NF_STOLEN; } +static bool nf_flow_offload_refresh(struct nf_flowtable *flow_table, + struct flow_offload *flow) +{ + return nf_flowtable_hw_offload(flow_table) && + test_and_clear_bit(NF_FLOW_HW_REFRESH, &flow->flags); +} + unsigned int nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb, const struct nf_hook_state *state) @@ -272,6 +279,9 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb, if (nf_flow_state_check(flow, ip_hdr(skb)->protocol, skb, thoff)) return NF_ACCEPT; + if (unlikely(nf_flow_offload_refresh(flow_table, flow))) + nf_flow_offload_add(flow_table, flow); + if (nf_flow_offload_dst_check(&rt->dst)) { flow_offload_teardown(flow); return NF_ACCEPT; @@ -498,6 +508,9 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb, sizeof(*ip6h))) return NF_ACCEPT; + if (unlikely(nf_flow_offload_refresh(flow_table, flow))) + nf_flow_offload_add(flow_table, flow); + if (nf_flow_offload_dst_check(&rt->dst)) { flow_offload_teardown(flow); return NF_ACCEPT; diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c index b4c79fbb2d82..77b129f196c6 100644 --- a/net/netfilter/nf_flow_table_offload.c +++ b/net/netfilter/nf_flow_table_offload.c @@ -654,20 +654,20 @@ static int flow_offload_rule_add(struct flow_offload_work *offload, return 0; } -static int flow_offload_work_add(struct flow_offload_work *offload) +static void flow_offload_work_add(struct flow_offload_work *offload) { struct nf_flow_rule *flow_rule[FLOW_OFFLOAD_DIR_MAX]; int err; err = nf_flow_offload_alloc(offload, flow_rule); if (err < 0) - return -ENOMEM; + return; err = flow_offload_rule_add(offload, flow_rule); + if (err < 0) + set_bit(NF_FLOW_HW_REFRESH, &offload->flow->flags); nf_flow_offload_destroy(flow_rule); - - return err; } static void flow_offload_work_del(struct flow_offload_work *offload) @@ -712,7 +712,6 @@ static void flow_offload_work_handler(struct work_struct *work) { struct flow_offload_work *offload, *next; LIST_HEAD(offload_pending_list); - int ret; spin_lock_bh(&flow_offload_pending_list_lock); 
list_replace_init(&flow_offload_pending_list, &offload_pending_list); @@ -721,9 +720,7 @@ static void flow_offload_work_handler(struct work_struct *work) list_for_each_entry_safe(offload, next, &offload_pending_list, list) { switch (offload->cmd) { case FLOW_CLS_REPLACE: - ret = flow_offload_work_add(offload); - if (ret < 0) - __clear_bit(NF_FLOW_HW, &offload->flow->flags); + flow_offload_work_add(offload); break; case FLOW_CLS_DESTROY: flow_offload_work_del(offload); @@ -776,7 +773,6 @@ void nf_flow_offload_add(struct nf_flowtable *flowtable, if (!offload) return; - __set_bit(NF_FLOW_HW, &flow->flags); flow_offload_queue_work(offload); } -- cgit v1.2.3-71-gd317 From 9d1f979986c2e29632b6a8f7a8ef8b3c7d24a48c Mon Sep 17 00:00:00 2001 From: Jeremy Sowden Date: Wed, 15 Jan 2020 20:05:51 +0000 Subject: netfilter: bitwise: add NFTA_BITWISE_OP netlink attribute. Add a new bitwise netlink attribute, NFTA_BITWISE_OP, which is set to a value of a new enum, nft_bitwise_ops. It describes the type of operation an expression contains. Currently, it only has one value: NFT_BITWISE_BOOL. More values will be added later to implement shifts. Signed-off-by: Jeremy Sowden Signed-off-by: Pablo Neira Ayuso --- include/uapi/linux/netfilter/nf_tables.h | 12 ++++++++++++ net/netfilter/nft_bitwise.c | 16 ++++++++++++++++ 2 files changed, 28 insertions(+) (limited to 'include') diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h index dd4611767933..0cddf357281f 100644 --- a/include/uapi/linux/netfilter/nf_tables.h +++ b/include/uapi/linux/netfilter/nf_tables.h @@ -484,6 +484,16 @@ enum nft_immediate_attributes { }; #define NFTA_IMMEDIATE_MAX (__NFTA_IMMEDIATE_MAX - 1) +/** + * enum nft_bitwise_ops - nf_tables bitwise operations + * + * @NFT_BITWISE_BOOL: mask-and-xor operation used to implement NOT, AND, OR and + * XOR boolean operations + */ +enum nft_bitwise_ops { + NFT_BITWISE_BOOL, +}; + /** * enum nft_bitwise_attributes - nf_tables bitwise expression netlink attributes * @@ -492,6 +502,7 @@ enum nft_immediate_attributes { * @NFTA_BITWISE_LEN: length of operands (NLA_U32) * @NFTA_BITWISE_MASK: mask value (NLA_NESTED: nft_data_attributes) * @NFTA_BITWISE_XOR: xor value (NLA_NESTED: nft_data_attributes) + * @NFTA_BITWISE_OP: type of operation (NLA_U32: nft_bitwise_ops) * * The bitwise expression performs the following operation: * @@ -512,6 +523,7 @@ enum nft_bitwise_attributes { NFTA_BITWISE_LEN, NFTA_BITWISE_MASK, NFTA_BITWISE_XOR, + NFTA_BITWISE_OP, __NFTA_BITWISE_MAX }; #define NFTA_BITWISE_MAX (__NFTA_BITWISE_MAX - 1) diff --git a/net/netfilter/nft_bitwise.c b/net/netfilter/nft_bitwise.c index c15e9beb5243..6948df7b0587 100644 --- a/net/netfilter/nft_bitwise.c +++ b/net/netfilter/nft_bitwise.c @@ -18,6 +18,7 @@ struct nft_bitwise { enum nft_registers sreg:8; enum nft_registers dreg:8; + enum nft_bitwise_ops op:8; u8 len; struct nft_data mask; struct nft_data xor; @@ -41,6 +42,7 @@ static const struct nla_policy nft_bitwise_policy[NFTA_BITWISE_MAX + 1] = { [NFTA_BITWISE_LEN] = { .type = NLA_U32 }, [NFTA_BITWISE_MASK] = { .type = NLA_NESTED }, [NFTA_BITWISE_XOR] = { .type = NLA_NESTED }, + [NFTA_BITWISE_OP] = { .type = NLA_U32 }, }; static int nft_bitwise_init(const struct nft_ctx *ctx, @@ -76,6 +78,18 @@ static int nft_bitwise_init(const struct nft_ctx *ctx, if (err < 0) return err; + if (tb[NFTA_BITWISE_OP]) { + priv->op = ntohl(nla_get_be32(tb[NFTA_BITWISE_OP])); + switch (priv->op) { + case NFT_BITWISE_BOOL: + break; + default: + return -EOPNOTSUPP; + } + } else 
{ + priv->op = NFT_BITWISE_BOOL; + } + err = nft_data_init(NULL, &priv->mask, sizeof(priv->mask), &d1, tb[NFTA_BITWISE_MASK]); if (err < 0) @@ -112,6 +126,8 @@ static int nft_bitwise_dump(struct sk_buff *skb, const struct nft_expr *expr) return -1; if (nla_put_be32(skb, NFTA_BITWISE_LEN, htonl(priv->len))) return -1; + if (nla_put_be32(skb, NFTA_BITWISE_OP, htonl(priv->op))) + return -1; if (nft_data_dump(skb, NFTA_BITWISE_MASK, &priv->mask, NFT_DATA_VALUE, priv->len) < 0) -- cgit v1.2.3-71-gd317 From 779f725e142cd987a69296c0eb7d416ee3ba57dd Mon Sep 17 00:00:00 2001 From: Jeremy Sowden Date: Wed, 15 Jan 2020 20:05:56 +0000 Subject: netfilter: bitwise: add NFTA_BITWISE_DATA attribute. Add a new bitwise netlink attribute that will be used by shift operations to store the size of the shift. It is not used by boolean operations. Signed-off-by: Jeremy Sowden Signed-off-by: Pablo Neira Ayuso --- include/uapi/linux/netfilter/nf_tables.h | 3 +++ net/netfilter/nft_bitwise.c | 5 +++++ 2 files changed, 8 insertions(+) (limited to 'include') diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h index 0cddf357281f..8bef0620bc4f 100644 --- a/include/uapi/linux/netfilter/nf_tables.h +++ b/include/uapi/linux/netfilter/nf_tables.h @@ -503,6 +503,8 @@ enum nft_bitwise_ops { * @NFTA_BITWISE_MASK: mask value (NLA_NESTED: nft_data_attributes) * @NFTA_BITWISE_XOR: xor value (NLA_NESTED: nft_data_attributes) * @NFTA_BITWISE_OP: type of operation (NLA_U32: nft_bitwise_ops) + * @NFTA_BITWISE_DATA: argument for non-boolean operations + * (NLA_NESTED: nft_data_attributes) * * The bitwise expression performs the following operation: * @@ -524,6 +526,7 @@ enum nft_bitwise_attributes { NFTA_BITWISE_MASK, NFTA_BITWISE_XOR, NFTA_BITWISE_OP, + NFTA_BITWISE_DATA, __NFTA_BITWISE_MAX }; #define NFTA_BITWISE_MAX (__NFTA_BITWISE_MAX - 1) diff --git a/net/netfilter/nft_bitwise.c b/net/netfilter/nft_bitwise.c index b4619d9989ea..744008a527fb 100644 --- a/net/netfilter/nft_bitwise.c +++ b/net/netfilter/nft_bitwise.c @@ -22,6 +22,7 @@ struct nft_bitwise { u8 len; struct nft_data mask; struct nft_data xor; + struct nft_data data; }; static void nft_bitwise_eval_bool(u32 *dst, const u32 *src, @@ -54,6 +55,7 @@ static const struct nla_policy nft_bitwise_policy[NFTA_BITWISE_MAX + 1] = { [NFTA_BITWISE_MASK] = { .type = NLA_NESTED }, [NFTA_BITWISE_XOR] = { .type = NLA_NESTED }, [NFTA_BITWISE_OP] = { .type = NLA_U32 }, + [NFTA_BITWISE_DATA] = { .type = NLA_NESTED }, }; static int nft_bitwise_init_bool(struct nft_bitwise *priv, @@ -62,6 +64,9 @@ static int nft_bitwise_init_bool(struct nft_bitwise *priv, struct nft_data_desc d1, d2; int err; + if (tb[NFTA_BITWISE_DATA]) + return -EINVAL; + if (!tb[NFTA_BITWISE_MASK] || !tb[NFTA_BITWISE_XOR]) return -EINVAL; -- cgit v1.2.3-71-gd317 From 567d746b55bc66d3800c9ae91d50f0c5deb2fd93 Mon Sep 17 00:00:00 2001 From: Jeremy Sowden Date: Wed, 15 Jan 2020 20:05:57 +0000 Subject: netfilter: bitwise: add support for shifts. Hitherto nft_bitwise has only supported boolean operations: NOT, AND, OR and XOR. Extend it to do shifts as well. 
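For illustration, the shift support added below propagates a carry between adjacent 32-bit words. A minimal userspace model of the left-shift loop (the helper name, main(), and the word order are assumptions of this sketch, not part of the patch; word 0 is taken as most significant, matching how the eval code carries bits from word i into word i - 1):

#include <stdint.h>
#include <stdio.h>

/* Sketch of nft_bitwise_eval_lshift()'s loop. Assumes 1 <= shift <= 31:
 * a shift of 0 would turn the carry expression into an undefined
 * full-width shift in C.
 */
static void lshift_words(uint32_t *dst, const uint32_t *src,
			 unsigned int nwords, unsigned int shift)
{
	uint32_t carry = 0;
	unsigned int i;

	for (i = nwords; i > 0; i--) {
		dst[i - 1] = (src[i - 1] << shift) | carry;
		carry = src[i - 1] >> (32 - shift);
	}
}

int main(void)
{
	uint32_t src[2] = { 0x00000001, 0x80000000 };	/* 0x180000000 */
	uint32_t dst[2];

	lshift_words(dst, src, 2, 1);
	printf("%08x %08x\n", dst[0], dst[1]);	/* 00000003 00000000 */
	return 0;
}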
Signed-off-by: Jeremy Sowden Signed-off-by: Pablo Neira Ayuso --- include/uapi/linux/netfilter/nf_tables.h | 9 +++- net/netfilter/nft_bitwise.c | 77 ++++++++++++++++++++++++++++++++ 2 files changed, 84 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h index 8bef0620bc4f..261864736b26 100644 --- a/include/uapi/linux/netfilter/nf_tables.h +++ b/include/uapi/linux/netfilter/nf_tables.h @@ -489,9 +489,13 @@ enum nft_immediate_attributes { * * @NFT_BITWISE_BOOL: mask-and-xor operation used to implement NOT, AND, OR and * XOR boolean operations + * @NFT_BITWISE_LSHIFT: left-shift operation + * @NFT_BITWISE_RSHIFT: right-shift operation */ enum nft_bitwise_ops { NFT_BITWISE_BOOL, + NFT_BITWISE_LSHIFT, + NFT_BITWISE_RSHIFT, }; /** @@ -506,11 +510,12 @@ enum nft_bitwise_ops { * @NFTA_BITWISE_DATA: argument for non-boolean operations * (NLA_NESTED: nft_data_attributes) * - * The bitwise expression performs the following operation: + * The bitwise expression supports boolean and shift operations. It implements + * the boolean operations by performing the following operation: * * dreg = (sreg & mask) ^ xor * - * which allow to express all bitwise operations: + * with these mask and xor values: * * mask xor * NOT: 1 1 diff --git a/net/netfilter/nft_bitwise.c b/net/netfilter/nft_bitwise.c index 744008a527fb..0ed2281f03be 100644 --- a/net/netfilter/nft_bitwise.c +++ b/net/netfilter/nft_bitwise.c @@ -34,6 +34,32 @@ static void nft_bitwise_eval_bool(u32 *dst, const u32 *src, dst[i] = (src[i] & priv->mask.data[i]) ^ priv->xor.data[i]; } +static void nft_bitwise_eval_lshift(u32 *dst, const u32 *src, + const struct nft_bitwise *priv) +{ + u32 shift = priv->data.data[0]; + unsigned int i; + u32 carry = 0; + + for (i = DIV_ROUND_UP(priv->len, sizeof(u32)); i > 0; i--) { + dst[i - 1] = (src[i - 1] << shift) | carry; + carry = src[i - 1] >> (BITS_PER_TYPE(u32) - shift); + } +} + +static void nft_bitwise_eval_rshift(u32 *dst, const u32 *src, + const struct nft_bitwise *priv) +{ + u32 shift = priv->data.data[0]; + unsigned int i; + u32 carry = 0; + + for (i = 0; i < DIV_ROUND_UP(priv->len, sizeof(u32)); i++) { + dst[i] = carry | (src[i] >> shift); + carry = src[i] << (BITS_PER_TYPE(u32) - shift); + } +} + void nft_bitwise_eval(const struct nft_expr *expr, struct nft_regs *regs, const struct nft_pktinfo *pkt) { @@ -45,6 +71,12 @@ void nft_bitwise_eval(const struct nft_expr *expr, case NFT_BITWISE_BOOL: nft_bitwise_eval_bool(dst, src, priv); break; + case NFT_BITWISE_LSHIFT: + nft_bitwise_eval_lshift(dst, src, priv); + break; + case NFT_BITWISE_RSHIFT: + nft_bitwise_eval_rshift(dst, src, priv); + break; } } @@ -97,6 +129,32 @@ err1: return err; } +static int nft_bitwise_init_shift(struct nft_bitwise *priv, + const struct nlattr *const tb[]) +{ + struct nft_data_desc d; + int err; + + if (tb[NFTA_BITWISE_MASK] || + tb[NFTA_BITWISE_XOR]) + return -EINVAL; + + if (!tb[NFTA_BITWISE_DATA]) + return -EINVAL; + + err = nft_data_init(NULL, &priv->data, sizeof(priv->data), &d, + tb[NFTA_BITWISE_DATA]); + if (err < 0) + return err; + if (d.type != NFT_DATA_VALUE || d.len != sizeof(u32) || + priv->data.data[0] >= BITS_PER_TYPE(u32)) { + nft_data_release(&priv->data, d.type); + return -EINVAL; + } + + return 0; +} + static int nft_bitwise_init(const struct nft_ctx *ctx, const struct nft_expr *expr, const struct nlattr * const tb[]) @@ -131,6 +189,8 @@ static int nft_bitwise_init(const struct nft_ctx *ctx, priv->op = 
ntohl(nla_get_be32(tb[NFTA_BITWISE_OP])); switch (priv->op) { case NFT_BITWISE_BOOL: + case NFT_BITWISE_LSHIFT: + case NFT_BITWISE_RSHIFT: break; default: return -EOPNOTSUPP; @@ -143,6 +203,10 @@ static int nft_bitwise_init(const struct nft_ctx *ctx, case NFT_BITWISE_BOOL: err = nft_bitwise_init_bool(priv, tb); break; + case NFT_BITWISE_LSHIFT: + case NFT_BITWISE_RSHIFT: + err = nft_bitwise_init_shift(priv, tb); + break; } return err; @@ -162,6 +226,15 @@ static int nft_bitwise_dump_bool(struct sk_buff *skb, return 0; } +static int nft_bitwise_dump_shift(struct sk_buff *skb, + const struct nft_bitwise *priv) +{ + if (nft_data_dump(skb, NFTA_BITWISE_DATA, &priv->data, + NFT_DATA_VALUE, sizeof(u32)) < 0) + return -1; + return 0; +} + static int nft_bitwise_dump(struct sk_buff *skb, const struct nft_expr *expr) { const struct nft_bitwise *priv = nft_expr_priv(expr); @@ -180,6 +253,10 @@ static int nft_bitwise_dump(struct sk_buff *skb, const struct nft_expr *expr) case NFT_BITWISE_BOOL: err = nft_bitwise_dump_bool(skb, priv); break; + case NFT_BITWISE_LSHIFT: + case NFT_BITWISE_RSHIFT: + err = nft_bitwise_dump_shift(skb, priv); + break; } return err; -- cgit v1.2.3-71-gd317 From f397464eb7c25bda903ec8b9cf5701e72a1f7b16 Mon Sep 17 00:00:00 2001 From: Eran Ben Elisha Date: Mon, 7 Oct 2019 10:29:46 +0300 Subject: net/mlx5: Add structures layout for new MCAM access reg groups MCAM has 3 access_reg_groups (0-2). Define data structures in order to read and parse access_reg_groups #1 and #2. Signed-off-by: Eran Ben Elisha Signed-off-by: Saeed Mahameed --- include/linux/mlx5/mlx5_ifc.h | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) (limited to 'include') diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index c6abaf4f1c55..43cdf9211747 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -8832,6 +8832,28 @@ struct mlx5_ifc_mcam_access_reg_bits { u8 regs_31_to_0[0x20]; }; +struct mlx5_ifc_mcam_access_reg_bits1 { + u8 regs_127_to_96[0x20]; + + u8 regs_95_to_64[0x20]; + + u8 regs_63_to_32[0x20]; + + u8 regs_31_to_0[0x20]; +}; + +struct mlx5_ifc_mcam_access_reg_bits2 { + u8 regs_127_to_99[0x1d]; + u8 mirc[0x1]; + u8 regs_97_to_96[0x2]; + + u8 regs_95_to_64[0x20]; + + u8 regs_63_to_32[0x20]; + + u8 regs_31_to_0[0x20]; +}; + struct mlx5_ifc_mcam_reg_bits { u8 reserved_at_0[0x8]; u8 feature_group[0x8]; @@ -8842,6 +8864,8 @@ struct mlx5_ifc_mcam_reg_bits { union { struct mlx5_ifc_mcam_access_reg_bits access_regs; + struct mlx5_ifc_mcam_access_reg_bits1 access_regs1; + struct mlx5_ifc_mcam_access_reg_bits2 access_regs2; u8 reserved_at_0[0x80]; } mng_access_reg_cap_mask; -- cgit v1.2.3-71-gd317 From 932ef155117cc5caf1108bd27664dab974ba6e89 Mon Sep 17 00:00:00 2001 From: Eran Ben Elisha Date: Mon, 7 Oct 2019 10:31:42 +0300 Subject: net/mlx5: Read MCAM register groups 1 and 2 On load, the driver caches MCAM (Management Capabilities Mask Register) registers. In addition to the only MCAM register group (0) the driver already reads, here we add support for reading groups 1 and 2.
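As an illustration of what the per-group cache enables (a sketch, not part of the series): a capability bit that lives in access register group 2, such as mirc, can then be tested through the group-aware macro added below in device.h.

/* Sketch only; relies on the MLX5_CAP_MCAM_REG2() macro this patch adds. */
static bool mlx5_mirc_supported(struct mlx5_core_dev *dev)
{
	return MLX5_CAP_MCAM_REG2(dev, mirc);
}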
Signed-off-by: Eran Ben Elisha Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/fw.c | 15 +++++++++------ include/linux/mlx5/device.h | 14 +++++++++++++- include/linux/mlx5/driver.h | 2 +- 3 files changed, 23 insertions(+), 8 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw.c b/drivers/net/ethernet/mellanox/mlx5/core/fw.c index c375edfe528c..d89ff1d09119 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fw.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fw.c @@ -131,11 +131,11 @@ static int mlx5_get_pcam_reg(struct mlx5_core_dev *dev) MLX5_PCAM_REGS_5000_TO_507F); } -static int mlx5_get_mcam_reg(struct mlx5_core_dev *dev) +static int mlx5_get_mcam_access_reg_group(struct mlx5_core_dev *dev, + enum mlx5_mcam_reg_groups group) { - return mlx5_query_mcam_reg(dev, dev->caps.mcam, - MLX5_MCAM_FEATURE_ENHANCED_FEATURES, - MLX5_MCAM_REGS_FIRST_128); + return mlx5_query_mcam_reg(dev, dev->caps.mcam[group], + MLX5_MCAM_FEATURE_ENHANCED_FEATURES, group); } static int mlx5_get_qcam_reg(struct mlx5_core_dev *dev) @@ -221,8 +221,11 @@ int mlx5_query_hca_caps(struct mlx5_core_dev *dev) if (MLX5_CAP_GEN(dev, pcam_reg)) mlx5_get_pcam_reg(dev); - if (MLX5_CAP_GEN(dev, mcam_reg)) - mlx5_get_mcam_reg(dev); + if (MLX5_CAP_GEN(dev, mcam_reg)) { + mlx5_get_mcam_access_reg_group(dev, MLX5_MCAM_REGS_FIRST_128); + mlx5_get_mcam_access_reg_group(dev, MLX5_MCAM_REGS_0x9080_0x90FF); + mlx5_get_mcam_access_reg_group(dev, MLX5_MCAM_REGS_0x9100_0x917F); + } if (MLX5_CAP_GEN(dev, qcam_reg)) mlx5_get_qcam_reg(dev); diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h index 1a1c53f0262d..0e62c3db45e5 100644 --- a/include/linux/mlx5/device.h +++ b/include/linux/mlx5/device.h @@ -1121,6 +1121,9 @@ enum mlx5_pcam_feature_groups { enum mlx5_mcam_reg_groups { MLX5_MCAM_REGS_FIRST_128 = 0x0, + MLX5_MCAM_REGS_0x9080_0x90FF = 0x1, + MLX5_MCAM_REGS_0x9100_0x917F = 0x2, + MLX5_MCAM_REGS_NUM = 0x3, }; enum mlx5_mcam_feature_groups { @@ -1269,7 +1272,16 @@ enum mlx5_qcam_feature_groups { MLX5_GET(pcam_reg, (mdev)->caps.pcam, port_access_reg_cap_mask.regs_5000_to_507f.reg) #define MLX5_CAP_MCAM_REG(mdev, reg) \ - MLX5_GET(mcam_reg, (mdev)->caps.mcam, mng_access_reg_cap_mask.access_regs.reg) + MLX5_GET(mcam_reg, (mdev)->caps.mcam[MLX5_MCAM_REGS_FIRST_128], \ + mng_access_reg_cap_mask.access_regs.reg) + +#define MLX5_CAP_MCAM_REG1(mdev, reg) \ + MLX5_GET(mcam_reg, (mdev)->caps.mcam[MLX5_MCAM_REGS_0x9080_0x90FF], \ + mng_access_reg_cap_mask.access_regs1.reg) + +#define MLX5_CAP_MCAM_REG2(mdev, reg) \ + MLX5_GET(mcam_reg, (mdev)->caps.mcam[MLX5_MCAM_REGS_0x9100_0x917F], \ + mng_access_reg_cap_mask.access_regs2.reg) #define MLX5_CAP_MCAM_FEATURE(mdev, fld) \ MLX5_GET(mcam_reg, (mdev)->caps.mcam, mng_feature_cap_mask.enhanced_features.fld) diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h index 27200dea0297..54431256af42 100644 --- a/include/linux/mlx5/driver.h +++ b/include/linux/mlx5/driver.h @@ -684,7 +684,7 @@ struct mlx5_core_dev { u32 hca_cur[MLX5_CAP_NUM][MLX5_UN_SZ_DW(hca_cap_union)]; u32 hca_max[MLX5_CAP_NUM][MLX5_UN_SZ_DW(hca_cap_union)]; u32 pcam[MLX5_ST_SZ_DW(pcam_reg)]; - u32 mcam[MLX5_ST_SZ_DW(mcam_reg)]; + u32 mcam[MLX5_MCAM_REGS_NUM][MLX5_ST_SZ_DW(mcam_reg)]; u32 fpga[MLX5_ST_SZ_DW(fpga_cap)]; u32 qcam[MLX5_ST_SZ_DW(qcam_reg)]; u8 embedded_cpu; -- cgit v1.2.3-71-gd317 From bab58ba10ecfa39c46d280d2acbca6054e1e863d Mon Sep 17 00:00:00 2001 From: Eran Ben Elisha Date: Mon, 7 Oct 2019 10:30:32 +0300 Subject: net/mlx5: Add 
structures and defines for MIRC register Add needed structures, layouts and defines for MIRC (Management Image Re-activation Control) register. This structure will be used for the FSM reactivation flow in the downstream patches. Signed-off-by: Eran Ben Elisha Signed-off-by: Saeed Mahameed --- include/linux/mlx5/driver.h | 1 + include/linux/mlx5/mlx5_ifc.h | 8 ++++++++ 2 files changed, 9 insertions(+) (limited to 'include') diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h index 54431256af42..7848b9858587 100644 --- a/include/linux/mlx5/driver.h +++ b/include/linux/mlx5/driver.h @@ -145,6 +145,7 @@ enum { MLX5_REG_MCC = 0x9062, MLX5_REG_MCDA = 0x9063, MLX5_REG_MCAM = 0x907f, + MLX5_REG_MIRC = 0x9162, }; enum mlx5_qpts_trust_state { diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index 43cdf9211747..a133583c3e4f 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -9471,6 +9471,13 @@ struct mlx5_ifc_mcda_reg_bits { u8 data[0][0x20]; }; +struct mlx5_ifc_mirc_reg_bits { + u8 reserved_at_0[0x18]; + u8 status_code[0x8]; + + u8 reserved_at_20[0x20]; +}; + union mlx5_ifc_ports_control_registers_document_bits { struct mlx5_ifc_bufferx_reg_bits bufferx_reg; struct mlx5_ifc_eth_2819_cntrs_grp_data_layout_bits eth_2819_cntrs_grp_data_layout; @@ -9526,6 +9533,7 @@ union mlx5_ifc_ports_control_registers_document_bits { struct mlx5_ifc_mcqi_reg_bits mcqi_reg; struct mlx5_ifc_mcc_reg_bits mcc_reg; struct mlx5_ifc_mcda_reg_bits mcda_reg; + struct mlx5_ifc_mirc_reg_bits mirc_reg; u8 reserved_at_0[0x60e0]; }; -- cgit v1.2.3-71-gd317 From 609b82727f719b41b50440c4028d48d0b2e04913 Mon Sep 17 00:00:00 2001 From: Aya Levin Date: Mon, 4 Nov 2019 14:51:55 +0200 Subject: net/mlx5: Expose resource dump register mapping Add new register enumeration for resource dump. Add layout mapping for resource dump: access command and response. 
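A note on reading these layouts: each "u8 name[0xN]" in mlx5_ifc declares an N-bit field at a fixed offset inside a big-endian register buffer, which the driver accesses with the MLX5_GET()/MLX5_SET() macros. A self-contained userspace approximation of the extraction, using the first dword of the resource_dump layout added below (the helper is illustrative, not the kernel macro, and assumes a field never crosses a 32-bit boundary):

#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>

static uint32_t get_bits(const uint32_t *buf, unsigned int bit_off,
			 unsigned int bit_sz)
{
	uint32_t word = ntohl(buf[bit_off / 32]);
	unsigned int shift = 32 - (bit_off % 32) - bit_sz;
	uint32_t mask = bit_sz == 32 ? ~0u : (1u << bit_sz) - 1;

	return (word >> shift) & mask;
}

int main(void)
{
	/* Dword 0 of resource_dump: more_dump[1], inline_dump[1],
	 * reserved[10], seq_num[4], segment_type[16].
	 */
	uint32_t dw0 = htonl(0x80031234);

	printf("more_dump=%u seq_num=%u segment_type=0x%04x\n",
	       get_bits(&dw0, 0, 1),	/* 1 */
	       get_bits(&dw0, 12, 4),	/* 3 */
	       get_bits(&dw0, 16, 16));	/* 0x1234 */
	return 0;
}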
Signed-off-by: Aya Levin Reviewed-by: Moshe Shemesh Signed-off-by: Saeed Mahameed --- include/linux/mlx5/driver.h | 1 + include/linux/mlx5/mlx5_ifc.h | 130 +++++++++++++++++++++++++++++++++++++++++- 2 files changed, 130 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h index 7848b9858587..c821fa4d7475 100644 --- a/include/linux/mlx5/driver.h +++ b/include/linux/mlx5/driver.h @@ -146,6 +146,7 @@ enum { MLX5_REG_MCDA = 0x9063, MLX5_REG_MCAM = 0x907f, MLX5_REG_MIRC = 0x9162, + MLX5_REG_RESOURCE_DUMP = 0xC000, }; enum mlx5_qpts_trust_state { diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index a133583c3e4f..6fe0431e11ec 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -823,7 +823,9 @@ struct mlx5_ifc_qos_cap_bits { struct mlx5_ifc_debug_cap_bits { u8 core_dump_general[0x1]; u8 core_dump_qp[0x1]; - u8 reserved_at_2[0x1e]; + u8 reserved_at_2[0x7]; + u8 resource_dump[0x1]; + u8 reserved_at_a[0x16]; u8 reserved_at_20[0x2]; u8 stall_detect[0x1]; @@ -1767,6 +1769,132 @@ struct mlx5_ifc_resize_field_select_bits { u8 resize_field_select[0x20]; }; +struct mlx5_ifc_resource_dump_bits { + u8 more_dump[0x1]; + u8 inline_dump[0x1]; + u8 reserved_at_2[0xa]; + u8 seq_num[0x4]; + u8 segment_type[0x10]; + + u8 reserved_at_20[0x10]; + u8 vhca_id[0x10]; + + u8 index1[0x20]; + + u8 index2[0x20]; + + u8 num_of_obj1[0x10]; + u8 num_of_obj2[0x10]; + + u8 reserved_at_a0[0x20]; + + u8 device_opaque[0x40]; + + u8 mkey[0x20]; + + u8 size[0x20]; + + u8 address[0x40]; + + u8 inline_data[52][0x20]; +}; + +struct mlx5_ifc_resource_dump_menu_record_bits { + u8 reserved_at_0[0x4]; + u8 num_of_obj2_supports_active[0x1]; + u8 num_of_obj2_supports_all[0x1]; + u8 must_have_num_of_obj2[0x1]; + u8 support_num_of_obj2[0x1]; + u8 num_of_obj1_supports_active[0x1]; + u8 num_of_obj1_supports_all[0x1]; + u8 must_have_num_of_obj1[0x1]; + u8 support_num_of_obj1[0x1]; + u8 must_have_index2[0x1]; + u8 support_index2[0x1]; + u8 must_have_index1[0x1]; + u8 support_index1[0x1]; + u8 segment_type[0x10]; + + u8 segment_name[4][0x20]; + + u8 index1_name[4][0x20]; + + u8 index2_name[4][0x20]; +}; + +struct mlx5_ifc_resource_dump_segment_header_bits { + u8 length_dw[0x10]; + u8 segment_type[0x10]; +}; + +struct mlx5_ifc_resource_dump_command_segment_bits { + struct mlx5_ifc_resource_dump_segment_header_bits segment_header; + + u8 segment_called[0x10]; + u8 vhca_id[0x10]; + + u8 index1[0x20]; + + u8 index2[0x20]; + + u8 num_of_obj1[0x10]; + u8 num_of_obj2[0x10]; +}; + +struct mlx5_ifc_resource_dump_error_segment_bits { + struct mlx5_ifc_resource_dump_segment_header_bits segment_header; + + u8 reserved_at_20[0x10]; + u8 syndrome_id[0x10]; + + u8 reserved_at_40[0x40]; + + u8 error[8][0x20]; +}; + +struct mlx5_ifc_resource_dump_info_segment_bits { + struct mlx5_ifc_resource_dump_segment_header_bits segment_header; + + u8 reserved_at_20[0x18]; + u8 dump_version[0x8]; + + u8 hw_version[0x20]; + + u8 fw_version[0x20]; +}; + +struct mlx5_ifc_resource_dump_menu_segment_bits { + struct mlx5_ifc_resource_dump_segment_header_bits segment_header; + + u8 reserved_at_20[0x10]; + u8 num_of_records[0x10]; + + struct mlx5_ifc_resource_dump_menu_record_bits record[0]; +}; + +struct mlx5_ifc_resource_dump_resource_segment_bits { + struct mlx5_ifc_resource_dump_segment_header_bits segment_header; + + u8 reserved_at_20[0x20]; + + u8 index1[0x20]; + + u8 index2[0x20]; + + u8 payload[0][0x20]; +}; + +struct 
mlx5_ifc_resource_dump_terminate_segment_bits { + struct mlx5_ifc_resource_dump_segment_header_bits segment_header; +}; + +struct mlx5_ifc_menu_resource_dump_response_bits { + struct mlx5_ifc_resource_dump_info_segment_bits info; + struct mlx5_ifc_resource_dump_command_segment_bits cmd; + struct mlx5_ifc_resource_dump_menu_segment_bits menu; + struct mlx5_ifc_resource_dump_terminate_segment_bits terminate; +}; + enum { MLX5_MODIFY_FIELD_SELECT_MODIFY_FIELD_SELECT_CQ_PERIOD = 0x1, MLX5_MODIFY_FIELD_SELECT_MODIFY_FIELD_SELECT_CQ_MAX_COUNT = 0x2, -- cgit v1.2.3-71-gd317 From 31d8bde1c8812c9b44065dcd98e554488c6a98d2 Mon Sep 17 00:00:00 2001 From: Hamdan Igbaria Date: Thu, 9 Jan 2020 13:26:53 +0200 Subject: net/mlx5: Add copy header action struct layout Add a definition for the copy header action. The copy action is used to copy header fields from source to destination. Signed-off-by: Hamdan Igbaria Signed-off-by: Alex Vesker Reviewed-by: Alex Vesker Signed-off-by: Saeed Mahameed --- include/linux/mlx5/mlx5_ifc.h | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) (limited to 'include') diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index 6fe0431e11ec..23613a6ea51c 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -5609,6 +5609,21 @@ struct mlx5_ifc_add_action_in_bits { u8 data[0x20]; }; +struct mlx5_ifc_copy_action_in_bits { + u8 action_type[0x4]; + u8 src_field[0xc]; + u8 reserved_at_10[0x3]; + u8 src_offset[0x5]; + u8 reserved_at_18[0x3]; + u8 length[0x5]; + + u8 reserved_at_20[0x4]; + u8 dst_field[0xc]; + u8 reserved_at_30[0x3]; + u8 dst_offset[0x5]; + u8 reserved_at_38[0x8]; +}; + union mlx5_ifc_set_action_in_add_action_in_auto_bits { struct mlx5_ifc_set_action_in_bits set_action_in; struct mlx5_ifc_add_action_in_bits add_action_in; @@ -5618,6 +5633,7 @@ union mlx5_ifc_set_action_in_add_action_in_auto_bits { enum { MLX5_ACTION_TYPE_SET = 0x1, MLX5_ACTION_TYPE_ADD = 0x2, + MLX5_ACTION_TYPE_COPY = 0x3, }; enum { -- cgit v1.2.3-71-gd317 From 822e114b50641d3b57d2eb30939e60d8b4758288 Mon Sep 17 00:00:00 2001 From: Paul Blakey Date: Mon, 1 Apr 2019 13:31:32 +0300 Subject: net/mlx5: Add mlx5_ifc definitions for connection tracking support Add the required hardware definitions to mlx5_ifc: ignore_flow_level, registers, copy_header, and fwd_and_modify cap.
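A hypothetical encoding sketch (not from this series): with the copy_action_in layout above and the metadata register action fields, copying the low byte of metadata register C_1 into C_0 could look like this, assuming the kernel's existing MLX5_SET() macro; the surrounding modify-header command allocation is omitted, and the field values are illustrative only.

/* Sketch only: hardware semantics of length/offset are assumed here. */
static void build_copy_action(void *action)
{
	MLX5_SET(copy_action_in, action, action_type, MLX5_ACTION_TYPE_COPY);
	MLX5_SET(copy_action_in, action, src_field,
		 MLX5_ACTION_IN_FIELD_METADATA_REG_C_1);
	MLX5_SET(copy_action_in, action, src_offset, 0);
	MLX5_SET(copy_action_in, action, length, 8);	/* copy 8 bits */
	MLX5_SET(copy_action_in, action, dst_field,
		 MLX5_ACTION_IN_FIELD_METADATA_REG_C_0);
	MLX5_SET(copy_action_in, action, dst_offset, 0);
}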
Signed-off-by: Paul Blakey Reviewed-by: Roi Dayan Reviewed-by: Oz Shlomo Reviewed-by: Mark Bloch Signed-off-by: Saeed Mahameed --- include/linux/mlx5/mlx5_ifc.h | 24 ++++++++++++++++++++---- 1 file changed, 20 insertions(+), 4 deletions(-) (limited to 'include') diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index 23613a6ea51c..e9c165ffe3f9 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -375,8 +375,17 @@ struct mlx5_ifc_flow_table_fields_supported_bits { u8 outer_esp_spi[0x1]; u8 reserved_at_58[0x2]; u8 bth_dst_qp[0x1]; + u8 reserved_at_5b[0x5]; - u8 reserved_at_5b[0x25]; + u8 reserved_at_60[0x18]; + u8 metadata_reg_c_7[0x1]; + u8 metadata_reg_c_6[0x1]; + u8 metadata_reg_c_5[0x1]; + u8 metadata_reg_c_4[0x1]; + u8 metadata_reg_c_3[0x1]; + u8 metadata_reg_c_2[0x1]; + u8 metadata_reg_c_1[0x1]; + u8 metadata_reg_c_0[0x1]; }; struct mlx5_ifc_flow_table_prop_layout_bits { @@ -401,7 +410,8 @@ struct mlx5_ifc_flow_table_prop_layout_bits { u8 reformat_l3_tunnel_to_l2[0x1]; u8 reformat_l2_to_l3_tunnel[0x1]; u8 reformat_and_modify_action[0x1]; - u8 reserved_at_15[0x2]; + u8 ignore_flow_level[0x1]; + u8 reserved_at_16[0x1]; u8 table_miss_action_domain[0x1]; u8 termination_table[0x1]; u8 reserved_at_19[0x7]; @@ -722,7 +732,9 @@ enum { struct mlx5_ifc_flow_table_eswitch_cap_bits { u8 fdb_to_vport_reg_c_id[0x8]; - u8 reserved_at_8[0xf]; + u8 reserved_at_8[0xd]; + u8 fdb_modify_header_fwd_to_table[0x1]; + u8 reserved_at_16[0x1]; u8 flow_source[0x1]; u8 reserved_at_18[0x2]; u8 multi_fdb_encap[0x1]; @@ -4141,7 +4153,8 @@ struct mlx5_ifc_set_fte_in_bits { u8 reserved_at_a0[0x8]; u8 table_id[0x18]; - u8 reserved_at_c0[0x18]; + u8 ignore_flow_level[0x1]; + u8 reserved_at_c1[0x17]; u8 modify_enable_mask[0x8]; u8 reserved_at_e0[0x20]; @@ -5627,6 +5640,7 @@ struct mlx5_ifc_copy_action_in_bits { union mlx5_ifc_set_action_in_add_action_in_auto_bits { struct mlx5_ifc_set_action_in_bits set_action_in; struct mlx5_ifc_add_action_in_bits add_action_in; + struct mlx5_ifc_copy_action_in_bits copy_action_in; u8 reserved_at_0[0x40]; }; @@ -5669,6 +5683,8 @@ enum { MLX5_ACTION_IN_FIELD_METADATA_REG_C_3 = 0x54, MLX5_ACTION_IN_FIELD_METADATA_REG_C_4 = 0x55, MLX5_ACTION_IN_FIELD_METADATA_REG_C_5 = 0x56, + MLX5_ACTION_IN_FIELD_METADATA_REG_C_6 = 0x57, + MLX5_ACTION_IN_FIELD_METADATA_REG_C_7 = 0x58, MLX5_ACTION_IN_FIELD_OUT_TCP_SEQ_NUM = 0x59, MLX5_ACTION_IN_FIELD_OUT_TCP_ACK_NUM = 0x5B, }; -- cgit v1.2.3-71-gd317 From a58837f52d432f32995b1c00e803cc4db18762d3 Mon Sep 17 00:00:00 2001 From: Aya Levin Date: Mon, 30 Dec 2019 14:22:57 +0200 Subject: net/mlx5e: Expose FEC fields and related capability bit Introduce 50G per lane FEC modes capability bit and newly supported fields in PPLM register which allow this configuration.
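An illustrative gate (a sketch, not part of the patch): consumers of the new PPLM fields would first check the capability bit introduced here, assuming the driver's existing MLX5_CAP_PCAM_FEATURE() helper.

static bool mlx5e_fec_50g_per_lane_supported(struct mlx5_core_dev *dev)
{
	return MLX5_CAP_PCAM_FEATURE(dev, fec_50G_per_lane_in_pplm);
}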
Signed-off-by: Aya Levin Reviewed-by: Eran Ben Elisha Signed-off-by: Saeed Mahameed --- include/linux/mlx5/mlx5_ifc.h | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index e9c165ffe3f9..2ab4562b4851 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -8581,6 +8581,18 @@ struct mlx5_ifc_pplm_reg_bits { u8 fec_override_admin_50g[0x4]; u8 fec_override_admin_25g[0x4]; u8 fec_override_admin_10g_40g[0x4]; + + u8 fec_override_cap_400g_8x[0x10]; + u8 fec_override_cap_200g_4x[0x10]; + + u8 fec_override_cap_100g_2x[0x10]; + u8 fec_override_cap_50g_1x[0x10]; + + u8 fec_override_admin_400g_8x[0x10]; + u8 fec_override_admin_200g_4x[0x10]; + + u8 fec_override_admin_100g_2x[0x10]; + u8 fec_override_admin_50g_1x[0x10]; }; struct mlx5_ifc_ppcnt_reg_bits { @@ -8907,7 +8919,9 @@ struct mlx5_ifc_mpegc_reg_bits { }; struct mlx5_ifc_pcam_enhanced_features_bits { - u8 reserved_at_0[0x6d]; + u8 reserved_at_0[0x68]; + u8 fec_50G_per_lane_in_pplm[0x1]; + u8 reserved_at_69[0x4]; u8 rx_icrc_encapsulated_counter[0x1]; u8 reserved_at_6e[0x4]; u8 ptys_extended_ethernet[0x1]; -- cgit v1.2.3-71-gd317 From 827a8cb2dd2b72848652b2a425bba3262808ff44 Mon Sep 17 00:00:00 2001 From: Aharon Landau Date: Mon, 16 Dec 2019 12:50:13 +0200 Subject: net/mlx5e: Add discard counters per priority Add counters that count (per priority) the number of received packets that were dropped due to lack of buffers on a physical port. If this counter is increasing, it implies that the adapter is congested and cannot absorb the traffic coming from the network. Signed-off-by: Aharon Landau Reviewed-by: Tariq Toukan Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en_stats.c | 1 + include/linux/mlx5/mlx5_ifc.h | 4 +++- 2 files changed, 4 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c index 9f09253f9f46..33081b7edbf3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c @@ -1130,6 +1130,7 @@ static void mlx5e_grp_per_port_buffer_congest_update_stats(struct mlx5e_priv *pr static const struct counter_desc pport_per_prio_traffic_stats_desc[] = { { "rx_prio%d_bytes", PPORT_PER_PRIO_OFF(rx_octets) }, { "rx_prio%d_packets", PPORT_PER_PRIO_OFF(rx_frames) }, + { "rx_prio%d_discards", PPORT_PER_PRIO_OFF(rx_discards) }, { "tx_prio%d_bytes", PPORT_PER_PRIO_OFF(tx_octets) }, { "tx_prio%d_packets", PPORT_PER_PRIO_OFF(tx_frames) }, }; diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index 2ab4562b4851..ee0a34d66c7c 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -2180,7 +2180,9 @@ struct mlx5_ifc_eth_per_prio_grp_data_layout_bits { u8 rx_pause_transition_low[0x20]; - u8 reserved_at_3c0[0x40]; + u8 rx_discards_high[0x20]; + + u8 rx_discards_low[0x20]; u8 device_stall_minor_watermark_cnt_high[0x20]; -- cgit v1.2.3-71-gd317 From 61dc7b0141c51f5fa4aed97e49f9cf102ec51479 Mon Sep 17 00:00:00 2001 From: Paul Blakey Date: Thu, 14 Nov 2019 16:59:58 +0200 Subject: net/mlx5: Refactor mlx5_create_auto_grouped_flow_table Refactor mlx5_create_auto_grouped_flow_table() to use an ft_attr param which already carries the max_fte, prio and flags members, and is used the same way in the similar mlx5_create_flow_table() function.
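To make the mechanical conversion easy to follow, here is the calling convention before and after, as a sketch with made-up sizes (level and flags default to 0 through the zeroed attr, matching the old 0 arguments):

/* before:
 *	ft = mlx5_create_auto_grouped_flow_table(ns, prio, 128, 4, 0, 0);
 * after:
 */
struct mlx5_flow_table_attr ft_attr = {};

ft_attr.prio = prio;
ft_attr.max_fte = 128;
ft_attr.autogroup.max_num_groups = 4;
ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr);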
Signed-off-by: Paul Blakey Reviewed-by: Roi Dayan Reviewed-by: Oz Shlomo Reviewed-by: Mark Bloch Signed-off-by: Saeed Mahameed --- drivers/infiniband/hw/mlx5/main.c | 10 ++++++---- .../net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c | 9 ++++++--- drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 13 ++++++++----- drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 7 +++++-- .../ethernet/mellanox/mlx5/core/eswitch_offloads.c | 13 +++++++------ .../mellanox/mlx5/core/eswitch_offloads_termtbl.c | 11 ++++++----- drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 21 ++++++--------------- include/linux/mlx5/fs.h | 16 ++++++++-------- 8 files changed, 52 insertions(+), 48 deletions(-) (limited to 'include') diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c index 51100350b688..f3e6abfa4872 100644 --- a/drivers/infiniband/hw/mlx5/main.c +++ b/drivers/infiniband/hw/mlx5/main.c @@ -3239,12 +3239,14 @@ static struct mlx5_ib_flow_prio *_get_prio(struct mlx5_flow_namespace *ns, int num_entries, int num_groups, u32 flags) { + struct mlx5_flow_table_attr ft_attr = {}; struct mlx5_flow_table *ft; - ft = mlx5_create_auto_grouped_flow_table(ns, priority, - num_entries, - num_groups, - 0, flags); + ft_attr.prio = priority; + ft_attr.max_fte = num_entries; + ft_attr.flags = flags; + ft_attr.autogroup.max_num_groups = num_groups; + ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr); if (IS_ERR(ft)) return ERR_CAST(ft); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c index acd946f2ddbe..3bc2ac3d53fc 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c @@ -58,6 +58,7 @@ static struct mlx5e_ethtool_table *get_flow_table(struct mlx5e_priv *priv, struct ethtool_rx_flow_spec *fs, int num_tuples) { + struct mlx5_flow_table_attr ft_attr = {}; struct mlx5e_ethtool_table *eth_ft; struct mlx5_flow_namespace *ns; struct mlx5_flow_table *ft; @@ -102,9 +103,11 @@ static struct mlx5e_ethtool_table *get_flow_table(struct mlx5e_priv *priv, table_size = min_t(u32, BIT(MLX5_CAP_FLOWTABLE(priv->mdev, flow_table_properties_nic_receive.log_max_ft_size)), MLX5E_ETHTOOL_NUM_ENTRIES); - ft = mlx5_create_auto_grouped_flow_table(ns, prio, - table_size, - MLX5E_ETHTOOL_NUM_GROUPS, 0, 0); + + ft_attr.prio = prio; + ft_attr.max_fte = table_size; + ft_attr.autogroup.max_num_groups = MLX5E_ETHTOOL_NUM_GROUPS; + ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr); if (IS_ERR(ft)) return (void *)ft; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index 9b32a9c0f497..80d91e8abcfa 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -960,7 +960,8 @@ mlx5e_tc_add_nic_flow(struct mlx5e_priv *priv, mutex_lock(&priv->fs.tc.t_lock); if (IS_ERR_OR_NULL(priv->fs.tc.t)) { - int tc_grp_size, tc_tbl_size; + struct mlx5_flow_table_attr ft_attr = {}; + int tc_grp_size, tc_tbl_size, tc_num_grps; u32 max_flow_counter; max_flow_counter = (MLX5_CAP_GEN(dev, max_flow_counter_31_16) << 16) | @@ -970,13 +971,15 @@ mlx5e_tc_add_nic_flow(struct mlx5e_priv *priv, tc_tbl_size = min_t(int, tc_grp_size * MLX5E_TC_TABLE_NUM_GROUPS, BIT(MLX5_CAP_FLOWTABLE_NIC_RX(dev, log_max_ft_size))); + tc_num_grps = MLX5E_TC_TABLE_NUM_GROUPS; + ft_attr.prio = MLX5E_TC_PRIO; + ft_attr.max_fte = tc_tbl_size; + ft_attr.level = MLX5E_TC_FT_LEVEL; + 
ft_attr.autogroup.max_num_groups = tc_num_grps; priv->fs.tc.t = mlx5_create_auto_grouped_flow_table(priv->fs.ns, - MLX5E_TC_PRIO, - tc_tbl_size, - MLX5E_TC_TABLE_NUM_GROUPS, - MLX5E_TC_FT_LEVEL, 0); + &ft_attr); if (IS_ERR(priv->fs.tc.t)) { mutex_unlock(&priv->fs.tc.t_lock); NL_SET_ERR_MSG_MOD(extack, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c index 2c965ad0d744..05b13a1e829c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c @@ -277,6 +277,7 @@ enum { static int esw_create_legacy_vepa_table(struct mlx5_eswitch *esw) { + struct mlx5_flow_table_attr ft_attr = {}; struct mlx5_core_dev *dev = esw->dev; struct mlx5_flow_namespace *root_ns; struct mlx5_flow_table *fdb; @@ -289,8 +290,10 @@ static int esw_create_legacy_vepa_table(struct mlx5_eswitch *esw) } /* num FTE 2, num FG 2 */ - fdb = mlx5_create_auto_grouped_flow_table(root_ns, LEGACY_VEPA_PRIO, - 2, 2, 0, 0); + ft_attr.prio = LEGACY_VEPA_PRIO; + ft_attr.max_fte = 2; + ft_attr.autogroup.max_num_groups = 2; + fdb = mlx5_create_auto_grouped_flow_table(root_ns, &ft_attr); if (IS_ERR(fdb)) { err = PTR_ERR(fdb); esw_warn(dev, "Failed to create VEPA FDB err %d\n", err); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c index 243a5440867e..4b0d992263b1 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c @@ -904,6 +904,7 @@ create_next_size_table(struct mlx5_eswitch *esw, int level, u32 flags) { + struct mlx5_flow_table_attr ft_attr = {}; struct mlx5_flow_table *fdb; int sz; @@ -911,12 +912,12 @@ create_next_size_table(struct mlx5_eswitch *esw, if (!sz) return ERR_PTR(-ENOSPC); - fdb = mlx5_create_auto_grouped_flow_table(ns, - table_prio, - sz, - ESW_OFFLOADS_NUM_GROUPS, - level, - flags); + ft_attr.max_fte = sz; + ft_attr.prio = table_prio; + ft_attr.level = level; + ft_attr.flags = flags; + ft_attr.autogroup.max_num_groups = ESW_OFFLOADS_NUM_GROUPS; + fdb = mlx5_create_auto_grouped_flow_table(ns, &ft_attr); if (IS_ERR(fdb)) { esw_warn(esw->dev, "Failed to create FDB Table err %d (table prio: %d, level: %d, size: %d)\n", (int)PTR_ERR(fdb), table_prio, level, sz); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c index 366bda1bb1c3..dc08ed9339ab 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c @@ -50,8 +50,8 @@ mlx5_eswitch_termtbl_create(struct mlx5_core_dev *dev, struct mlx5_flow_act *flow_act) { static const struct mlx5_flow_spec spec = {}; + struct mlx5_flow_table_attr ft_attr = {}; struct mlx5_flow_namespace *root_ns; - int prio, flags; int err; root_ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_FDB); @@ -63,10 +63,11 @@ mlx5_eswitch_termtbl_create(struct mlx5_core_dev *dev, /* As this is the terminating action then the termination table is the * same prio as the slow path */ - prio = FDB_SLOW_PATH; - flags = MLX5_FLOW_TABLE_TERMINATION; - tt->termtbl = mlx5_create_auto_grouped_flow_table(root_ns, prio, 1, 1, - 0, flags); + ft_attr.flags = MLX5_FLOW_TABLE_TERMINATION; + ft_attr.prio = FDB_SLOW_PATH; + ft_attr.max_fte = 1; + ft_attr.autogroup.max_num_groups = 1; + tt->termtbl = mlx5_create_auto_grouped_flow_table(root_ns, &ft_attr); 
if (IS_ERR(tt->termtbl)) { esw_warn(dev, "Failed to create termination table\n"); return -EOPNOTSUPP; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c index d60577484567..04ebdd6cf670 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c @@ -1111,31 +1111,22 @@ EXPORT_SYMBOL(mlx5_create_lag_demux_flow_table); struct mlx5_flow_table* mlx5_create_auto_grouped_flow_table(struct mlx5_flow_namespace *ns, - int prio, - int num_flow_table_entries, - int max_num_groups, - u32 level, - u32 flags) + struct mlx5_flow_table_attr *ft_attr) { - struct mlx5_flow_table_attr ft_attr = {}; struct mlx5_flow_table *ft; - if (max_num_groups > num_flow_table_entries) + if (ft_attr->autogroup.max_num_groups > ft_attr->max_fte) return ERR_PTR(-EINVAL); - ft_attr.max_fte = num_flow_table_entries; - ft_attr.prio = prio; - ft_attr.level = level; - ft_attr.flags = flags; - - ft = mlx5_create_flow_table(ns, &ft_attr); + ft = mlx5_create_flow_table(ns, ft_attr); if (IS_ERR(ft)) return ft; ft->autogroup.active = true; - ft->autogroup.required_groups = max_num_groups; + ft->autogroup.required_groups = ft_attr->autogroup.max_num_groups; /* We save place for flow groups in addition to max types */ - ft->autogroup.group_size = ft->max_fte / (max_num_groups + 1); + ft->autogroup.group_size = ft->max_fte / + (ft->autogroup.required_groups + 1); return ft; } diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h index 4e5b84e66822..a3f8b63839de 100644 --- a/include/linux/mlx5/fs.h +++ b/include/linux/mlx5/fs.h @@ -145,25 +145,25 @@ mlx5_get_flow_vport_acl_namespace(struct mlx5_core_dev *dev, enum mlx5_flow_namespace_type type, int vport); -struct mlx5_flow_table * -mlx5_create_auto_grouped_flow_table(struct mlx5_flow_namespace *ns, - int prio, - int num_flow_table_entries, - int max_num_groups, - u32 level, - u32 flags); - struct mlx5_flow_table_attr { int prio; int max_fte; u32 level; u32 flags; + + struct { + int max_num_groups; + } autogroup; }; struct mlx5_flow_table * mlx5_create_flow_table(struct mlx5_flow_namespace *ns, struct mlx5_flow_table_attr *ft_attr); +struct mlx5_flow_table * +mlx5_create_auto_grouped_flow_table(struct mlx5_flow_namespace *ns, + struct mlx5_flow_table_attr *ft_attr); + struct mlx5_flow_table * mlx5_create_vport_flow_table(struct mlx5_flow_namespace *ns, int prio, -- cgit v1.2.3-71-gd317 From 5281a0c909194c477656e89401ac11dd7b29ad2d Mon Sep 17 00:00:00 2001 From: Paul Blakey Date: Tue, 23 Jul 2019 11:43:57 +0300 Subject: net/mlx5: fs_core: Introduce unmanaged flow tables Currently, most of the steering tree is statically declared ahead of time, with steering prio instances allocated for each fdb chain to assign a max number of levels for each of them. This allows fs_core to manage the connections and levels of the flow table hierarchy to prevent loops, but restricts the number of supported chains and priorities. Introduce unmanaged flow tables, allowing the user to manage the flow table connections. An unmanaged table is detached from the fs_core flow table hierarchy, and is only connected back to the hierarchy by explicit FTE forward actions. This will be used together with firmware that supports ignoring the flow table levels to increase the number of supported chains and prios.
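An illustrative fragment of the new usage (a sketch; the level, size and helper name are made up): the caller places an unmanaged table itself and chains it explicitly instead of letting fs_core connect it.

static struct mlx5_flow_table *
create_unmanaged_ft(struct mlx5_flow_namespace *ns,
		    struct mlx5_flow_table *next_ft)
{
	struct mlx5_flow_table_attr ft_attr = {};

	ft_attr.flags = MLX5_FLOW_TABLE_UNMANAGED;
	ft_attr.prio = 0;		/* fs_prio the table is rooted in */
	ft_attr.level = 1;		/* caller-chosen; not range-checked */
	ft_attr.max_fte = 16;
	ft_attr.next_ft = next_ft;	/* explicit miss destination */

	return mlx5_create_flow_table(ns, &ft_attr);
}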
Signed-off-by: Paul Blakey Reviewed-by: Roi Dayan Reviewed-by: Oz Shlomo Reviewed-by: Mark Bloch Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 41 ++++++++++++++++------- include/linux/mlx5/fs.h | 2 ++ 2 files changed, 31 insertions(+), 12 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c index 51913e2cde5c..456d3739b166 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c @@ -1006,7 +1006,8 @@ static struct mlx5_flow_table *__mlx5_create_flow_table(struct mlx5_flow_namespa u16 vport) { struct mlx5_flow_root_namespace *root = find_root(&ns->node); - struct mlx5_flow_table *next_ft = NULL; + bool unmanaged = ft_attr->flags & MLX5_FLOW_TABLE_UNMANAGED; + struct mlx5_flow_table *next_ft; struct fs_prio *fs_prio = NULL; struct mlx5_flow_table *ft; int log_table_sz; @@ -1023,14 +1024,21 @@ static struct mlx5_flow_table *__mlx5_create_flow_table(struct mlx5_flow_namespa err = -EINVAL; goto unlock_root; } - if (ft_attr->level >= fs_prio->num_levels) { - err = -ENOSPC; - goto unlock_root; + if (!unmanaged) { + /* The level is related to the + * priority level range. + */ + if (ft_attr->level >= fs_prio->num_levels) { + err = -ENOSPC; + goto unlock_root; + } + + ft_attr->level += fs_prio->start_level; } + /* The level is related to the * priority level range. */ - ft_attr->level += fs_prio->start_level; ft = alloc_flow_table(ft_attr->level, vport, ft_attr->max_fte ? roundup_pow_of_two(ft_attr->max_fte) : 0, @@ -1043,19 +1051,27 @@ static struct mlx5_flow_table *__mlx5_create_flow_table(struct mlx5_flow_namespa tree_init_node(&ft->node, del_hw_flow_table, del_sw_flow_table); log_table_sz = ft->max_fte ? ilog2(ft->max_fte) : 0; - next_ft = find_next_chained_ft(fs_prio); + next_ft = unmanaged ? 
ft_attr->next_ft : + find_next_chained_ft(fs_prio); ft->def_miss_action = ns->def_miss_action; err = root->cmds->create_flow_table(root, ft, log_table_sz, next_ft); if (err) goto free_ft; - err = connect_flow_table(root->dev, ft, fs_prio); - if (err) - goto destroy_ft; + if (!unmanaged) { + err = connect_flow_table(root->dev, ft, fs_prio); + if (err) + goto destroy_ft; + } + ft->node.active = true; down_write_ref_node(&fs_prio->node, false); - tree_add_node(&ft->node, &fs_prio->node); - list_add_flow_table(ft, fs_prio); + if (!unmanaged) { + tree_add_node(&ft->node, &fs_prio->node); + list_add_flow_table(ft, fs_prio); + } else { + ft->node.root = fs_prio->node.root; + } fs_prio->num_ft++; up_write_ref_node(&fs_prio->node, false); mutex_unlock(&root->chain_lock); @@ -2024,7 +2040,8 @@ int mlx5_destroy_flow_table(struct mlx5_flow_table *ft) int err = 0; mutex_lock(&root->chain_lock); - err = disconnect_flow_table(ft); + if (!(ft->flags & MLX5_FLOW_TABLE_UNMANAGED)) + err = disconnect_flow_table(ft); if (err) { mutex_unlock(&root->chain_lock); return err; diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h index a3f8b63839de..de2c838bae90 100644 --- a/include/linux/mlx5/fs.h +++ b/include/linux/mlx5/fs.h @@ -48,6 +48,7 @@ enum { MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT = BIT(0), MLX5_FLOW_TABLE_TUNNEL_EN_DECAP = BIT(1), MLX5_FLOW_TABLE_TERMINATION = BIT(2), + MLX5_FLOW_TABLE_UNMANAGED = BIT(3), }; #define LEFTOVERS_RULE_NUM 2 @@ -150,6 +151,7 @@ struct mlx5_flow_table_attr { int max_fte; u32 level; u32 flags; + struct mlx5_flow_table *next_ft; struct { int max_num_groups; -- cgit v1.2.3-71-gd317 From ff189b43568216c6211e9e7ddd9026cb8295e744 Mon Sep 17 00:00:00 2001 From: Paul Blakey Date: Sun, 5 Jan 2020 15:15:54 +0200 Subject: net/mlx5: Add ignore level support fwd to table rules If the user sets the ignore flow level flag on a rule, that rule can point to a flow table of any level, including those with levels equal to or less than the level of the flow table it is added on. This, together with unmanaged tables, will be used to create an FDB chain/prio hierarchy much larger than the currently supported level range.
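An illustrative fragment (a sketch; the spec, destination table and error handling are elided): the new flag is set on the flow act when the rule's destination may sit at an equal or lower level, which dest_is_valid() only permits for FDB tables.

flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
flow_act.flags |= FLOW_ACT_IGNORE_FLOW_LEVEL;
dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
dest.ft = lower_level_ft;	/* equal or lower level is now allowed */
rule = mlx5_add_flow_rules(ft, spec, &flow_act, &dest, 1);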
Signed-off-by: Paul Blakey Reviewed-by: Mark Bloch Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c | 3 +++ drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 18 +++++++++++++++--- include/linux/mlx5/fs.h | 1 + 3 files changed, 19 insertions(+), 3 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c index 3c816e81f8d9..b25465d9e030 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c @@ -432,6 +432,9 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev, MLX5_SET(set_fte_in, in, table_type, ft->type); MLX5_SET(set_fte_in, in, table_id, ft->id); MLX5_SET(set_fte_in, in, flow_index, fte->index); + MLX5_SET(set_fte_in, in, ignore_flow_level, + !!(fte->action.flags & FLOW_ACT_IGNORE_FLOW_LEVEL)); + if (ft->vport) { MLX5_SET(set_fte_in, in, vport_number, ft->vport); MLX5_SET(set_fte_in, in, other_vport, 1); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c index 456d3739b166..355a424f4506 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c @@ -1536,18 +1536,30 @@ static bool counter_is_valid(u32 action) } static bool dest_is_valid(struct mlx5_flow_destination *dest, - u32 action, + struct mlx5_flow_act *flow_act, struct mlx5_flow_table *ft) { + bool ignore_level = flow_act->flags & FLOW_ACT_IGNORE_FLOW_LEVEL; + u32 action = flow_act->action; + if (dest && (dest->type == MLX5_FLOW_DESTINATION_TYPE_COUNTER)) return counter_is_valid(action); if (!(action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST)) return true; + if (ignore_level) { + if (ft->type != FS_FT_FDB) + return false; + + if (dest->type == MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE && + dest->ft->type != FS_FT_FDB) + return false; + } + if (!dest || ((dest->type == MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE) && - (dest->ft->level <= ft->level))) + (dest->ft->level <= ft->level && !ignore_level))) return false; return true; } @@ -1777,7 +1789,7 @@ _mlx5_add_flow_rules(struct mlx5_flow_table *ft, return ERR_PTR(-EINVAL); for (i = 0; i < dest_num; i++) { - if (!dest_is_valid(&dest[i], flow_act->action, ft)) + if (!dest_is_valid(&dest[i], flow_act, ft)) return ERR_PTR(-EINVAL); } nested_down_read_ref_node(&ft->node, FS_LOCK_GRANDPARENT); diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h index de2c838bae90..81f393fb7d96 100644 --- a/include/linux/mlx5/fs.h +++ b/include/linux/mlx5/fs.h @@ -196,6 +196,7 @@ struct mlx5_fs_vlan { enum { FLOW_ACT_NO_APPEND = BIT(0), + FLOW_ACT_IGNORE_FLOW_LEVEL = BIT(1), }; struct mlx5_flow_act { -- cgit v1.2.3-71-gd317 From 79cdb0aaea8b5478db34afa1d4d5ecc808689a67 Mon Sep 17 00:00:00 2001 From: Paul Blakey Date: Thu, 14 Nov 2019 17:02:59 +0200 Subject: net/mlx5: Allow creating autogroups with reserved entries Exclude the last n entries for an autogrouped flow table. Reserving entries at the end of the FT will ensure that this FG will be the last to be evaluated. This will be used in the next patch to create a miss group enabling custom actions on FT miss. 
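A sketch of the intended usage with made-up sizes: reserving the tail of an autogrouped table keeps those entries out of the auto-allocator so that a later explicit group can claim them.

ft_attr.max_fte = 64;
ft_attr.autogroup.max_num_groups = 4;
ft_attr.autogroup.num_reserved_entries = 2;
ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr);
/* entries 0..61 are auto-grouped; 62..63 stay free for an explicit
 * mlx5_create_flow_group() with start_index >= 62
 */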
Signed-off-by: Paul Blakey Reviewed-by: Roi Dayan Reviewed-by: Oz Shlomo Reviewed-by: Mark Bloch Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 26 +++++++++++++++-------- drivers/net/ethernet/mellanox/mlx5/core/fs_core.h | 1 + include/linux/mlx5/fs.h | 1 + 3 files changed, 19 insertions(+), 9 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c index 355a424f4506..c7a16ae05fa8 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c @@ -579,7 +579,9 @@ static void del_sw_flow_group(struct fs_node *node) rhashtable_destroy(&fg->ftes_hash); ida_destroy(&fg->fte_allocator); - if (ft->autogroup.active && fg->max_ftes == ft->autogroup.group_size) + if (ft->autogroup.active && + fg->max_ftes == ft->autogroup.group_size && + fg->start_index < ft->autogroup.max_fte) ft->autogroup.num_groups--; err = rhltable_remove(&ft->fgs_hash, &fg->hash, @@ -1121,9 +1123,14 @@ struct mlx5_flow_table* mlx5_create_auto_grouped_flow_table(struct mlx5_flow_namespace *ns, struct mlx5_flow_table_attr *ft_attr) { + int num_reserved_entries = ft_attr->autogroup.num_reserved_entries; + int autogroups_max_fte = ft_attr->max_fte - num_reserved_entries; + int max_num_groups = ft_attr->autogroup.max_num_groups; struct mlx5_flow_table *ft; - if (ft_attr->autogroup.max_num_groups > ft_attr->max_fte) + if (max_num_groups > autogroups_max_fte) + return ERR_PTR(-EINVAL); + if (num_reserved_entries > ft_attr->max_fte) return ERR_PTR(-EINVAL); ft = mlx5_create_flow_table(ns, ft_attr); @@ -1131,10 +1138,10 @@ mlx5_create_auto_grouped_flow_table(struct mlx5_flow_namespace *ns, return ft; ft->autogroup.active = true; - ft->autogroup.required_groups = ft_attr->autogroup.max_num_groups; + ft->autogroup.required_groups = max_num_groups; + ft->autogroup.max_fte = autogroups_max_fte; /* We save place for flow groups in addition to max types */ - ft->autogroup.group_size = ft->max_fte / - (ft->autogroup.required_groups + 1); + ft->autogroup.group_size = autogroups_max_fte / (max_num_groups + 1); return ft; } @@ -1156,7 +1163,7 @@ struct mlx5_flow_group *mlx5_create_flow_group(struct mlx5_flow_table *ft, struct mlx5_flow_group *fg; int err; - if (ft->autogroup.active) + if (ft->autogroup.active && start_index < ft->autogroup.max_fte) return ERR_PTR(-EPERM); down_write_ref_node(&ft->node, false); @@ -1329,9 +1336,10 @@ static struct mlx5_flow_group *alloc_auto_flow_group(struct mlx5_flow_table *ft const struct mlx5_flow_spec *spec) { struct list_head *prev = &ft->node.children; - struct mlx5_flow_group *fg; + u32 max_fte = ft->autogroup.max_fte; unsigned int candidate_index = 0; unsigned int group_size = 0; + struct mlx5_flow_group *fg; if (!ft->autogroup.active) return ERR_PTR(-ENOENT); @@ -1339,7 +1347,7 @@ static struct mlx5_flow_group *alloc_auto_flow_group(struct mlx5_flow_table *ft if (ft->autogroup.num_groups < ft->autogroup.required_groups) group_size = ft->autogroup.group_size; - /* ft->max_fte == ft->autogroup.max_types */ + /* max_fte == ft->autogroup.max_types */ if (group_size == 0) group_size = 1; @@ -1352,7 +1360,7 @@ static struct mlx5_flow_group *alloc_auto_flow_group(struct mlx5_flow_table *ft prev = &fg->node.list; } - if (candidate_index + group_size > ft->max_fte) + if (candidate_index + group_size > max_fte) return ERR_PTR(-ENOSPC); fg = alloc_insert_flow_group(ft, diff --git 
a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h index c2621b911563..be5f5e32c1e8 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h @@ -164,6 +164,7 @@ struct mlx5_flow_table { unsigned int required_groups; unsigned int group_size; unsigned int num_groups; + unsigned int max_fte; } autogroup; /* Protect fwd_rules */ struct mutex lock; diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h index 81f393fb7d96..4cae16016b2b 100644 --- a/include/linux/mlx5/fs.h +++ b/include/linux/mlx5/fs.h @@ -155,6 +155,7 @@ struct mlx5_flow_table_attr { struct { int max_num_groups; + int num_reserved_entries; } autogroup; }; -- cgit v1.2.3-71-gd317 From 75ccae62cb8d42a619323a85c577107b8b37d797 Mon Sep 17 00:00:00 2001 From: Toke Høiland-Jørgensen Date: Thu, 16 Jan 2020 16:14:44 +0100 Subject: xdp: Move devmap bulk queue into struct net_device MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Commit 96360004b862 ("xdp: Make devmap flush_list common for all map instances") changed devmap flushing to be a global operation instead of a per-map operation. However, the queue structure used for bulking was still allocated as part of the containing map. This patch moves the devmap bulk queue into struct net_device. The motivation for this is reusing it for the non-map variant of XDP_REDIRECT, which will be changed in a subsequent commit. To avoid other fields of struct net_device moving to different cache lines, we also move a couple of other members around. We defer the actual allocation of the bulk queue structure until the NETDEV_REGISTER notification in devmap.c. This makes it possible to check for ndo_xdp_xmit support before allocating the structure, which is not possible at the time struct net_device is allocated. However, we keep the freeing in free_netdev() to avoid adding another RCU callback on NETDEV_UNREGISTER. Because of this change, we lose the reference back to the map that originated the redirect, so change the tracepoint to always return 0 as the map ID and index. Otherwise no functional change is intended with this patch.
After this patch, the relevant part of struct net_device looks like this, according to pahole: /* --- cacheline 14 boundary (896 bytes) --- */ struct netdev_queue * _tx __attribute__((__aligned__(64))); /* 896 8 */ unsigned int num_tx_queues; /* 904 4 */ unsigned int real_num_tx_queues; /* 908 4 */ struct Qdisc * qdisc; /* 912 8 */ unsigned int tx_queue_len; /* 920 4 */ spinlock_t tx_global_lock; /* 924 4 */ struct xdp_dev_bulk_queue * xdp_bulkq; /* 928 8 */ struct xps_dev_maps * xps_cpus_map; /* 936 8 */ struct xps_dev_maps * xps_rxqs_map; /* 944 8 */ struct mini_Qdisc * miniq_egress; /* 952 8 */ /* --- cacheline 15 boundary (960 bytes) --- */ struct hlist_head qdisc_hash[16]; /* 960 128 */ /* --- cacheline 17 boundary (1088 bytes) --- */ struct timer_list watchdog_timer; /* 1088 40 */ /* XXX last struct has 4 bytes of padding */ int watchdog_timeo; /* 1128 4 */ /* XXX 4 bytes hole, try to pack */ struct list_head todo_list; /* 1136 16 */ /* --- cacheline 18 boundary (1152 bytes) --- */ Signed-off-by: Toke Høiland-Jørgensen Signed-off-by: Alexei Starovoitov Acked-by: Björn Töpel Acked-by: John Fastabend Link: https://lore.kernel.org/bpf/157918768397.1458396.12673224324627072349.stgit@toke.dk --- include/linux/netdevice.h | 13 ++++++---- include/trace/events/xdp.h | 2 +- kernel/bpf/devmap.c | 63 ++++++++++++++++++++-------------------------- net/core/dev.c | 2 ++ 4 files changed, 38 insertions(+), 42 deletions(-) (limited to 'include') diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 2741aa35bec6..5ec3537fbdb1 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -876,6 +876,7 @@ enum bpf_netdev_command { struct bpf_prog_offload_ops; struct netlink_ext_ack; struct xdp_umem; +struct xdp_dev_bulk_queue; struct netdev_bpf { enum bpf_netdev_command command; @@ -1986,12 +1987,10 @@ struct net_device { unsigned int num_tx_queues; unsigned int real_num_tx_queues; struct Qdisc *qdisc; -#ifdef CONFIG_NET_SCHED - DECLARE_HASHTABLE (qdisc_hash, 4); -#endif unsigned int tx_queue_len; spinlock_t tx_global_lock; - int watchdog_timeo; + + struct xdp_dev_bulk_queue __percpu *xdp_bulkq; #ifdef CONFIG_XPS struct xps_dev_maps __rcu *xps_cpus_map; @@ -2001,11 +2000,15 @@ struct net_device { struct mini_Qdisc __rcu *miniq_egress; #endif +#ifdef CONFIG_NET_SCHED + DECLARE_HASHTABLE (qdisc_hash, 4); +#endif /* These may be needed for future network-power-down code. */ struct timer_list watchdog_timer; + int watchdog_timeo; - int __percpu *pcpu_refcnt; struct list_head todo_list; + int __percpu *pcpu_refcnt; struct list_head link_watch_list; diff --git a/include/trace/events/xdp.h b/include/trace/events/xdp.h index a7378bcd9928..72bad13d4a3c 100644 --- a/include/trace/events/xdp.h +++ b/include/trace/events/xdp.h @@ -278,7 +278,7 @@ TRACE_EVENT(xdp_devmap_xmit, ), TP_fast_assign( - __entry->map_id = map->id; + __entry->map_id = map ? 
map->id : 0; __entry->act = XDP_REDIRECT; __entry->map_index = map_index; __entry->drops = drops; diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c index da9c832fc5c8..030d125c3839 100644 --- a/kernel/bpf/devmap.c +++ b/kernel/bpf/devmap.c @@ -53,13 +53,11 @@ (BPF_F_NUMA_NODE | BPF_F_RDONLY | BPF_F_WRONLY) #define DEV_MAP_BULK_SIZE 16 -struct bpf_dtab_netdev; - -struct xdp_bulk_queue { +struct xdp_dev_bulk_queue { struct xdp_frame *q[DEV_MAP_BULK_SIZE]; struct list_head flush_node; + struct net_device *dev; struct net_device *dev_rx; - struct bpf_dtab_netdev *obj; unsigned int count; }; @@ -67,9 +65,8 @@ struct bpf_dtab_netdev { struct net_device *dev; /* must be first member, due to tracepoint */ struct hlist_node index_hlist; struct bpf_dtab *dtab; - struct xdp_bulk_queue __percpu *bulkq; struct rcu_head rcu; - unsigned int idx; /* keep track of map index for tracepoint */ + unsigned int idx; }; struct bpf_dtab { @@ -219,7 +216,6 @@ static void dev_map_free(struct bpf_map *map) hlist_for_each_entry_safe(dev, next, head, index_hlist) { hlist_del_rcu(&dev->index_hlist); - free_percpu(dev->bulkq); dev_put(dev->dev); kfree(dev); } @@ -234,7 +230,6 @@ static void dev_map_free(struct bpf_map *map) if (!dev) continue; - free_percpu(dev->bulkq); dev_put(dev->dev); kfree(dev); } @@ -320,10 +315,9 @@ static int dev_map_hash_get_next_key(struct bpf_map *map, void *key, return -ENOENT; } -static int bq_xmit_all(struct xdp_bulk_queue *bq, u32 flags) +static int bq_xmit_all(struct xdp_dev_bulk_queue *bq, u32 flags) { - struct bpf_dtab_netdev *obj = bq->obj; - struct net_device *dev = obj->dev; + struct net_device *dev = bq->dev; int sent = 0, drops = 0, err = 0; int i; @@ -346,8 +340,7 @@ static int bq_xmit_all(struct xdp_bulk_queue *bq, u32 flags) out: bq->count = 0; - trace_xdp_devmap_xmit(&obj->dtab->map, obj->idx, - sent, drops, bq->dev_rx, dev, err); + trace_xdp_devmap_xmit(NULL, 0, sent, drops, bq->dev_rx, dev, err); bq->dev_rx = NULL; __list_del_clearprev(&bq->flush_node); return 0; @@ -374,7 +367,7 @@ error: void __dev_map_flush(void) { struct list_head *flush_list = this_cpu_ptr(&dev_map_flush_list); - struct xdp_bulk_queue *bq, *tmp; + struct xdp_dev_bulk_queue *bq, *tmp; rcu_read_lock(); list_for_each_entry_safe(bq, tmp, flush_list, flush_node) @@ -401,12 +394,12 @@ struct bpf_dtab_netdev *__dev_map_lookup_elem(struct bpf_map *map, u32 key) /* Runs under RCU-read-side, plus in softirq under NAPI protection. * Thus, safe percpu variable access. 
*/ -static int bq_enqueue(struct bpf_dtab_netdev *obj, struct xdp_frame *xdpf, +static int bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf, struct net_device *dev_rx) { struct list_head *flush_list = this_cpu_ptr(&dev_map_flush_list); - struct xdp_bulk_queue *bq = this_cpu_ptr(obj->bulkq); + struct xdp_dev_bulk_queue *bq = this_cpu_ptr(dev->xdp_bulkq); if (unlikely(bq->count == DEV_MAP_BULK_SIZE)) bq_xmit_all(bq, 0); @@ -444,7 +437,7 @@ int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp, if (unlikely(!xdpf)) return -EOVERFLOW; - return bq_enqueue(dst, xdpf, dev_rx); + return bq_enqueue(dev, xdpf, dev_rx); } int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb, @@ -483,7 +476,6 @@ static void __dev_map_entry_free(struct rcu_head *rcu) struct bpf_dtab_netdev *dev; dev = container_of(rcu, struct bpf_dtab_netdev, rcu); - free_percpu(dev->bulkq); dev_put(dev->dev); kfree(dev); } @@ -538,30 +530,15 @@ static struct bpf_dtab_netdev *__dev_map_alloc_node(struct net *net, u32 ifindex, unsigned int idx) { - gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN; struct bpf_dtab_netdev *dev; - struct xdp_bulk_queue *bq; - int cpu; - dev = kmalloc_node(sizeof(*dev), gfp, dtab->map.numa_node); + dev = kmalloc_node(sizeof(*dev), GFP_ATOMIC | __GFP_NOWARN, + dtab->map.numa_node); if (!dev) return ERR_PTR(-ENOMEM); - dev->bulkq = __alloc_percpu_gfp(sizeof(*dev->bulkq), - sizeof(void *), gfp); - if (!dev->bulkq) { - kfree(dev); - return ERR_PTR(-ENOMEM); - } - - for_each_possible_cpu(cpu) { - bq = per_cpu_ptr(dev->bulkq, cpu); - bq->obj = dev; - } - dev->dev = dev_get_by_index(net, ifindex); if (!dev->dev) { - free_percpu(dev->bulkq); kfree(dev); return ERR_PTR(-EINVAL); } @@ -721,9 +698,23 @@ static int dev_map_notification(struct notifier_block *notifier, { struct net_device *netdev = netdev_notifier_info_to_dev(ptr); struct bpf_dtab *dtab; - int i; + int i, cpu; switch (event) { + case NETDEV_REGISTER: + if (!netdev->netdev_ops->ndo_xdp_xmit || netdev->xdp_bulkq) + break; + + /* will be freed in free_netdev() */ + netdev->xdp_bulkq = + __alloc_percpu_gfp(sizeof(struct xdp_dev_bulk_queue), + sizeof(void *), GFP_ATOMIC); + if (!netdev->xdp_bulkq) + return NOTIFY_BAD; + + for_each_possible_cpu(cpu) + per_cpu_ptr(netdev->xdp_bulkq, cpu)->dev = netdev; + break; case NETDEV_UNREGISTER: /* This rcu_read_lock/unlock pair is needed because * dev_map_list is an RCU list AND to ensure a delete diff --git a/net/core/dev.c b/net/core/dev.c index d99f88c58636..e7802a41ae7f 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -9847,6 +9847,8 @@ void free_netdev(struct net_device *dev) free_percpu(dev->pcpu_refcnt); dev->pcpu_refcnt = NULL; + free_percpu(dev->xdp_bulkq); + dev->xdp_bulkq = NULL; netdev_unregister_lockdep_key(dev); -- cgit v1.2.3-71-gd317 From 1d233886dd904edbf239eeffe435c3308ae97625 Mon Sep 17 00:00:00 2001 From: Toke Høiland-Jørgensen Date: Thu, 16 Jan 2020 16:14:45 +0100 Subject: xdp: Use bulking for non-map XDP_REDIRECT and consolidate code paths MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Since the bulk queue used by XDP_REDIRECT now lives in struct net_device, we can re-use the bulking for the non-map version of the bpf_redirect() helper. This is a simple matter of having xdp_do_redirect_slow() queue the frame on the bulk queue instead of sending it out with __bpf_tx_xdp(). 
Unfortunately we can't make the bpf_redirect() helper return an error if the ifindex doesn't exist (as bpf_redirect_map() does), because we don't have a reference to the network namespace of the ingress device at the time the helper is called. So we have to leave it as-is and keep the device lookup in xdp_do_redirect_slow(). This leaves less reason to have the non-map redirect code in a separate function, so we get rid of the xdp_do_redirect_slow() function entirely. This does lose us the tracepoint disambiguation, but fortunately the xdp_redirect and xdp_redirect_map tracepoints use the same tracepoint entry structures. This means both can contain a map index, so we can just amend the tracepoint definitions so we always emit the xdp_redirect(_err) tracepoints, but with the map ID only populated if a map is present. This means we retire the xdp_redirect_map(_err) tracepoints entirely, but keep the definitions around in case someone is still listening for them. With this change, the performance of the xdp_redirect sample program goes from 5Mpps to 8.4Mpps (a 68% increase). Since the flush functions are no longer map-specific, rename the flush() functions to drop _map from their names. One of the renamed functions is the xdp_do_flush_map() callback used in all the xdp-enabled drivers. To keep from having to update all drivers, use a #define to keep the old name working, and only update the virtual drivers in this patch. Signed-off-by: Toke Høiland-Jørgensen Signed-off-by: Alexei Starovoitov Acked-by: John Fastabend Link: https://lore.kernel.org/bpf/157918768505.1458396.17518057312953572912.stgit@toke.dk --- drivers/net/tun.c | 4 +- drivers/net/veth.c | 2 +- drivers/net/virtio_net.c | 2 +- include/linux/bpf.h | 13 +++++- include/linux/filter.h | 10 ++++- include/trace/events/xdp.h | 101 ++++++++++++++++++++------------------- kernel/bpf/devmap.c | 32 +++++++++----- net/core/filter.c | 90 +++++++++------------------------------- 8 files changed, 108 insertions(+), 146 deletions(-) (limited to 'include') diff --git a/drivers/net/tun.c b/drivers/net/tun.c index 683d371e6e82..3a5a6c655dda 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -1718,7 +1718,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun, if (err < 0) goto err_xdp; if (err == XDP_REDIRECT) - xdp_do_flush_map(); + xdp_do_flush(); if (err != XDP_PASS) goto out; @@ -2549,7 +2549,7 @@ static int tun_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len) } if (flush) - xdp_do_flush_map(); + xdp_do_flush(); rcu_read_unlock(); local_bh_enable(); diff --git a/drivers/net/veth.c b/drivers/net/veth.c index a552df37a347..1c89017beebb 100644 --- a/drivers/net/veth.c +++ b/drivers/net/veth.c @@ -769,7 +769,7 @@ static int veth_poll(struct napi_struct *napi, int budget) if (xdp_xmit & VETH_XDP_TX) veth_xdp_flush(rq->dev, &bq); if (xdp_xmit & VETH_XDP_REDIR) - xdp_do_flush_map(); + xdp_do_flush(); xdp_clear_return_frame_no_direct(); return done; diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 4d7d5434cc5d..c458cd313281 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1432,7 +1432,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget) virtqueue_napi_complete(napi, rq->vq, received); if (xdp_xmit & VIRTIO_XDP_REDIR) - xdp_do_flush_map(); + xdp_do_flush(); if (xdp_xmit & VIRTIO_XDP_TX) { sq = virtnet_xdp_sq(vi); diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 3517e32149a4..8e3b8f4ad183 100644 --- a/include/linux/bpf.h +++
b/include/linux/bpf.h @@ -1056,7 +1056,9 @@ struct sk_buff; struct bpf_dtab_netdev *__dev_map_lookup_elem(struct bpf_map *map, u32 key); struct bpf_dtab_netdev *__dev_map_hash_lookup_elem(struct bpf_map *map, u32 key); -void __dev_map_flush(void); +void __dev_flush(void); +int dev_xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp, + struct net_device *dev_rx); int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp, struct net_device *dev_rx); int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb, @@ -1169,13 +1171,20 @@ static inline struct net_device *__dev_map_hash_lookup_elem(struct bpf_map *map return NULL; } -static inline void __dev_map_flush(void) +static inline void __dev_flush(void) { } struct xdp_buff; struct bpf_dtab_netdev; +static inline +int dev_xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp, + struct net_device *dev_rx) +{ + return 0; +} + static inline int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp, struct net_device *dev_rx) diff --git a/include/linux/filter.h b/include/linux/filter.h index a366a0b64a57..f349e2c0884c 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -918,7 +918,7 @@ static inline int xdp_ok_fwd_dev(const struct net_device *fwd, return 0; } -/* The pair of xdp_do_redirect and xdp_do_flush_map MUST be called in the +/* The pair of xdp_do_redirect and xdp_do_flush MUST be called in the * same cpu context. Further for best results no more than a single map * for the do_redirect/do_flush pair should be used. This limitation is * because we only track one map and force a flush when the map changes. @@ -929,7 +929,13 @@ int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb, int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp, struct bpf_prog *prog); -void xdp_do_flush_map(void); +void xdp_do_flush(void); + +/* The xdp_do_flush_map() helper has been renamed to drop the _map suffix, as + * it is no longer only flushing maps. Keep this define for compatibility + * until all drivers are updated - do not use xdp_do_flush_map() in new code! + */ +#define xdp_do_flush_map xdp_do_flush void bpf_warn_invalid_xdp_action(u32 act); diff --git a/include/trace/events/xdp.h b/include/trace/events/xdp.h index 72bad13d4a3c..b680973687b4 100644 --- a/include/trace/events/xdp.h +++ b/include/trace/events/xdp.h @@ -79,14 +79,26 @@ TRACE_EVENT(xdp_bulk_tx, __entry->sent, __entry->drops, __entry->err) ); +#ifndef __DEVMAP_OBJ_TYPE +#define __DEVMAP_OBJ_TYPE +struct _bpf_dtab_netdev { + struct net_device *dev; +}; +#endif /* __DEVMAP_OBJ_TYPE */ + +#define devmap_ifindex(tgt, map) \ + (((map->map_type == BPF_MAP_TYPE_DEVMAP || \ + map->map_type == BPF_MAP_TYPE_DEVMAP_HASH)) ? \ + ((struct _bpf_dtab_netdev *)tgt)->dev->ifindex : 0) + DECLARE_EVENT_CLASS(xdp_redirect_template, TP_PROTO(const struct net_device *dev, const struct bpf_prog *xdp, - int to_ifindex, int err, - const struct bpf_map *map, u32 map_index), + const void *tgt, int err, + const struct bpf_map *map, u32 index), - TP_ARGS(dev, xdp, to_ifindex, err, map, map_index), + TP_ARGS(dev, xdp, tgt, err, map, index), TP_STRUCT__entry( __field(int, prog_id) @@ -103,90 +115,65 @@ DECLARE_EVENT_CLASS(xdp_redirect_template, __entry->act = XDP_REDIRECT; __entry->ifindex = dev->ifindex; __entry->err = err; - __entry->to_ifindex = to_ifindex; + __entry->to_ifindex = map ? devmap_ifindex(tgt, map) : + index; __entry->map_id = map ? map->id : 0; - __entry->map_index = map_index; + __entry->map_index = map ? 
index : 0; ), - TP_printk("prog_id=%d action=%s ifindex=%d to_ifindex=%d err=%d", + TP_printk("prog_id=%d action=%s ifindex=%d to_ifindex=%d err=%d" + " map_id=%d map_index=%d", __entry->prog_id, __print_symbolic(__entry->act, __XDP_ACT_SYM_TAB), __entry->ifindex, __entry->to_ifindex, - __entry->err) + __entry->err, __entry->map_id, __entry->map_index) ); DEFINE_EVENT(xdp_redirect_template, xdp_redirect, TP_PROTO(const struct net_device *dev, const struct bpf_prog *xdp, - int to_ifindex, int err, - const struct bpf_map *map, u32 map_index), - TP_ARGS(dev, xdp, to_ifindex, err, map, map_index) + const void *tgt, int err, + const struct bpf_map *map, u32 index), + TP_ARGS(dev, xdp, tgt, err, map, index) ); DEFINE_EVENT(xdp_redirect_template, xdp_redirect_err, TP_PROTO(const struct net_device *dev, const struct bpf_prog *xdp, - int to_ifindex, int err, - const struct bpf_map *map, u32 map_index), - TP_ARGS(dev, xdp, to_ifindex, err, map, map_index) + const void *tgt, int err, + const struct bpf_map *map, u32 index), + TP_ARGS(dev, xdp, tgt, err, map, index) ); #define _trace_xdp_redirect(dev, xdp, to) \ - trace_xdp_redirect(dev, xdp, to, 0, NULL, 0); + trace_xdp_redirect(dev, xdp, NULL, 0, NULL, to); #define _trace_xdp_redirect_err(dev, xdp, to, err) \ - trace_xdp_redirect_err(dev, xdp, to, err, NULL, 0); + trace_xdp_redirect_err(dev, xdp, NULL, err, NULL, to); + +#define _trace_xdp_redirect_map(dev, xdp, to, map, index) \ + trace_xdp_redirect(dev, xdp, to, 0, map, index); + +#define _trace_xdp_redirect_map_err(dev, xdp, to, map, index, err) \ + trace_xdp_redirect_err(dev, xdp, to, err, map, index); -DEFINE_EVENT_PRINT(xdp_redirect_template, xdp_redirect_map, +/* not used anymore, but kept around so as not to break old programs */ +DEFINE_EVENT(xdp_redirect_template, xdp_redirect_map, TP_PROTO(const struct net_device *dev, const struct bpf_prog *xdp, - int to_ifindex, int err, - const struct bpf_map *map, u32 map_index), - TP_ARGS(dev, xdp, to_ifindex, err, map, map_index), - TP_printk("prog_id=%d action=%s ifindex=%d to_ifindex=%d err=%d" - " map_id=%d map_index=%d", - __entry->prog_id, - __print_symbolic(__entry->act, __XDP_ACT_SYM_TAB), - __entry->ifindex, __entry->to_ifindex, - __entry->err, - __entry->map_id, __entry->map_index) + const void *tgt, int err, + const struct bpf_map *map, u32 index), + TP_ARGS(dev, xdp, tgt, err, map, index) ); -DEFINE_EVENT_PRINT(xdp_redirect_template, xdp_redirect_map_err, +DEFINE_EVENT(xdp_redirect_template, xdp_redirect_map_err, TP_PROTO(const struct net_device *dev, const struct bpf_prog *xdp, - int to_ifindex, int err, - const struct bpf_map *map, u32 map_index), - TP_ARGS(dev, xdp, to_ifindex, err, map, map_index), - TP_printk("prog_id=%d action=%s ifindex=%d to_ifindex=%d err=%d" - " map_id=%d map_index=%d", - __entry->prog_id, - __print_symbolic(__entry->act, __XDP_ACT_SYM_TAB), - __entry->ifindex, __entry->to_ifindex, - __entry->err, - __entry->map_id, __entry->map_index) + const void *tgt, int err, + const struct bpf_map *map, u32 index), + TP_ARGS(dev, xdp, tgt, err, map, index) ); -#ifndef __DEVMAP_OBJ_TYPE -#define __DEVMAP_OBJ_TYPE -struct _bpf_dtab_netdev { - struct net_device *dev; -}; -#endif /* __DEVMAP_OBJ_TYPE */ - -#define devmap_ifindex(fwd, map) \ - ((map->map_type == BPF_MAP_TYPE_DEVMAP || \ - map->map_type == BPF_MAP_TYPE_DEVMAP_HASH) ? 
\ - ((struct _bpf_dtab_netdev *)fwd)->dev->ifindex : 0) - -#define _trace_xdp_redirect_map(dev, xdp, fwd, map, idx) \ - trace_xdp_redirect_map(dev, xdp, devmap_ifindex(fwd, map), \ - 0, map, idx) - -#define _trace_xdp_redirect_map_err(dev, xdp, fwd, map, idx, err) \ - trace_xdp_redirect_map_err(dev, xdp, devmap_ifindex(fwd, map), \ - err, map, idx) - TRACE_EVENT(xdp_cpumap_kthread, TP_PROTO(int map_id, unsigned int processed, unsigned int drops, diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c index 030d125c3839..d5311009953f 100644 --- a/kernel/bpf/devmap.c +++ b/kernel/bpf/devmap.c @@ -81,7 +81,7 @@ struct bpf_dtab { u32 n_buckets; }; -static DEFINE_PER_CPU(struct list_head, dev_map_flush_list); +static DEFINE_PER_CPU(struct list_head, dev_flush_list); static DEFINE_SPINLOCK(dev_map_lock); static LIST_HEAD(dev_map_list); @@ -357,16 +357,16 @@ error: goto out; } -/* __dev_map_flush is called from xdp_do_flush_map() which _must_ be signaled +/* __dev_flush is called from xdp_do_flush() which _must_ be signaled * from the driver before returning from its napi->poll() routine. The poll() * routine is called either from busy_poll context or net_rx_action signaled * from NET_RX_SOFTIRQ. Either way the poll routine must complete before the * net device can be torn down. On devmap tear down we ensure the flush list * is empty before completing to ensure all flush operations have completed. */ -void __dev_map_flush(void) +void __dev_flush(void) { - struct list_head *flush_list = this_cpu_ptr(&dev_map_flush_list); + struct list_head *flush_list = this_cpu_ptr(&dev_flush_list); struct xdp_dev_bulk_queue *bq, *tmp; rcu_read_lock(); @@ -396,9 +396,8 @@ struct bpf_dtab_netdev *__dev_map_lookup_elem(struct bpf_map *map, u32 key) */ static int bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf, struct net_device *dev_rx) - { - struct list_head *flush_list = this_cpu_ptr(&dev_map_flush_list); + struct list_head *flush_list = this_cpu_ptr(&dev_flush_list); struct xdp_dev_bulk_queue *bq = this_cpu_ptr(dev->xdp_bulkq); if (unlikely(bq->count == DEV_MAP_BULK_SIZE)) @@ -419,10 +418,9 @@ static int bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf, return 0; } -int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp, - struct net_device *dev_rx) +static inline int __xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp, + struct net_device *dev_rx) { - struct net_device *dev = dst->dev; struct xdp_frame *xdpf; int err; @@ -440,6 +438,20 @@ int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp, return bq_enqueue(dev, xdpf, dev_rx); } +int dev_xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp, + struct net_device *dev_rx) +{ + return __xdp_enqueue(dev, xdp, dev_rx); +} + +int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp, + struct net_device *dev_rx) +{ + struct net_device *dev = dst->dev; + + return __xdp_enqueue(dev, xdp, dev_rx); +} + int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb, struct bpf_prog *xdp_prog) { @@ -762,7 +774,7 @@ static int __init dev_map_init(void) register_netdevice_notifier(&dev_map_notifier); for_each_possible_cpu(cpu) - INIT_LIST_HEAD(&per_cpu(dev_map_flush_list, cpu)); + INIT_LIST_HEAD(&per_cpu(dev_flush_list, cpu)); return 0; } diff --git a/net/core/filter.c b/net/core/filter.c index d6f58dc4e1c4..17de6747d9e3 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -3458,58 +3458,6 @@ static const struct bpf_func_proto bpf_xdp_adjust_meta_proto = { .arg2_type = 
ARG_ANYTHING, }; -static int __bpf_tx_xdp(struct net_device *dev, - struct bpf_map *map, - struct xdp_buff *xdp, - u32 index) -{ - struct xdp_frame *xdpf; - int err, sent; - - if (!dev->netdev_ops->ndo_xdp_xmit) { - return -EOPNOTSUPP; - } - - err = xdp_ok_fwd_dev(dev, xdp->data_end - xdp->data); - if (unlikely(err)) - return err; - - xdpf = convert_to_xdp_frame(xdp); - if (unlikely(!xdpf)) - return -EOVERFLOW; - - sent = dev->netdev_ops->ndo_xdp_xmit(dev, 1, &xdpf, XDP_XMIT_FLUSH); - if (sent <= 0) - return sent; - return 0; -} - -static noinline int -xdp_do_redirect_slow(struct net_device *dev, struct xdp_buff *xdp, - struct bpf_prog *xdp_prog, struct bpf_redirect_info *ri) -{ - struct net_device *fwd; - u32 index = ri->tgt_index; - int err; - - fwd = dev_get_by_index_rcu(dev_net(dev), index); - ri->tgt_index = 0; - if (unlikely(!fwd)) { - err = -EINVAL; - goto err; - } - - err = __bpf_tx_xdp(fwd, NULL, xdp, 0); - if (unlikely(err)) - goto err; - - _trace_xdp_redirect(dev, xdp_prog, index); - return 0; err: - _trace_xdp_redirect_err(dev, xdp_prog, index, err); - return err; -} - static int __bpf_tx_xdp_map(struct net_device *dev_rx, void *fwd, struct bpf_map *map, struct xdp_buff *xdp) { @@ -3527,13 +3475,13 @@ static int __bpf_tx_xdp_map(struct net_device *dev_rx, void *fwd, return 0; } -void xdp_do_flush_map(void) +void xdp_do_flush(void) { - __dev_map_flush(); + __dev_flush(); __cpu_map_flush(); __xsk_map_flush(); } -EXPORT_SYMBOL_GPL(xdp_do_flush_map); +EXPORT_SYMBOL_GPL(xdp_do_flush); static inline void *__xdp_map_lookup_elem(struct bpf_map *map, u32 index) { @@ -3568,10 +3516,11 @@ void bpf_clear_redirect_map(struct bpf_map *map) } } -static int xdp_do_redirect_map(struct net_device *dev, struct xdp_buff *xdp, - struct bpf_prog *xdp_prog, struct bpf_map *map, - struct bpf_redirect_info *ri) +int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp, + struct bpf_prog *xdp_prog) { + struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); + struct bpf_map *map = READ_ONCE(ri->map); u32 index = ri->tgt_index; void *fwd = ri->tgt_value; int err; @@ -3580,7 +3529,18 @@ static int xdp_do_redirect_map(struct net_device *dev, struct xdp_buff *xdp, ri->tgt_value = NULL; WRITE_ONCE(ri->map, NULL); - err = __bpf_tx_xdp_map(dev, fwd, map, xdp); + if (unlikely(!map)) { + fwd = dev_get_by_index_rcu(dev_net(dev), index); + if (unlikely(!fwd)) { + err = -EINVAL; + goto err; + } + + err = dev_xdp_enqueue(fwd, xdp, dev); + } else { + err = __bpf_tx_xdp_map(dev, fwd, map, xdp); + } + if (unlikely(err)) goto err; @@ -3590,18 +3550,6 @@ err: _trace_xdp_redirect_map_err(dev, xdp_prog, fwd, map, index, err); return err; } - -int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp, - struct bpf_prog *xdp_prog) -{ - struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); - struct bpf_map *map = READ_ONCE(ri->map); - - if (likely(map)) - return xdp_do_redirect_map(dev, xdp, xdp_prog, map, ri); - - return xdp_do_redirect_slow(dev, xdp, xdp_prog, ri); -} EXPORT_SYMBOL_GPL(xdp_do_redirect); static int xdp_do_generic_redirect_map(struct net_device *dev, -- cgit v1.2.3-71-gd317 From 58aa94f922c1b44e0919d1814d2eab5b9e8bf945 Mon Sep 17 00:00:00 2001 From: Jesper Dangaard Brouer Date: Thu, 16 Jan 2020 16:14:46 +0100 Subject: devmap: Adjust tracepoint for map-less queue flush MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Now that we don't have a reference to a devmap when flushing the device bulk queue, let's change the devmap_xmit
tracepoint to remove the map_id and map_index fields entirely. Rearrange the fields so 'drops' and 'sent' stay in the same position in the tracepoint struct, to make it possible for the xdp_monitor utility to read both the old and the new format. Signed-off-by: Jesper Dangaard Brouer Signed-off-by: Toke Høiland-Jørgensen Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/157918768613.1458396.9165902403373826572.stgit@toke.dk --- include/trace/events/xdp.h | 29 ++++++++++++----------------- kernel/bpf/devmap.c | 2 +- samples/bpf/xdp_monitor_kern.c | 8 +++----- 3 files changed, 16 insertions(+), 23 deletions(-) (limited to 'include') diff --git a/include/trace/events/xdp.h b/include/trace/events/xdp.h index b680973687b4..b95d65e8c628 100644 --- a/include/trace/events/xdp.h +++ b/include/trace/events/xdp.h @@ -246,43 +246,38 @@ TRACE_EVENT(xdp_cpumap_enqueue, TRACE_EVENT(xdp_devmap_xmit, - TP_PROTO(const struct bpf_map *map, u32 map_index, - int sent, int drops, - const struct net_device *from_dev, - const struct net_device *to_dev, int err), + TP_PROTO(const struct net_device *from_dev, + const struct net_device *to_dev, + int sent, int drops, int err), - TP_ARGS(map, map_index, sent, drops, from_dev, to_dev, err), + TP_ARGS(from_dev, to_dev, sent, drops, err), TP_STRUCT__entry( - __field(int, map_id) + __field(int, from_ifindex) __field(u32, act) - __field(u32, map_index) + __field(int, to_ifindex) __field(int, drops) __field(int, sent) - __field(int, from_ifindex) - __field(int, to_ifindex) __field(int, err) ), TP_fast_assign( - __entry->map_id = map ? map->id : 0; + __entry->from_ifindex = from_dev->ifindex; __entry->act = XDP_REDIRECT; - __entry->map_index = map_index; + __entry->to_ifindex = to_dev->ifindex; __entry->drops = drops; __entry->sent = sent; - __entry->from_ifindex = from_dev->ifindex; - __entry->to_ifindex = to_dev->ifindex; __entry->err = err; ), TP_printk("ndo_xdp_xmit" - " map_id=%d map_index=%d action=%s" + " from_ifindex=%d to_ifindex=%d action=%s" " sent=%d drops=%d" - " from_ifindex=%d to_ifindex=%d err=%d", - __entry->map_id, __entry->map_index, + " err=%d", + __entry->from_ifindex, __entry->to_ifindex, __print_symbolic(__entry->act, __XDP_ACT_SYM_TAB), __entry->sent, __entry->drops, - __entry->from_ifindex, __entry->to_ifindex, __entry->err) + __entry->err) ); /* Expect users already include <net/xdp.h>, but not xdp_priv.h */ diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c index d5311009953f..de630f980282 100644 --- a/kernel/bpf/devmap.c +++ b/kernel/bpf/devmap.c @@ -340,7 +340,7 @@ static int bq_xmit_all(struct xdp_dev_bulk_queue *bq, u32 flags) out: bq->count = 0; - trace_xdp_devmap_xmit(NULL, 0, sent, drops, bq->dev_rx, dev, err); + trace_xdp_devmap_xmit(bq->dev_rx, dev, sent, drops, err); bq->dev_rx = NULL; __list_del_clearprev(&bq->flush_node); return 0; diff --git a/samples/bpf/xdp_monitor_kern.c b/samples/bpf/xdp_monitor_kern.c index ad10fe700d7d..39458a44472e 100644 --- a/samples/bpf/xdp_monitor_kern.c +++ b/samples/bpf/xdp_monitor_kern.c @@ -222,14 +222,12 @@ struct bpf_map_def SEC("maps") devmap_xmit_cnt = { */ struct devmap_xmit_ctx { u64 __pad; // First 8 bytes are not accessible by bpf code - int map_id; // offset:8; size:4; signed:1; + int from_ifindex; // offset:8; size:4; signed:1; u32 act; // offset:12; size:4; signed:0; - u32 map_index; // offset:16; size:4; signed:0; + int to_ifindex; // offset:16; size:4; signed:1; int drops; // offset:20; size:4; signed:1; int sent; // offset:24; size:4; signed:1; - int from_ifindex; // offset:28; 
size:4; signed:1; - int to_ifindex; // offset:32; size:4; signed:1; - int err; // offset:36; size:4; signed:1; + int err; // offset:28; size:4; signed:1; }; SEC("tracepoint/xdp/xdp_devmap_xmit") -- cgit v1.2.3-71-gd317 From 080bb352fad00d04995102f681b134e3754bfb6e Mon Sep 17 00:00:00 2001 From: Florian Fainelli Date: Wed, 15 Jan 2020 20:48:50 -0800 Subject: net: phy: Maintain MDIO device and bus statistics We maintain global statistics for an entire MDIO bus, as well as statistics broken down per MDIO bus address. Given that it is possible for MDIO devices such as switches to access MDIO bus addresses for which there is no mdio_device instance created (and therefore no corresponding device directory in sysfs either), we also maintain per-address statistics under the statistics folder. The layout looks like this: /sys/class/mdio_bus/../statistics/ transfers errors writes reads transfers_ errors_ writes_ reads_ When a mdio_device instance is registered, a statistics/ folder is created with the transfers, errors, writes and reads attributes which point to the appropriate MDIO bus statistics structure. Statistics are 64-bit unsigned quantities and maintained through the u64_stats_sync.h helper functions. Signed-off-by: Florian Fainelli Tested-by: Andrew Lunn Reviewed-by: Andrew Lunn Signed-off-by: David S. Miller --- Documentation/ABI/testing/sysfs-bus-mdio | 63 ++++++++ drivers/net/phy/mdio_bus.c | 251 ++++++++++++++++++++++++++++++- include/linux/phy.h | 11 ++ 3 files changed, 323 insertions(+), 2 deletions(-) create mode 100644 Documentation/ABI/testing/sysfs-bus-mdio (limited to 'include') diff --git a/Documentation/ABI/testing/sysfs-bus-mdio b/Documentation/ABI/testing/sysfs-bus-mdio new file mode 100644 index 000000000000..da86efc7781b --- /dev/null +++ b/Documentation/ABI/testing/sysfs-bus-mdio @@ -0,0 +1,63 @@ +What: /sys/bus/mdio_bus/devices/.../statistics/ +Date: January 2020 +KernelVersion: 5.6 +Contact: netdev@vger.kernel.org +Description: + This folder contains statistics about global and per + MDIO bus address statistics. + +What: /sys/bus/mdio_bus/devices/.../statistics/transfers +Date: January 2020 +KernelVersion: 5.6 +Contact: netdev@vger.kernel.org +Description: + Total number of transfers for this MDIO bus. + +What: /sys/bus/mdio_bus/devices/.../statistics/errors +Date: January 2020 +KernelVersion: 5.6 +Contact: netdev@vger.kernel.org +Description: + Total number of transfer errors for this MDIO bus. + +What: /sys/bus/mdio_bus/devices/.../statistics/writes +Date: January 2020 +KernelVersion: 5.6 +Contact: netdev@vger.kernel.org +Description: + Total number of write transactions for this MDIO bus. + +What: /sys/bus/mdio_bus/devices/.../statistics/reads +Date: January 2020 +KernelVersion: 5.6 +Contact: netdev@vger.kernel.org +Description: + Total number of read transactions for this MDIO bus. + +What: /sys/bus/mdio_bus/devices/.../statistics/transfers_ +Date: January 2020 +KernelVersion: 5.6 +Contact: netdev@vger.kernel.org +Description: + Total number of transfers for this MDIO bus address. + +What: /sys/bus/mdio_bus/devices/.../statistics/errors_ +Date: January 2020 +KernelVersion: 5.6 +Contact: netdev@vger.kernel.org +Description: + Total number of transfer errors for this MDIO bus address. + +What: /sys/bus/mdio_bus/devices/.../statistics/writes_ +Date: January 2020 +KernelVersion: 5.6 +Contact: netdev@vger.kernel.org +Description: + Total number of write transactions for this MDIO bus address. 
+ +What: /sys/bus/mdio_bus/devices/.../statistics/reads_ +Date: January 2020 +KernelVersion: 5.6 +Contact: netdev@vger.kernel.org +Description: + Total number of read transactions for this MDIO bus address. diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c index 8d753bb07227..9bb9f37f21dc 100644 --- a/drivers/net/phy/mdio_bus.c +++ b/drivers/net/phy/mdio_bus.c @@ -158,9 +158,11 @@ struct mii_bus *mdiobus_alloc_size(size_t size) if (size) bus->priv = (void *)bus + aligned_size; - /* Initialise the interrupts to polling */ - for (i = 0; i < PHY_MAX_ADDR; i++) + /* Initialise the interrupts to polling and 64-bit seqcounts */ + for (i = 0; i < PHY_MAX_ADDR; i++) { bus->irq[i] = PHY_POLL; + u64_stats_init(&bus->stats[i].syncp); + } return bus; } @@ -249,9 +251,215 @@ static void mdiobus_release(struct device *d) kfree(bus); } +struct mdio_bus_stat_attr { + int addr; + unsigned int field_offset; +}; + +static u64 mdio_bus_get_stat(struct mdio_bus_stats *s, unsigned int offset) +{ + const char *p = (const char *)s + offset; + unsigned int start; + u64 val = 0; + + do { + start = u64_stats_fetch_begin(&s->syncp); + val = u64_stats_read((const u64_stats_t *)p); + } while (u64_stats_fetch_retry(&s->syncp, start)); + + return val; +} + +static u64 mdio_bus_get_global_stat(struct mii_bus *bus, unsigned int offset) +{ + unsigned int i; + u64 val = 0; + + for (i = 0; i < PHY_MAX_ADDR; i++) + val += mdio_bus_get_stat(&bus->stats[i], offset); + + return val; +} + +static ssize_t mdio_bus_stat_field_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct mii_bus *bus = to_mii_bus(dev); + struct mdio_bus_stat_attr *sattr; + struct dev_ext_attribute *eattr; + u64 val; + + eattr = container_of(attr, struct dev_ext_attribute, attr); + sattr = eattr->var; + + if (sattr->addr < 0) + val = mdio_bus_get_global_stat(bus, sattr->field_offset); + else + val = mdio_bus_get_stat(&bus->stats[sattr->addr], + sattr->field_offset); + + return sprintf(buf, "%llu\n", val); +} + +static ssize_t mdio_bus_device_stat_field_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct mdio_device *mdiodev = to_mdio_device(dev); + struct mii_bus *bus = mdiodev->bus; + struct mdio_bus_stat_attr *sattr; + struct dev_ext_attribute *eattr; + int addr = mdiodev->addr; + u64 val; + + eattr = container_of(attr, struct dev_ext_attribute, attr); + sattr = eattr->var; + + val = mdio_bus_get_stat(&bus->stats[addr], sattr->field_offset); + + return sprintf(buf, "%llu\n", val); +} + +#define MDIO_BUS_STATS_ATTR_DECL(field, file) \ +static struct dev_ext_attribute dev_attr_mdio_bus_##field = { \ + .attr = { .attr = { .name = file, .mode = 0444 }, \ + .show = mdio_bus_stat_field_show, \ + }, \ + .var = &((struct mdio_bus_stat_attr) { \ + -1, offsetof(struct mdio_bus_stats, field) \ + }), \ +}; \ +static struct dev_ext_attribute dev_attr_mdio_bus_device_##field = { \ + .attr = { .attr = { .name = file, .mode = 0444 }, \ + .show = mdio_bus_device_stat_field_show, \ + }, \ + .var = &((struct mdio_bus_stat_attr) { \ + -1, offsetof(struct mdio_bus_stats, field) \ + }), \ +}; + +#define MDIO_BUS_STATS_ATTR(field) \ + MDIO_BUS_STATS_ATTR_DECL(field, __stringify(field)) + +MDIO_BUS_STATS_ATTR(transfers); +MDIO_BUS_STATS_ATTR(errors); +MDIO_BUS_STATS_ATTR(writes); +MDIO_BUS_STATS_ATTR(reads); + +#define MDIO_BUS_STATS_ADDR_ATTR_DECL(field, addr, file) \ +static struct dev_ext_attribute dev_attr_mdio_bus_addr_##field##_##addr = { \ + .attr = { .attr = { .name = file, .mode = 0444 }, \ + 
.show = mdio_bus_stat_field_show, \ + }, \ + .var = &((struct mdio_bus_stat_attr) { \ + addr, offsetof(struct mdio_bus_stats, field) \ + }), \ +} + +#define MDIO_BUS_STATS_ADDR_ATTR(field, addr) \ + MDIO_BUS_STATS_ADDR_ATTR_DECL(field, addr, \ + __stringify(field) "_" __stringify(addr)) + +#define MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(addr) \ + MDIO_BUS_STATS_ADDR_ATTR(transfers, addr); \ + MDIO_BUS_STATS_ADDR_ATTR(errors, addr); \ + MDIO_BUS_STATS_ADDR_ATTR(writes, addr); \ + MDIO_BUS_STATS_ADDR_ATTR(reads, addr) \ + +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(0); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(1); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(2); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(3); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(4); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(5); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(6); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(7); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(8); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(9); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(10); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(11); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(12); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(13); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(14); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(15); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(16); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(17); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(18); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(19); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(20); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(21); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(22); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(23); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(24); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(25); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(26); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(27); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(28); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(29); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(30); +MDIO_BUS_STATS_ADDR_ATTR_GROUP_DECL(31); + +#define MDIO_BUS_STATS_ADDR_ATTR_GROUP(addr) \ + &dev_attr_mdio_bus_addr_transfers_##addr.attr.attr, \ + &dev_attr_mdio_bus_addr_errors_##addr.attr.attr, \ + &dev_attr_mdio_bus_addr_writes_##addr.attr.attr, \ + &dev_attr_mdio_bus_addr_reads_##addr.attr.attr \ + +static struct attribute *mdio_bus_statistics_attrs[] = { + &dev_attr_mdio_bus_transfers.attr.attr, + &dev_attr_mdio_bus_errors.attr.attr, + &dev_attr_mdio_bus_writes.attr.attr, + &dev_attr_mdio_bus_reads.attr.attr, + MDIO_BUS_STATS_ADDR_ATTR_GROUP(0), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(1), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(2), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(3), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(4), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(5), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(6), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(7), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(8), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(9), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(10), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(11), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(12), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(13), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(14), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(15), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(16), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(17), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(18), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(19), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(20), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(21), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(22), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(23), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(24), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(25), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(26), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(27), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(28), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(29), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(30), + MDIO_BUS_STATS_ADDR_ATTR_GROUP(31), + NULL, +}; 
+ +static const struct attribute_group mdio_bus_statistics_group = { + .name = "statistics", + .attrs = mdio_bus_statistics_attrs, +}; + +static const struct attribute_group *mdio_bus_groups[] = { + &mdio_bus_statistics_group, + NULL, +}; + static struct class mdio_bus_class = { .name = "mdio_bus", .dev_release = mdiobus_release, + .dev_groups = mdio_bus_groups, }; #if IS_ENABLED(CONFIG_OF_MDIO) @@ -530,6 +738,24 @@ struct phy_device *mdiobus_scan(struct mii_bus *bus, int addr) } EXPORT_SYMBOL(mdiobus_scan); +static void mdiobus_stats_acct(struct mdio_bus_stats *stats, bool op, int ret) +{ + u64_stats_update_begin(&stats->syncp); + + u64_stats_inc(&stats->transfers); + if (ret < 0) { + u64_stats_inc(&stats->errors); + goto out; + } + + if (op) + u64_stats_inc(&stats->reads); + else + u64_stats_inc(&stats->writes); +out: + u64_stats_update_end(&stats->syncp); +} + /** * __mdiobus_read - Unlocked version of the mdiobus_read function * @bus: the mii_bus struct @@ -549,6 +775,7 @@ int __mdiobus_read(struct mii_bus *bus, int addr, u32 regnum) retval = bus->read(bus, addr, regnum); trace_mdio_access(bus, 1, addr, regnum, retval, retval); + mdiobus_stats_acct(&bus->stats[addr], true, retval); return retval; } @@ -574,6 +801,7 @@ int __mdiobus_write(struct mii_bus *bus, int addr, u32 regnum, u16 val) err = bus->write(bus, addr, regnum, val); trace_mdio_access(bus, 0, addr, regnum, val, err); + mdiobus_stats_acct(&bus->stats[addr], false, err); return err; } @@ -719,8 +947,27 @@ static int mdio_uevent(struct device *dev, struct kobj_uevent_env *env) return 0; } +static struct attribute *mdio_bus_device_statistics_attrs[] = { + &dev_attr_mdio_bus_device_transfers.attr.attr, + &dev_attr_mdio_bus_device_errors.attr.attr, + &dev_attr_mdio_bus_device_writes.attr.attr, + &dev_attr_mdio_bus_device_reads.attr.attr, + NULL, +}; + +static const struct attribute_group mdio_bus_device_statistics_group = { + .name = "statistics", + .attrs = mdio_bus_device_statistics_attrs, +}; + +static const struct attribute_group *mdio_bus_dev_groups[] = { + &mdio_bus_device_statistics_group, + NULL, +}; + struct bus_type mdio_bus_type = { .name = "mdio_bus", + .dev_groups = mdio_bus_dev_groups, .match = mdio_bus_match, .uevent = mdio_uevent, }; diff --git a/include/linux/phy.h b/include/linux/phy.h index 2929d0bc307f..99a87f02667f 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -22,6 +22,7 @@ #include #include #include +#include #include @@ -212,6 +213,15 @@ struct sfp_bus; struct sfp_upstream_ops; struct sk_buff; +struct mdio_bus_stats { + u64_stats_t transfers; + u64_stats_t errors; + u64_stats_t writes; + u64_stats_t reads; + /* Must be last, add new statistics above */ + struct u64_stats_sync syncp; +}; + /* * The Bus class for PHYs. Devices which provide access to * PHYs should register using this structure @@ -224,6 +234,7 @@ struct mii_bus { int (*read)(struct mii_bus *bus, int addr, int regnum); int (*write)(struct mii_bus *bus, int addr, int regnum, u16 val); int (*reset)(struct mii_bus *bus); + struct mdio_bus_stats stats[PHY_MAX_ADDR]; /* * A lock to ensure that only one thing can read/write -- cgit v1.2.3-71-gd317 From 56f200c78ce4d94680a27a1ce97a29ebeb4f23e1 Mon Sep 17 00:00:00 2001 From: Guillaume Nault Date: Thu, 16 Jan 2020 21:16:46 +0100 Subject: netns: Constify exported functions Mark function parameters as 'const' where possible. Signed-off-by: Guillaume Nault Acked-by: Nicolas Dichtel Signed-off-by: David S. 
Miller --- include/net/net_namespace.h | 10 +++++----- net/core/net_namespace.c | 6 +++--- 2 files changed, 8 insertions(+), 8 deletions(-) (limited to 'include') diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h index b8ceaf0cd997..854d39ef1ca3 100644 --- a/include/net/net_namespace.h +++ b/include/net/net_namespace.h @@ -347,9 +347,9 @@ static inline struct net *read_pnet(const possible_net_t *pnet) #endif int peernet2id_alloc(struct net *net, struct net *peer, gfp_t gfp); -int peernet2id(struct net *net, struct net *peer); -bool peernet_has_id(struct net *net, struct net *peer); -struct net *get_net_ns_by_id(struct net *net, int id); +int peernet2id(const struct net *net, struct net *peer); +bool peernet_has_id(const struct net *net, struct net *peer); +struct net *get_net_ns_by_id(const struct net *net, int id); struct pernet_operations { struct list_head list; @@ -427,7 +427,7 @@ static inline void unregister_net_sysctl_table(struct ctl_table_header *header) } #endif -static inline int rt_genid_ipv4(struct net *net) +static inline int rt_genid_ipv4(const struct net *net) { return atomic_read(&net->ipv4.rt_genid); } @@ -459,7 +459,7 @@ static inline void rt_genid_bump_all(struct net *net) rt_genid_bump_ipv6(net); } -static inline int fnhe_genid(struct net *net) +static inline int fnhe_genid(const struct net *net) { return atomic_read(&net->fnhe_genid); } diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c index 6412c1fbfcb5..757cc1d084e7 100644 --- a/net/core/net_namespace.c +++ b/net/core/net_namespace.c @@ -268,7 +268,7 @@ int peernet2id_alloc(struct net *net, struct net *peer, gfp_t gfp) EXPORT_SYMBOL_GPL(peernet2id_alloc); /* This function returns, if assigned, the id of a peer netns. */ -int peernet2id(struct net *net, struct net *peer) +int peernet2id(const struct net *net, struct net *peer) { int id; @@ -283,12 +283,12 @@ EXPORT_SYMBOL(peernet2id); /* This function returns true is the peer netns has an id assigned into the * current netns. */ -bool peernet_has_id(struct net *net, struct net *peer) +bool peernet_has_id(const struct net *net, struct net *peer) { return peernet2id(net, peer) >= 0; } -struct net *get_net_ns_by_id(struct net *net, int id) +struct net *get_net_ns_by_id(const struct net *net, int id) { struct net *peer; -- cgit v1.2.3-71-gd317 From 95f0ead8f04bec18e474594ef585f3734bd85b4c Mon Sep 17 00:00:00 2001 From: Amit Cohen Date: Sun, 19 Jan 2020 15:00:48 +0200 Subject: devlink: Add non-routable packet trap Add packet trap that can report packets that reached the router, but are non-routable. For example, IGMP queries can be flooded by the device in layer 2 and reach the router. Such packets should not be routed and instead dropped. Signed-off-by: Amit Cohen Signed-off-by: Ido Schimmel Signed-off-by: David S. 
Miller --- Documentation/networking/devlink/devlink-trap.rst | 6 ++++++ include/net/devlink.h | 3 +++ net/core/devlink.c | 1 + 3 files changed, 10 insertions(+) (limited to 'include') diff --git a/Documentation/networking/devlink/devlink-trap.rst b/Documentation/networking/devlink/devlink-trap.rst index 68245ea387ad..a62e1d6eb3e4 100644 --- a/Documentation/networking/devlink/devlink-trap.rst +++ b/Documentation/networking/devlink/devlink-trap.rst @@ -223,6 +223,12 @@ be added to the following table: * - ``ipv6_lpm_miss`` - ``exception`` - Traps unicast IPv6 packets that did not match any route + * - ``non_routable_packet`` + - ``drop`` + - Traps packets that the device decided to drop because they are not + supposed to be routed. For example, IGMP queries can be flooded by the + device in layer 2 and reach the router. Such packets should not be + routed and instead dropped Driver-specific Packet Traps ============================ diff --git a/include/net/devlink.h b/include/net/devlink.h index a6856f1d5d1f..08b757753e1c 100644 --- a/include/net/devlink.h +++ b/include/net/devlink.h @@ -591,6 +591,7 @@ enum devlink_trap_generic_id { DEVLINK_TRAP_GENERIC_ID_REJECT_ROUTE, DEVLINK_TRAP_GENERIC_ID_IPV4_LPM_UNICAST_MISS, DEVLINK_TRAP_GENERIC_ID_IPV6_LPM_UNICAST_MISS, + DEVLINK_TRAP_GENERIC_ID_NON_ROUTABLE, /* Add new generic trap IDs above */ __DEVLINK_TRAP_GENERIC_ID_MAX, @@ -659,6 +660,8 @@ enum devlink_trap_group_generic_id { "ipv4_lpm_miss" #define DEVLINK_TRAP_GENERIC_NAME_IPV6_LPM_UNICAST_MISS \ "ipv6_lpm_miss" +#define DEVLINK_TRAP_GENERIC_NAME_NON_ROUTABLE \ + "non_routable_packet" #define DEVLINK_TRAP_GROUP_GENERIC_NAME_L2_DROPS \ "l2_drops" diff --git a/net/core/devlink.c b/net/core/devlink.c index d30aa47052aa..c10e38d724bc 100644 --- a/net/core/devlink.c +++ b/net/core/devlink.c @@ -7706,6 +7706,7 @@ static const struct devlink_trap devlink_trap_generic[] = { DEVLINK_TRAP(REJECT_ROUTE, EXCEPTION), DEVLINK_TRAP(IPV4_LPM_UNICAST_MISS, EXCEPTION), DEVLINK_TRAP(IPV6_LPM_UNICAST_MISS, EXCEPTION), + DEVLINK_TRAP(NON_ROUTABLE, DROP), }; #define DEVLINK_TRAP_GROUP(_id) \ -- cgit v1.2.3-71-gd317 From 13c056ec7d006b11557cebd9f1803edd646d2876 Mon Sep 17 00:00:00 2001 From: Amit Cohen Date: Sun, 19 Jan 2020 15:00:54 +0200 Subject: devlink: Add tunnel generic packet traps Add packet traps that can report packets that were dropped during tunnel decapsulation. Signed-off-by: Amit Cohen Acked-by: Jiri Pirko Signed-off-by: Ido Schimmel Signed-off-by: David S. Miller --- Documentation/networking/devlink/devlink-trap.rst | 8 ++++++++ include/net/devlink.h | 6 ++++++ net/core/devlink.c | 2 ++ 3 files changed, 16 insertions(+) (limited to 'include') diff --git a/Documentation/networking/devlink/devlink-trap.rst b/Documentation/networking/devlink/devlink-trap.rst index a62e1d6eb3e4..014f0a34c0e4 100644 --- a/Documentation/networking/devlink/devlink-trap.rst +++ b/Documentation/networking/devlink/devlink-trap.rst @@ -229,6 +229,11 @@ be added to the following table: supposed to be routed. For example, IGMP queries can be flooded by the device in layer 2 and reach the router. Such packets should not be routed and instead dropped + * - ``decap_error`` + - ``exception`` + - Traps NVE and IPinIP packets that the device decided to drop because of + failure during decapsulation (e.g., packet being too short, reserved + bits set in VXLAN header) Driver-specific Packet Traps ============================ @@ -265,6 +270,9 @@ narrow. 
The description of these groups must be added to the following table: * - ``buffer_drops`` - Contains packet traps for packets that were dropped by the device due to an enqueue decision + * - ``tunnel_drops`` + - Contains packet traps for packets that were dropped by the device during + tunnel encapsulation / decapsulation Testing ======= diff --git a/include/net/devlink.h b/include/net/devlink.h index 08b757753e1c..455282a4b714 100644 --- a/include/net/devlink.h +++ b/include/net/devlink.h @@ -592,6 +592,7 @@ enum devlink_trap_generic_id { DEVLINK_TRAP_GENERIC_ID_IPV4_LPM_UNICAST_MISS, DEVLINK_TRAP_GENERIC_ID_IPV6_LPM_UNICAST_MISS, DEVLINK_TRAP_GENERIC_ID_NON_ROUTABLE, + DEVLINK_TRAP_GENERIC_ID_DECAP_ERROR, /* Add new generic trap IDs above */ __DEVLINK_TRAP_GENERIC_ID_MAX, @@ -605,6 +606,7 @@ enum devlink_trap_group_generic_id { DEVLINK_TRAP_GROUP_GENERIC_ID_L2_DROPS, DEVLINK_TRAP_GROUP_GENERIC_ID_L3_DROPS, DEVLINK_TRAP_GROUP_GENERIC_ID_BUFFER_DROPS, + DEVLINK_TRAP_GROUP_GENERIC_ID_TUNNEL_DROPS, /* Add new generic trap group IDs above */ __DEVLINK_TRAP_GROUP_GENERIC_ID_MAX, @@ -662,6 +664,8 @@ enum devlink_trap_group_generic_id { "ipv6_lpm_miss" #define DEVLINK_TRAP_GENERIC_NAME_NON_ROUTABLE \ "non_routable_packet" +#define DEVLINK_TRAP_GENERIC_NAME_DECAP_ERROR \ + "decap_error" #define DEVLINK_TRAP_GROUP_GENERIC_NAME_L2_DROPS \ "l2_drops" @@ -669,6 +673,8 @@ enum devlink_trap_group_generic_id { "l3_drops" #define DEVLINK_TRAP_GROUP_GENERIC_NAME_BUFFER_DROPS \ "buffer_drops" +#define DEVLINK_TRAP_GROUP_GENERIC_NAME_TUNNEL_DROPS \ + "tunnel_drops" #define DEVLINK_TRAP_GENERIC(_type, _init_action, _id, _group, _metadata_cap) \ { \ diff --git a/net/core/devlink.c b/net/core/devlink.c index c10e38d724bc..af85fcd9b01e 100644 --- a/net/core/devlink.c +++ b/net/core/devlink.c @@ -7707,6 +7707,7 @@ static const struct devlink_trap devlink_trap_generic[] = { DEVLINK_TRAP(IPV4_LPM_UNICAST_MISS, EXCEPTION), DEVLINK_TRAP(IPV6_LPM_UNICAST_MISS, EXCEPTION), DEVLINK_TRAP(NON_ROUTABLE, DROP), + DEVLINK_TRAP(DECAP_ERROR, EXCEPTION), }; #define DEVLINK_TRAP_GROUP(_id) \ @@ -7719,6 +7720,7 @@ static const struct devlink_trap_group devlink_trap_group_generic[] = { DEVLINK_TRAP_GROUP(L2_DROPS), DEVLINK_TRAP_GROUP(L3_DROPS), DEVLINK_TRAP_GROUP(BUFFER_DROPS), + DEVLINK_TRAP_GROUP(TUNNEL_DROPS), }; static int devlink_trap_generic_verify(const struct devlink_trap *trap) -- cgit v1.2.3-71-gd317 From c3cae4916e57d2f0364d5e7769218547fb1b7c60 Mon Sep 17 00:00:00 2001 From: Amit Cohen Date: Sun, 19 Jan 2020 15:00:58 +0200 Subject: devlink: Add overlay source MAC is multicast trap Add packet trap that can report NVE packets that the device decided to drop because their overlay source MAC is multicast. Signed-off-by: Amit Cohen Acked-by: Jiri Pirko Signed-off-by: Ido Schimmel Signed-off-by: David S. 
Miller --- Documentation/networking/devlink/devlink-trap.rst | 4 ++++ include/net/devlink.h | 3 +++ net/core/devlink.c | 1 + 3 files changed, 8 insertions(+) (limited to 'include') diff --git a/Documentation/networking/devlink/devlink-trap.rst b/Documentation/networking/devlink/devlink-trap.rst index 014f0a34c0e4..47a429bb8658 100644 --- a/Documentation/networking/devlink/devlink-trap.rst +++ b/Documentation/networking/devlink/devlink-trap.rst @@ -234,6 +234,10 @@ be added to the following table: - Traps NVE and IPinIP packets that the device decided to drop because of failure during decapsulation (e.g., packet being too short, reserved bits set in VXLAN header) + * - ``overlay_smac_is_mc`` + - ``drop`` + - Traps NVE packets that the device decided to drop because their overlay + source MAC is multicast Driver-specific Packet Traps ============================ diff --git a/include/net/devlink.h b/include/net/devlink.h index 455282a4b714..2813fd06ee89 100644 --- a/include/net/devlink.h +++ b/include/net/devlink.h @@ -593,6 +593,7 @@ enum devlink_trap_generic_id { DEVLINK_TRAP_GENERIC_ID_IPV6_LPM_UNICAST_MISS, DEVLINK_TRAP_GENERIC_ID_NON_ROUTABLE, DEVLINK_TRAP_GENERIC_ID_DECAP_ERROR, + DEVLINK_TRAP_GENERIC_ID_OVERLAY_SMAC_MC, /* Add new generic trap IDs above */ __DEVLINK_TRAP_GENERIC_ID_MAX, @@ -666,6 +667,8 @@ enum devlink_trap_group_generic_id { "non_routable_packet" #define DEVLINK_TRAP_GENERIC_NAME_DECAP_ERROR \ "decap_error" +#define DEVLINK_TRAP_GENERIC_NAME_OVERLAY_SMAC_MC \ + "overlay_smac_is_mc" #define DEVLINK_TRAP_GROUP_GENERIC_NAME_L2_DROPS \ "l2_drops" diff --git a/net/core/devlink.c b/net/core/devlink.c index af85fcd9b01e..e5b19bd2cbe2 100644 --- a/net/core/devlink.c +++ b/net/core/devlink.c @@ -7708,6 +7708,7 @@ static const struct devlink_trap devlink_trap_generic[] = { DEVLINK_TRAP(IPV6_LPM_UNICAST_MISS, EXCEPTION), DEVLINK_TRAP(NON_ROUTABLE, DROP), DEVLINK_TRAP(DECAP_ERROR, EXCEPTION), + DEVLINK_TRAP(OVERLAY_SMAC_MC, DROP), }; #define DEVLINK_TRAP_GROUP(_id) \ -- cgit v1.2.3-71-gd317 From 2ab1d925aa4c0c179dd1eb492e8c03536972707b Mon Sep 17 00:00:00 2001 From: Heiner Kallweit Date: Sun, 19 Jan 2020 14:31:55 +0100 Subject: net: phy: add generic ndo_do_ioctl handler phy_do_ioctl A number of network drivers have the same glue code to use phy_mii_ioctl as their ndo_do_ioctl handler. So let's add such a generic ndo_do_ioctl handler to phylib. Signed-off-by: Heiner Kallweit Signed-off-by: David S. 
Miller --- drivers/net/phy/phy.c | 15 +++++++++++++++ include/linux/phy.h | 1 + 2 files changed, 16 insertions(+) (limited to 'include') diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c index 541ed01496bf..da05b3480abb 100644 --- a/drivers/net/phy/phy.c +++ b/drivers/net/phy/phy.c @@ -432,6 +432,21 @@ int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd) } EXPORT_SYMBOL(phy_mii_ioctl); +/** + * phy_do_ioctl - generic ndo_do_ioctl implementation + * @dev: the net_device struct + * @ifr: &struct ifreq for socket ioctl's + * @cmd: ioctl cmd to execute + */ +int phy_do_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) +{ + if (!netif_running(dev) || !dev->phydev) + return -ENODEV; + + return phy_mii_ioctl(dev->phydev, ifr, cmd); +} +EXPORT_SYMBOL(phy_do_ioctl); + void phy_queue_state_machine(struct phy_device *phydev, unsigned long jiffies) { mod_delayed_work(system_power_efficient_wq, &phydev->state_queue, diff --git a/include/linux/phy.h b/include/linux/phy.h index 99a87f02667f..be6b3a1b03da 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -1242,6 +1242,7 @@ void phy_ethtool_ksettings_get(struct phy_device *phydev, int phy_ethtool_ksettings_set(struct phy_device *phydev, const struct ethtool_link_ksettings *cmd); int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd); +int phy_do_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd); void phy_request_interrupt(struct phy_device *phydev); void phy_free_interrupt(struct phy_device *phydev); void phy_print_status(struct phy_device *phydev); -- cgit v1.2.3-71-gd317 From 3231e5d2228a2078ce5982d63ea9a617e4972c00 Mon Sep 17 00:00:00 2001 From: Heiner Kallweit Date: Mon, 20 Jan 2020 22:16:07 +0100 Subject: net: phy: rename phy_do_ioctl to phy_do_ioctl_running We just added phy_do_ioctl, but it turned out that we need another version of this function that doesn't check whether net_device is running. So rename phy_do_ioctl to phy_do_ioctl_running. Signed-off-by: Heiner Kallweit Reviewed-by: Florian Fainelli Reviewed-by: Andrew Lunn Signed-off-by: David S. 
Miller --- drivers/net/ethernet/realtek/r8169_main.c | 2 +- drivers/net/phy/phy.c | 6 +++--- include/linux/phy.h | 2 +- 3 files changed, 5 insertions(+), 5 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c index 175f951b6547..7a5fe11378aa 100644 --- a/drivers/net/ethernet/realtek/r8169_main.c +++ b/drivers/net/ethernet/realtek/r8169_main.c @@ -5158,7 +5158,7 @@ static const struct net_device_ops rtl_netdev_ops = { .ndo_fix_features = rtl8169_fix_features, .ndo_set_features = rtl8169_set_features, .ndo_set_mac_address = rtl_set_mac_address, - .ndo_do_ioctl = phy_do_ioctl, + .ndo_do_ioctl = phy_do_ioctl_running, .ndo_set_rx_mode = rtl_set_rx_mode, #ifdef CONFIG_NET_POLL_CONTROLLER .ndo_poll_controller = rtl8169_netpoll, diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c index da05b3480abb..cf25fa3be123 100644 --- a/drivers/net/phy/phy.c +++ b/drivers/net/phy/phy.c @@ -433,19 +433,19 @@ int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd) EXPORT_SYMBOL(phy_mii_ioctl); /** - * phy_do_ioctl - generic ndo_do_ioctl implementation + * phy_do_ioctl_running - generic ndo_do_ioctl implementation * @dev: the net_device struct * @ifr: &struct ifreq for socket ioctl's * @cmd: ioctl cmd to execute */ -int phy_do_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) +int phy_do_ioctl_running(struct net_device *dev, struct ifreq *ifr, int cmd) { if (!netif_running(dev) || !dev->phydev) return -ENODEV; return phy_mii_ioctl(dev->phydev, ifr, cmd); } -EXPORT_SYMBOL(phy_do_ioctl); +EXPORT_SYMBOL(phy_do_ioctl_running); void phy_queue_state_machine(struct phy_device *phydev, unsigned long jiffies) { diff --git a/include/linux/phy.h b/include/linux/phy.h index be6b3a1b03da..f6e714da37d8 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -1242,7 +1242,7 @@ void phy_ethtool_ksettings_get(struct phy_device *phydev, int phy_ethtool_ksettings_set(struct phy_device *phydev, const struct ethtool_link_ksettings *cmd); int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd); -int phy_do_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd); +int phy_do_ioctl_running(struct net_device *dev, struct ifreq *ifr, int cmd); void phy_request_interrupt(struct phy_device *phydev); void phy_free_interrupt(struct phy_device *phydev); void phy_print_status(struct phy_device *phydev); -- cgit v1.2.3-71-gd317 From bbbf8430afe6906abbf879352fe10d24d380e588 Mon Sep 17 00:00:00 2001 From: Heiner Kallweit Date: Mon, 20 Jan 2020 22:17:11 +0100 Subject: net: phy: add new version of phy_do_ioctl Add a new version of phy_do_ioctl that doesn't check whether net_device is running. It will typically be used if suitable drivers attach the PHY in probe already. Signed-off-by: Heiner Kallweit Reviewed-by: Florian Fainelli Reviewed-by: Andrew Lunn Signed-off-by: David S. 
Miller --- drivers/net/phy/phy.c | 16 +++++++++++++--- include/linux/phy.h | 1 + 2 files changed, 14 insertions(+), 3 deletions(-) (limited to 'include') diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c index cf25fa3be123..d76e038cf2cb 100644 --- a/drivers/net/phy/phy.c +++ b/drivers/net/phy/phy.c @@ -433,18 +433,28 @@ int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd) EXPORT_SYMBOL(phy_mii_ioctl); /** - * phy_do_ioctl_running - generic ndo_do_ioctl implementation + * phy_do_ioctl - generic ndo_do_ioctl implementation * @dev: the net_device struct * @ifr: &struct ifreq for socket ioctl's * @cmd: ioctl cmd to execute */ -int phy_do_ioctl_running(struct net_device *dev, struct ifreq *ifr, int cmd) +int phy_do_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) { - if (!netif_running(dev) || !dev->phydev) + if (!dev->phydev) return -ENODEV; return phy_mii_ioctl(dev->phydev, ifr, cmd); } +EXPORT_SYMBOL(phy_do_ioctl); + +/* same as phy_do_ioctl, but ensures that net_device is running */ +int phy_do_ioctl_running(struct net_device *dev, struct ifreq *ifr, int cmd) +{ + if (!netif_running(dev)) + return -ENODEV; + + return phy_do_ioctl(dev, ifr, cmd); +} EXPORT_SYMBOL(phy_do_ioctl_running); void phy_queue_state_machine(struct phy_device *phydev, unsigned long jiffies) diff --git a/include/linux/phy.h b/include/linux/phy.h index f6e714da37d8..c570e162e05e 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -1242,6 +1242,7 @@ void phy_ethtool_ksettings_get(struct phy_device *phydev, int phy_ethtool_ksettings_set(struct phy_device *phydev, const struct ethtool_link_ksettings *cmd); int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd); +int phy_do_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd); int phy_do_ioctl_running(struct net_device *dev, struct ifreq *ifr, int cmd); void phy_request_interrupt(struct phy_device *phydev); void phy_free_interrupt(struct phy_device *phydev); -- cgit v1.2.3-71-gd317 From f362e5fe0f1f9774b28c5b4c9b07b0168575cf6e Mon Sep 17 00:00:00 2001 From: Martin Schiller Date: Tue, 21 Jan 2020 07:00:33 +0100 Subject: wan/hdlc_x25: make lapb params configurable This enables you to configure mode (DTE/DCE), Modulo, Window, T1, T2, N2 via sethdlc (which needs to be patched as well). Signed-off-by: Martin Schiller Signed-off-by: David S. 
Miller --- drivers/net/wan/hdlc_x25.c | 80 +++++++++++++++++++++++++++++++++++++++-- include/uapi/linux/hdlc/ioctl.h | 9 +++++ include/uapi/linux/if.h | 1 + 3 files changed, 87 insertions(+), 3 deletions(-) (limited to 'include') diff --git a/drivers/net/wan/hdlc_x25.c b/drivers/net/wan/hdlc_x25.c index 5643675ff724..688233c2e1ea 100644 --- a/drivers/net/wan/hdlc_x25.c +++ b/drivers/net/wan/hdlc_x25.c @@ -21,8 +21,17 @@ #include #include +struct x25_state { + x25_hdlc_proto settings; +}; + static int x25_ioctl(struct net_device *dev, struct ifreq *ifr); +static struct x25_state *state(hdlc_device *hdlc) +{ + return hdlc->state; +} + /* These functions are callbacks called by LAPB layer */ static void x25_connect_disconnect(struct net_device *dev, int reason, int code) @@ -131,7 +140,6 @@ static netdev_tx_t x25_xmit(struct sk_buff *skb, struct net_device *dev) static int x25_open(struct net_device *dev) { - int result; static const struct lapb_register_struct cb = { .connect_confirmation = x25_connected, .connect_indication = x25_connected, @@ -140,10 +148,33 @@ static int x25_open(struct net_device *dev) .data_indication = x25_data_indication, .data_transmit = x25_data_transmit, }; + hdlc_device *hdlc = dev_to_hdlc(dev); + struct lapb_parms_struct params; + int result; result = lapb_register(dev, &cb); if (result != LAPB_OK) return result; + + result = lapb_getparms(dev, &params); + if (result != LAPB_OK) + return result; + + if (state(hdlc)->settings.dce) + params.mode = params.mode | LAPB_DCE; + + if (state(hdlc)->settings.modulo == 128) + params.mode = params.mode | LAPB_EXTENDED; + + params.window = state(hdlc)->settings.window; + params.t1 = state(hdlc)->settings.t1; + params.t2 = state(hdlc)->settings.t2; + params.n2 = state(hdlc)->settings.n2; + + result = lapb_setparms(dev, &params); + if (result != LAPB_OK) + return result; + return 0; } @@ -186,7 +217,10 @@ static struct hdlc_proto proto = { static int x25_ioctl(struct net_device *dev, struct ifreq *ifr) { + x25_hdlc_proto __user *x25_s = ifr->ifr_settings.ifs_ifsu.x25; + const size_t size = sizeof(x25_hdlc_proto); hdlc_device *hdlc = dev_to_hdlc(dev); + x25_hdlc_proto new_settings; int result; switch (ifr->ifr_settings.type) { @@ -194,7 +228,13 @@ static int x25_ioctl(struct net_device *dev, struct ifreq *ifr) if (dev_to_hdlc(dev)->proto != &proto) return -EINVAL; ifr->ifr_settings.type = IF_PROTO_X25; - return 0; /* return protocol only, no settable parameters */ + if (ifr->ifr_settings.size < size) { + ifr->ifr_settings.size = size; /* data size wanted */ + return -ENOBUFS; + } + if (copy_to_user(x25_s, &state(hdlc)->settings, size)) + return -EFAULT; + return 0; case IF_PROTO_X25: if (!capable(CAP_NET_ADMIN)) @@ -203,12 +243,46 @@ if (dev->flags & IFF_UP) return -EBUSY; + /* backward compatibility */ + if (ifr->ifr_settings.size == 0) { + new_settings.dce = 0; + new_settings.modulo = 8; + new_settings.window = 7; + new_settings.t1 = 3; + new_settings.t2 = 1; + new_settings.n2 = 10; + } + else { + if (copy_from_user(&new_settings, x25_s, size)) + return -EFAULT; + + if ((new_settings.dce != 0 && + new_settings.dce != 1) || + (new_settings.modulo != 8 && + new_settings.modulo != 128) || + new_settings.window < 1 || + (new_settings.modulo == 8 && + new_settings.window > 7) || + (new_settings.modulo == 128 && + new_settings.window > 127) || + new_settings.t1 < 1 || + new_settings.t1 > 255 || + new_settings.t2 < 1 || + new_settings.t2 > 255 || + new_settings.n2 < 1 || + 
From 43a825afc91e2b06af1e8e7422198e759c2c5e20 Mon Sep 17 00:00:00 2001 From: Björn Töpel Date: Mon, 20 Jan 2020 10:29:17 +0100 Subject: xsk, net: Make sock_def_readable() have external linkage MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit XDP sockets use the default implementation of struct sock's sk_data_ready callback, which is sock_def_readable(). This function is called in the XDP socket fast-path, and involves a retpoline. By letting sock_def_readable() have external linkage, and being called directly, the retpoline can be avoided.
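For context, a sketch of where the default callback is installed (abbreviated from net/core/sock.c; not part of this patch), which is what makes the direct call equivalent for XDP sockets:

	/* sock_init_data() installs sock_def_readable() as the default
	 * sk_data_ready callback. XDP sockets never override it, so calling
	 * the function directly skips the indirect call (and with it the
	 * retpoline) without changing behavior.
	 */
	void sock_init_data(struct socket *sock, struct sock *sk)
	{
		...
		sk->sk_data_ready	= sock_def_readable;
		sk->sk_write_space	= sock_def_write_space;
		...
	}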
Signed-off-by: Björn Töpel Signed-off-by: Daniel Borkmann Link: https://lore.kernel.org/bpf/20200120092917.13949-1-bjorn.topel@gmail.com --- include/net/sock.h | 2 ++ net/core/sock.c | 2 +- net/xdp/xsk.c | 2 +- 3 files changed, 4 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/include/net/sock.h b/include/net/sock.h index 8dff68b4c316..0891c55f1e82 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -2612,4 +2612,6 @@ static inline bool sk_dev_equal_l3scope(struct sock *sk, int dif) return false; } +void sock_def_readable(struct sock *sk); + #endif /* _SOCK_H */ diff --git a/net/core/sock.c b/net/core/sock.c index 8459ad579f73..a4c8fac781ff 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -2786,7 +2786,7 @@ static void sock_def_error_report(struct sock *sk) rcu_read_unlock(); } -static void sock_def_readable(struct sock *sk) +void sock_def_readable(struct sock *sk) { struct socket_wq *wq; diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c index 02ada7ab8c6e..df600487a68d 100644 --- a/net/xdp/xsk.c +++ b/net/xdp/xsk.c @@ -217,7 +217,7 @@ static int xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp) static void xsk_flush(struct xdp_sock *xs) { xskq_prod_submit(xs->rx); - xs->sk.sk_data_ready(&xs->sk); + sock_def_readable(&xs->sk); } int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp) -- cgit v1.2.3-71-gd317 From be8704ff07d2374bcc5c675526f95e70c6459683 Mon Sep 17 00:00:00 2001 From: Alexei Starovoitov Date: Mon, 20 Jan 2020 16:53:46 -0800 Subject: bpf: Introduce dynamic program extensions MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Introduce dynamic program extensions. Users can load additional BPF functions and replace global functions in previously loaded BPF programs while these programs are executing. Global functions are verified individually by the verifier based on their types only. Hence a global function in the new program whose type matches an older function can safely replace that corresponding function. This new function/program is called 'an extension' of the old program. At load time the verifier uses the (attach_prog_fd, attach_btf_id) pair to identify the function to be replaced. The BPF program type of the extension is derived from the target program; technically, bpf_verifier_ops is copied from the target program. The BPF_PROG_TYPE_EXT program type itself is a placeholder with empty verifier_ops. The extension program can call the same bpf helper functions as the target program. A single BPF_PROG_TYPE_EXT type is used to extend XDP, SKB and all other program types. The verifier allows only one level of replacement, meaning that an extension program cannot recursively extend another extension. That also means that the maximum stack size increases from 512 to 1024 bytes and the maximum function nesting level from 8 to 16. Programs don't always consume that much: stack usage is determined by the number of on-stack variables used by the program. The verifier could have enforced the 512-byte limit for the combined original plus extension program, but that would make for a difficult user experience. The main use case for extensions is to provide a generic mechanism to plug external programs into a policy program or into function call chaining. The BPF trampoline is used to track both fentry/fexit and program extensions, because both use the same nop slot at the beginning of every BPF function. Attaching fentry/fexit to a function that was replaced is not allowed, and the opposite is true as well: replacing a function that is currently being analyzed with fentry/fexit is not allowed. The executable page allocated by the BPF trampoline is not used by program extensions; this inefficiency will be optimized in future patches. Function-by-function verification of global functions supports scalars and pointers to context only, hence program extensions are supported only for this class of global functions. In the future the verifier will be extended with support for pointers to structures, arrays with sizes, etc.
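As a rough usage sketch (not part of this patch; the section convention follows the libbpf support that accompanies this feature, and "global_filter" is a hypothetical global function in an already-loaded target XDP program):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	/* Loaded as BPF_PROG_TYPE_EXT, with attach_prog_fd referring to the
	 * target program and attach_btf_id identifying the global function
	 * to replace. The prototype must match the original; that is what
	 * btf_check_type_match() enforces.
	 */
	SEC("freplace/global_filter")
	int new_global_filter(struct xdp_md *ctx)
	{
		return XDP_PASS;	/* replacement behavior */
	}

	char _license[] SEC("license") = "GPL";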
Signed-off-by: Alexei Starovoitov Signed-off-by: Daniel Borkmann Acked-by: John Fastabend Acked-by: Andrii Nakryiko Acked-by: Toke Høiland-Jørgensen Link: https://lore.kernel.org/bpf/20200121005348.2769920-2-ast@kernel.org --- include/linux/bpf.h | 10 ++- include/linux/bpf_types.h | 2 + include/linux/btf.h | 5 ++ include/uapi/linux/bpf.h | 1 + kernel/bpf/btf.c | 152 +++++++++++++++++++++++++++++++++++++++++++++- kernel/bpf/syscall.c | 15 ++++- kernel/bpf/trampoline.c | 41 ++++++++++++- kernel/bpf/verifier.c | 85 ++++++++++++++++++++------ 8 files changed, 283 insertions(+), 28 deletions(-) (limited to 'include') diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 8e3b8f4ad183..05d16615054c 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -465,7 +465,8 @@ void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start); enum bpf_tramp_prog_type { BPF_TRAMP_FENTRY, BPF_TRAMP_FEXIT, - BPF_TRAMP_MAX + BPF_TRAMP_MAX, + BPF_TRAMP_REPLACE, /* more than MAX */ }; struct bpf_trampoline { @@ -480,6 +481,11 @@ struct bpf_trampoline { void *addr; bool ftrace_managed; } func; + /* if !NULL this is BPF_PROG_TYPE_EXT program that extends another BPF + * program by replacing one of its functions. func.addr is the address + * of the function it replaced. + */ + struct bpf_prog *extension_prog; /* list of BPF programs using this trampoline */ struct hlist_head progs_hlist[BPF_TRAMP_MAX]; /* Number of attached programs. A counter per kind.
*/ @@ -1107,6 +1113,8 @@ int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog, struct bpf_reg_state *regs); int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog, struct bpf_reg_state *reg); +int btf_check_type_match(struct bpf_verifier_env *env, struct bpf_prog *prog, + struct btf *btf, const struct btf_type *t); struct bpf_prog *bpf_prog_by_id(u32 id); diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h index 9f326e6ef885..c81d4ece79a4 100644 --- a/include/linux/bpf_types.h +++ b/include/linux/bpf_types.h @@ -68,6 +68,8 @@ BPF_PROG_TYPE(BPF_PROG_TYPE_SK_REUSEPORT, sk_reuseport, #if defined(CONFIG_BPF_JIT) BPF_PROG_TYPE(BPF_PROG_TYPE_STRUCT_OPS, bpf_struct_ops, void *, void *) +BPF_PROG_TYPE(BPF_PROG_TYPE_EXT, bpf_extension, + void *, void *) #endif BPF_MAP_TYPE(BPF_MAP_TYPE_ARRAY, array_map_ops) diff --git a/include/linux/btf.h b/include/linux/btf.h index 881e9b76ef49..5c1ea99b480f 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -107,6 +107,11 @@ static inline u16 btf_type_vlen(const struct btf_type *t) return BTF_INFO_VLEN(t->info); } +static inline u16 btf_func_linkage(const struct btf_type *t) +{ + return BTF_INFO_VLEN(t->info); +} + static inline bool btf_type_kflag(const struct btf_type *t) { return BTF_INFO_KFLAG(t->info); diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 033d90a2282d..e81628eb059c 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -180,6 +180,7 @@ enum bpf_prog_type { BPF_PROG_TYPE_CGROUP_SOCKOPT, BPF_PROG_TYPE_TRACING, BPF_PROG_TYPE_STRUCT_OPS, + BPF_PROG_TYPE_EXT, }; enum bpf_attach_type { diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 832b5d7fd892..32963b6d5a9c 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -276,6 +276,11 @@ static const char * const btf_kind_str[NR_BTF_KINDS] = { [BTF_KIND_DATASEC] = "DATASEC", }; +static const char *btf_type_str(const struct btf_type *t) +{ + return btf_kind_str[BTF_INFO_KIND(t->info)]; +} + struct btf_kind_operations { s32 (*check_meta)(struct btf_verifier_env *env, const struct btf_type *t, @@ -4115,6 +4120,148 @@ int btf_distill_func_proto(struct bpf_verifier_log *log, return 0; } +/* Compare BTFs of two functions assuming only scalars and pointers to context. + * t1 points to BTF_KIND_FUNC in btf1 + * t2 points to BTF_KIND_FUNC in btf2 + * Returns: + * EINVAL - function prototype mismatch + * EFAULT - verifier bug + * 0 - 99% match. The last 1% is validated by the verifier. 
+ */ +int btf_check_func_type_match(struct bpf_verifier_log *log, + struct btf *btf1, const struct btf_type *t1, + struct btf *btf2, const struct btf_type *t2) +{ + const struct btf_param *args1, *args2; + const char *fn1, *fn2, *s1, *s2; + u32 nargs1, nargs2, i; + + fn1 = btf_name_by_offset(btf1, t1->name_off); + fn2 = btf_name_by_offset(btf2, t2->name_off); + + if (btf_func_linkage(t1) != BTF_FUNC_GLOBAL) { + bpf_log(log, "%s() is not a global function\n", fn1); + return -EINVAL; + } + if (btf_func_linkage(t2) != BTF_FUNC_GLOBAL) { + bpf_log(log, "%s() is not a global function\n", fn2); + return -EINVAL; + } + + t1 = btf_type_by_id(btf1, t1->type); + if (!t1 || !btf_type_is_func_proto(t1)) + return -EFAULT; + t2 = btf_type_by_id(btf2, t2->type); + if (!t2 || !btf_type_is_func_proto(t2)) + return -EFAULT; + + args1 = (const struct btf_param *)(t1 + 1); + nargs1 = btf_type_vlen(t1); + args2 = (const struct btf_param *)(t2 + 1); + nargs2 = btf_type_vlen(t2); + + if (nargs1 != nargs2) { + bpf_log(log, "%s() has %d args while %s() has %d args\n", + fn1, nargs1, fn2, nargs2); + return -EINVAL; + } + + t1 = btf_type_skip_modifiers(btf1, t1->type, NULL); + t2 = btf_type_skip_modifiers(btf2, t2->type, NULL); + if (t1->info != t2->info) { + bpf_log(log, + "Return type %s of %s() doesn't match type %s of %s()\n", + btf_type_str(t1), fn1, + btf_type_str(t2), fn2); + return -EINVAL; + } + + for (i = 0; i < nargs1; i++) { + t1 = btf_type_skip_modifiers(btf1, args1[i].type, NULL); + t2 = btf_type_skip_modifiers(btf2, args2[i].type, NULL); + + if (t1->info != t2->info) { + bpf_log(log, "arg%d in %s() is %s while %s() has %s\n", + i, fn1, btf_type_str(t1), + fn2, btf_type_str(t2)); + return -EINVAL; + } + if (btf_type_has_size(t1) && t1->size != t2->size) { + bpf_log(log, + "arg%d in %s() has size %d while %s() has %d\n", + i, fn1, t1->size, + fn2, t2->size); + return -EINVAL; + } + + /* global functions are validated with scalars and pointers + * to context only. And only global functions can be replaced. + * Hence type check only those types. + */ + if (btf_type_is_int(t1) || btf_type_is_enum(t1)) + continue; + if (!btf_type_is_ptr(t1)) { + bpf_log(log, + "arg%d in %s() has unrecognized type\n", + i, fn1); + return -EINVAL; + } + t1 = btf_type_skip_modifiers(btf1, t1->type, NULL); + t2 = btf_type_skip_modifiers(btf2, t2->type, NULL); + if (!btf_type_is_struct(t1)) { + bpf_log(log, + "arg%d in %s() is not a pointer to context\n", + i, fn1); + return -EINVAL; + } + if (!btf_type_is_struct(t2)) { + bpf_log(log, + "arg%d in %s() is not a pointer to context\n", + i, fn2); + return -EINVAL; + } + /* This is an optional check to make program writing easier. + * Compare names of structs and report an error to the user. + * btf_prepare_func_args() already checked that t2 struct + * is a context type. btf_prepare_func_args() will check + * later that t1 struct is a context type as well. 
+ */ + s1 = btf_name_by_offset(btf1, t1->name_off); + s2 = btf_name_by_offset(btf2, t2->name_off); + if (strcmp(s1, s2)) { + bpf_log(log, + "arg%d %s(struct %s *) doesn't match %s(struct %s *)\n", + i, fn1, s1, fn2, s2); + return -EINVAL; + } + } + return 0; +} + +/* Compare BTFs of given program with BTF of target program */ +int btf_check_type_match(struct bpf_verifier_env *env, struct bpf_prog *prog, + struct btf *btf2, const struct btf_type *t2) +{ + struct btf *btf1 = prog->aux->btf; + const struct btf_type *t1; + u32 btf_id = 0; + + if (!prog->aux->func_info) { + bpf_log(&env->log, "Program extension requires BTF\n"); + return -EINVAL; + } + + btf_id = prog->aux->func_info[0].type_id; + if (!btf_id) + return -EFAULT; + + t1 = btf_type_by_id(btf1, btf_id); + if (!t1 || !btf_type_is_func(t1)) + return -EFAULT; + + return btf_check_func_type_match(&env->log, btf1, t1, btf2, t2); +} + /* Compare BTF of a function with given bpf_reg_state. * Returns: * EFAULT - there is a verifier bug. Abort verification. @@ -4224,6 +4371,7 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog, { struct bpf_verifier_log *log = &env->log; struct bpf_prog *prog = env->prog; + enum bpf_prog_type prog_type = prog->type; struct btf *btf = prog->aux->btf; const struct btf_param *args; const struct btf_type *t; @@ -4261,6 +4409,8 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog, bpf_log(log, "Verifier bug in function %s()\n", tname); return -EFAULT; } + if (prog_type == BPF_PROG_TYPE_EXT) + prog_type = prog->aux->linked_prog->type; t = btf_type_by_id(btf, t->type); if (!t || !btf_type_is_func_proto(t)) { @@ -4296,7 +4446,7 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog, continue; } if (btf_type_is_ptr(t) && - btf_get_prog_ctx_type(log, btf, t, prog->type, i)) { + btf_get_prog_ctx_type(log, btf, t, prog_type, i)) { reg[i + 1].type = PTR_TO_CTX; continue; } diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 9a840c57f6df..a91ad518c050 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -1932,13 +1932,15 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type, switch (prog_type) { case BPF_PROG_TYPE_TRACING: case BPF_PROG_TYPE_STRUCT_OPS: + case BPF_PROG_TYPE_EXT: break; default: return -EINVAL; } } - if (prog_fd && prog_type != BPF_PROG_TYPE_TRACING) + if (prog_fd && prog_type != BPF_PROG_TYPE_TRACING && + prog_type != BPF_PROG_TYPE_EXT) return -EINVAL; switch (prog_type) { @@ -1981,6 +1983,10 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type, default: return -EINVAL; } + case BPF_PROG_TYPE_EXT: + if (expected_attach_type) + return -EINVAL; + /* fallthrough */ default: return 0; } @@ -2183,7 +2189,8 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog) int tr_fd, err; if (prog->expected_attach_type != BPF_TRACE_FENTRY && - prog->expected_attach_type != BPF_TRACE_FEXIT) { + prog->expected_attach_type != BPF_TRACE_FEXIT && + prog->type != BPF_PROG_TYPE_EXT) { err = -EINVAL; goto out_put_prog; } @@ -2250,12 +2257,14 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr) if (prog->type != BPF_PROG_TYPE_RAW_TRACEPOINT && prog->type != BPF_PROG_TYPE_TRACING && + prog->type != BPF_PROG_TYPE_EXT && prog->type != BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE) { err = -EINVAL; goto out_put_prog; } - if (prog->type == BPF_PROG_TYPE_TRACING) { + if (prog->type == BPF_PROG_TYPE_TRACING || + prog->type == BPF_PROG_TYPE_EXT) { if (attr->raw_tracepoint.name) { /* The attach point for this category of programs * should be 
specified via btf_id during program load. diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index 7657ede7aee2..eb64c245052b 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -5,6 +5,12 @@ #include #include +/* dummy _ops. The verifier will operate on target program's ops. */ +const struct bpf_verifier_ops bpf_extension_verifier_ops = { +}; +const struct bpf_prog_ops bpf_extension_prog_ops = { +}; + /* btf_vmlinux has ~22k attachable functions. 1k htab is enough. */ #define TRAMPOLINE_HASH_BITS 10 #define TRAMPOLINE_TABLE_SIZE (1 << TRAMPOLINE_HASH_BITS) @@ -194,8 +200,10 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(enum bpf_attach_type t) switch (t) { case BPF_TRACE_FENTRY: return BPF_TRAMP_FENTRY; - default: + case BPF_TRACE_FEXIT: return BPF_TRAMP_FEXIT; + default: + return BPF_TRAMP_REPLACE; } } @@ -204,12 +212,31 @@ int bpf_trampoline_link_prog(struct bpf_prog *prog) enum bpf_tramp_prog_type kind; struct bpf_trampoline *tr; int err = 0; + int cnt; tr = prog->aux->trampoline; kind = bpf_attach_type_to_tramp(prog->expected_attach_type); mutex_lock(&tr->mutex); - if (tr->progs_cnt[BPF_TRAMP_FENTRY] + tr->progs_cnt[BPF_TRAMP_FEXIT] - >= BPF_MAX_TRAMP_PROGS) { + if (tr->extension_prog) { + /* cannot attach fentry/fexit if extension prog is attached. + * cannot overwrite extension prog either. + */ + err = -EBUSY; + goto out; + } + cnt = tr->progs_cnt[BPF_TRAMP_FENTRY] + tr->progs_cnt[BPF_TRAMP_FEXIT]; + if (kind == BPF_TRAMP_REPLACE) { + /* Cannot attach extension if fentry/fexit are in use. */ + if (cnt) { + err = -EBUSY; + goto out; + } + tr->extension_prog = prog; + err = bpf_arch_text_poke(tr->func.addr, BPF_MOD_JUMP, NULL, + prog->bpf_func); + goto out; + } + if (cnt >= BPF_MAX_TRAMP_PROGS) { err = -E2BIG; goto out; } @@ -240,9 +267,17 @@ int bpf_trampoline_unlink_prog(struct bpf_prog *prog) tr = prog->aux->trampoline; kind = bpf_attach_type_to_tramp(prog->expected_attach_type); mutex_lock(&tr->mutex); + if (kind == BPF_TRAMP_REPLACE) { + WARN_ON_ONCE(!tr->extension_prog); + err = bpf_arch_text_poke(tr->func.addr, BPF_MOD_JUMP, + tr->extension_prog->bpf_func, NULL); + tr->extension_prog = NULL; + goto out; + } hlist_del(&prog->aux->tramp_hlist); tr->progs_cnt[kind]--; err = bpf_trampoline_update(prog->aux->trampoline); +out: mutex_unlock(&tr->mutex); return err; } diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 6defbec9eb62..9fe94f64f0ec 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -9564,7 +9564,7 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog) subprog); regs = state->frame[state->curframe]->regs; - if (subprog) { + if (subprog || env->prog->type == BPF_PROG_TYPE_EXT) { ret = btf_prepare_func_args(env, subprog, regs); if (ret) goto out; @@ -9742,6 +9742,7 @@ static int check_struct_ops_btf_id(struct bpf_verifier_env *env) static int check_attach_btf_id(struct bpf_verifier_env *env) { struct bpf_prog *prog = env->prog; + bool prog_extension = prog->type == BPF_PROG_TYPE_EXT; struct bpf_prog *tgt_prog = prog->aux->linked_prog; u32 btf_id = prog->aux->attach_btf_id; const char prefix[] = "btf_trace_"; @@ -9757,7 +9758,7 @@ static int check_attach_btf_id(struct bpf_verifier_env *env) if (prog->type == BPF_PROG_TYPE_STRUCT_OPS) return check_struct_ops_btf_id(env); - if (prog->type != BPF_PROG_TYPE_TRACING) + if (prog->type != BPF_PROG_TYPE_TRACING && !prog_extension) return 0; if (!btf_id) { @@ -9793,8 +9794,59 @@ static int check_attach_btf_id(struct bpf_verifier_env *env) return 
-EINVAL; } conservative = aux->func_info_aux[subprog].unreliable; + if (prog_extension) { + if (conservative) { + verbose(env, + "Cannot replace static functions\n"); + return -EINVAL; + } + if (!prog->jit_requested) { + verbose(env, + "Extension programs should be JITed\n"); + return -EINVAL; + } + env->ops = bpf_verifier_ops[tgt_prog->type]; + } + if (!tgt_prog->jited) { + verbose(env, "Can attach to only JITed progs\n"); + return -EINVAL; + } + if (tgt_prog->type == prog->type) { + /* Cannot fentry/fexit another fentry/fexit program. + * Cannot attach program extension to another extension. + * It's ok to attach fentry/fexit to extension program. + */ + verbose(env, "Cannot recursively attach\n"); + return -EINVAL; + } + if (tgt_prog->type == BPF_PROG_TYPE_TRACING && + prog_extension && + (tgt_prog->expected_attach_type == BPF_TRACE_FENTRY || + tgt_prog->expected_attach_type == BPF_TRACE_FEXIT)) { + /* Program extensions can extend all program types + * except fentry/fexit. The reason is the following. + * The fentry/fexit programs are used for performance + * analysis, stats and can be attached to any program + * type except themselves. When extension program is + * replacing XDP function it is necessary to allow + * performance analysis of all functions. Both original + * XDP program and its program extension. Hence + * attaching fentry/fexit to BPF_PROG_TYPE_EXT is + * allowed. If extending of fentry/fexit was allowed it + * would be possible to create long call chain + * fentry->extension->fentry->extension beyond + * reasonable stack size. Hence extending fentry is not + * allowed. + */ + verbose(env, "Cannot extend fentry/fexit\n"); + return -EINVAL; + } key = ((u64)aux->id) << 32 | btf_id; } else { + if (prog_extension) { + verbose(env, "Cannot replace kernel functions\n"); + return -EINVAL; + } key = btf_id; } @@ -9832,6 +9884,10 @@ static int check_attach_btf_id(struct bpf_verifier_env *env) prog->aux->attach_func_proto = t; prog->aux->attach_btf_trace = true; return 0; + default: + if (!prog_extension) + return -EINVAL; + /* fallthrough */ case BPF_TRACE_FENTRY: case BPF_TRACE_FEXIT: if (!btf_type_is_func(t)) { @@ -9839,6 +9895,9 @@ static int check_attach_btf_id(struct bpf_verifier_env *env) btf_id); return -EINVAL; } + if (prog_extension && + btf_check_type_match(env, prog, btf, t)) + return -EINVAL; t = btf_type_by_id(btf, t->type); if (!btf_type_is_func_proto(t)) return -EINVAL; @@ -9862,18 +9921,6 @@ static int check_attach_btf_id(struct bpf_verifier_env *env) if (ret < 0) goto out; if (tgt_prog) { - if (!tgt_prog->jited) { - /* for now */ - verbose(env, "Can trace only JITed BPF progs\n"); - ret = -EINVAL; - goto out; - } - if (tgt_prog->type == BPF_PROG_TYPE_TRACING) { - /* prevent cycles */ - verbose(env, "Cannot recursively attach\n"); - ret = -EINVAL; - goto out; - } if (subprog == 0) addr = (long) tgt_prog->bpf_func; else @@ -9895,8 +9942,6 @@ out: if (ret) bpf_trampoline_put(tr); return ret; - default: - return -EINVAL; } } @@ -9966,10 +10011,6 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, goto skip_full_check; } - ret = check_attach_btf_id(env); - if (ret) - goto skip_full_check; - env->strict_alignment = !!(attr->prog_flags & BPF_F_STRICT_ALIGNMENT); if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) env->strict_alignment = true; @@ -10006,6 +10047,10 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, if (ret < 0) goto skip_full_check; + ret = check_attach_btf_id(env); + if (ret) + goto skip_full_check; + ret = check_cfg(env); 
if (ret < 0) goto skip_full_check; -- cgit v1.2.3-71-gd317 From 5576b991e9c1a11d2cc21c4b94fc75ec27603896 Mon Sep 17 00:00:00 2001 From: Martin KaFai Lau Date: Wed, 22 Jan 2020 15:36:46 -0800 Subject: bpf: Add BPF_FUNC_jiffies64 This patch adds a helper to read the 64-bit jiffies. It will be used in a later patch to implement bpf_cubic.c. The helper is inlined when jit_requested is set and BITS_PER_LONG is 64, in the same way as map_gen_lookup(). Other cases could be considered together with map_gen_lookup() if needed. Signed-off-by: Martin KaFai Lau Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/20200122233646.903260-1-kafai@fb.com --- include/linux/bpf.h | 1 + include/uapi/linux/bpf.h | 9 ++++++++- kernel/bpf/core.c | 1 + kernel/bpf/helpers.c | 12 ++++++++++++ kernel/bpf/verifier.c | 24 ++++++++++++++++++++++++ net/core/filter.c | 2 ++ 6 files changed, 48 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 05d16615054c..a9687861fd7e 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1414,6 +1414,7 @@ extern const struct bpf_func_proto bpf_get_local_storage_proto; extern const struct bpf_func_proto bpf_strtol_proto; extern const struct bpf_func_proto bpf_strtoul_proto; extern const struct bpf_func_proto bpf_tcp_sock_proto; +extern const struct bpf_func_proto bpf_jiffies64_proto; /* Shared helpers among cBPF and eBPF. */ void bpf_user_rnd_init_once(void); diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index e81628eb059c..f1d74a2bd234 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -2886,6 +2886,12 @@ union bpf_attr { * **-EPERM** if no permission to send the *sig*. * * **-EAGAIN** if bpf program can try again. + * + * u64 bpf_jiffies64(void) + * Description + * Obtain the 64bit jiffies + * Return + * The 64 bit jiffies */ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -3005,7 +3011,8 @@ union bpf_attr { FN(probe_read_user_str), \ FN(probe_read_kernel_str), \ FN(tcp_send_ack), \ - FN(send_signal_thread), + FN(send_signal_thread), \ + FN(jiffies64), /* integer value in 'imm' field of BPF_CALL instruction selects which helper * function eBPF program intends to call diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 29d47aae0dd1..973a20d49749 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -2137,6 +2137,7 @@ const struct bpf_func_proto bpf_map_pop_elem_proto __weak; const struct bpf_func_proto bpf_map_peek_elem_proto __weak; const struct bpf_func_proto bpf_spin_lock_proto __weak; const struct bpf_func_proto bpf_spin_unlock_proto __weak; +const struct bpf_func_proto bpf_jiffies64_proto __weak; const struct bpf_func_proto bpf_get_prandom_u32_proto __weak; const struct bpf_func_proto bpf_get_smp_processor_id_proto __weak; diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index cada974c9f4e..d8b7b110a1c5 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -11,6 +11,7 @@ #include #include #include +#include <linux/jiffies.h> #include "../../lib/kstrtox.h" @@ -312,6 +313,17 @@ void copy_map_value_locked(struct bpf_map *map, void *dst, void *src, preempt_enable(); } +BPF_CALL_0(bpf_jiffies64) +{ + return get_jiffies_64(); +} + +const struct bpf_func_proto bpf_jiffies64_proto = { + .func = bpf_jiffies64, + .gpl_only = false, + .ret_type = RET_INTEGER, +}; + #ifdef CONFIG_CGROUPS BPF_CALL_0(bpf_get_current_cgroup_id) { diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 9fe94f64f0ec..734abaa02123 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c
@@ -9445,6 +9445,30 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env) goto patch_call_imm; } + if (prog->jit_requested && BITS_PER_LONG == 64 && + insn->imm == BPF_FUNC_jiffies64) { + struct bpf_insn ld_jiffies_addr[2] = { + BPF_LD_IMM64(BPF_REG_0, + (unsigned long)&jiffies), + }; + + insn_buf[0] = ld_jiffies_addr[0]; + insn_buf[1] = ld_jiffies_addr[1]; + insn_buf[2] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, + BPF_REG_0, 0); + cnt = 3; + + new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, + cnt); + if (!new_prog) + return -ENOMEM; + + delta += cnt - 1; + env->prog = prog = new_prog; + insn = new_prog->insnsi + i + delta; + continue; + } + patch_call_imm: fn = env->ops->get_func_proto(insn->imm, env->prog); /* all functions that have prototype and verifier allowed diff --git a/net/core/filter.c b/net/core/filter.c index 17de6747d9e3..4bf3e4aa8a7a 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -5923,6 +5923,8 @@ bpf_base_func_proto(enum bpf_func_id func_id) return &bpf_spin_unlock_proto; case BPF_FUNC_trace_printk: return bpf_get_trace_printk_proto(); + case BPF_FUNC_jiffies64: + return &bpf_jiffies64_proto; default: return NULL; } -- cgit v1.2.3-71-gd317 From 84bf557fb02f5924c109a21a160ffc353d878487 Mon Sep 17 00:00:00 2001 From: "Mohit P. Tahiliani" Date: Wed, 22 Jan 2020 23:52:24 +0530 Subject: net: sched: pie: move common code to pie.h This patch moves macros, structures and small functions common to PIE and FQ-PIE (to be added in a future commit) from the file net/sched/sch_pie.c to the header file include/net/pie.h. All the moved functions are made inline. Signed-off-by: Mohit P. Tahiliani Signed-off-by: Leslie Monis Signed-off-by: Gautam Ramakrishnan Signed-off-by: David S. Miller --- include/net/pie.h | 96 +++++++++++++++++++++++++++++++++++++++++++++++++++++ net/sched/sch_pie.c | 86 +---------------------------------------------- 2 files changed, 97 insertions(+), 85 deletions(-) create mode 100644 include/net/pie.h (limited to 'include') diff --git a/include/net/pie.h b/include/net/pie.h new file mode 100644 index 000000000000..440213ec83eb --- /dev/null +++ b/include/net/pie.h @@ -0,0 +1,96 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __NET_SCHED_PIE_H +#define __NET_SCHED_PIE_H + +#include +#include +#include +#include +#include + +#define QUEUE_THRESHOLD 16384 +#define DQCOUNT_INVALID -1 +#define DTIME_INVALID 0xffffffffffffffff +#define MAX_PROB 0xffffffffffffffff +#define PIE_SCALE 8 + +/* parameters used */ +struct pie_params { + psched_time_t target; /* user specified target delay in pschedtime */ + u32 tupdate; /* timer frequency (in jiffies) */ + u32 limit; /* number of packets that can be enqueued */ + u32 alpha; /* alpha and beta are between 0 and 32 */ + u32 beta; /* and are used for shift relative to 1 */ + bool ecn; /* true if ecn is enabled */ + bool bytemode; /* to scale drop early prob based on pkt size */ + u8 dq_rate_estimator; /* to calculate delay using Little's law */ +}; + +/* variables used */ +struct pie_vars { + u64 prob; /* probability but scaled by u64 limit. 
*/ + psched_time_t burst_time; + psched_time_t qdelay; + psched_time_t qdelay_old; + u64 dq_count; /* measured in bytes */ + psched_time_t dq_tstamp; /* drain rate */ + u64 accu_prob; /* accumulated drop probability */ + u32 avg_dq_rate; /* bytes per pschedtime tick,scaled */ + u32 qlen_old; /* in bytes */ + u8 accu_prob_overflows; /* overflows of accu_prob */ +}; + +/* statistics gathering */ +struct pie_stats { + u32 packets_in; /* total number of packets enqueued */ + u32 dropped; /* packets dropped due to pie_action */ + u32 overlimit; /* dropped due to lack of space in queue */ + u32 maxq; /* maximum queue size */ + u32 ecn_mark; /* packets marked with ECN */ +}; + +/* private skb vars */ +struct pie_skb_cb { + psched_time_t enqueue_time; +}; + +static inline void pie_params_init(struct pie_params *params) +{ + params->alpha = 2; + params->beta = 20; + params->tupdate = usecs_to_jiffies(15 * USEC_PER_MSEC); /* 15 ms */ + params->limit = 1000; /* default of 1000 packets */ + params->target = PSCHED_NS2TICKS(15 * NSEC_PER_MSEC); /* 15 ms */ + params->ecn = false; + params->bytemode = false; + params->dq_rate_estimator = false; +} + +static inline void pie_vars_init(struct pie_vars *vars) +{ + vars->dq_count = DQCOUNT_INVALID; + vars->dq_tstamp = DTIME_INVALID; + vars->accu_prob = 0; + vars->avg_dq_rate = 0; + /* default of 150 ms in pschedtime */ + vars->burst_time = PSCHED_NS2TICKS(150 * NSEC_PER_MSEC); + vars->accu_prob_overflows = 0; +} + +static inline struct pie_skb_cb *get_pie_cb(const struct sk_buff *skb) +{ + qdisc_cb_private_validate(skb, sizeof(struct pie_skb_cb)); + return (struct pie_skb_cb *)qdisc_skb_cb(skb)->data; +} + +static inline psched_time_t pie_get_enqueue_time(const struct sk_buff *skb) +{ + return get_pie_cb(skb)->enqueue_time; +} + +static inline void pie_set_enqueue_time(struct sk_buff *skb) +{ + get_pie_cb(skb)->enqueue_time = psched_get_time(); +} + +#endif diff --git a/net/sched/sch_pie.c b/net/sched/sch_pie.c index b0b0dc46af61..7197bcaa14ba 100644 --- a/net/sched/sch_pie.c +++ b/net/sched/sch_pie.c @@ -19,47 +19,7 @@ #include #include #include - -#define QUEUE_THRESHOLD 16384 -#define DQCOUNT_INVALID -1 -#define DTIME_INVALID 0xffffffffffffffff -#define MAX_PROB 0xffffffffffffffff -#define PIE_SCALE 8 - -/* parameters used */ -struct pie_params { - psched_time_t target; /* user specified target delay in pschedtime */ - u32 tupdate; /* timer frequency (in jiffies) */ - u32 limit; /* number of packets that can be enqueued */ - u32 alpha; /* alpha and beta are between 0 and 32 */ - u32 beta; /* and are used for shift relative to 1 */ - bool ecn; /* true if ecn is enabled */ - bool bytemode; /* to scale drop early prob based on pkt size */ - u8 dq_rate_estimator; /* to calculate delay using Little's law */ -}; - -/* variables used */ -struct pie_vars { - u64 prob; /* probability but scaled by u64 limit. 
*/ - psched_time_t burst_time; - psched_time_t qdelay; - psched_time_t qdelay_old; - u64 dq_count; /* measured in bytes */ - psched_time_t dq_tstamp; /* drain rate */ - u64 accu_prob; /* accumulated drop probability */ - u32 avg_dq_rate; /* bytes per pschedtime tick,scaled */ - u32 qlen_old; /* in bytes */ - u8 accu_prob_overflows; /* overflows of accu_prob */ -}; - -/* statistics gathering */ -struct pie_stats { - u32 packets_in; /* total number of packets enqueued */ - u32 dropped; /* packets dropped due to pie_action */ - u32 overlimit; /* dropped due to lack of space in queue */ - u32 maxq; /* maximum queue size */ - u32 ecn_mark; /* packets marked with ECN */ -}; +#include /* private data for the Qdisc */ struct pie_sched_data { @@ -70,50 +30,6 @@ struct pie_sched_data { struct Qdisc *sch; }; -static void pie_params_init(struct pie_params *params) -{ - params->alpha = 2; - params->beta = 20; - params->tupdate = usecs_to_jiffies(15 * USEC_PER_MSEC); /* 15 ms */ - params->limit = 1000; /* default of 1000 packets */ - params->target = PSCHED_NS2TICKS(15 * NSEC_PER_MSEC); /* 15 ms */ - params->ecn = false; - params->bytemode = false; - params->dq_rate_estimator = false; -} - -/* private skb vars */ -struct pie_skb_cb { - psched_time_t enqueue_time; -}; - -static struct pie_skb_cb *get_pie_cb(const struct sk_buff *skb) -{ - qdisc_cb_private_validate(skb, sizeof(struct pie_skb_cb)); - return (struct pie_skb_cb *)qdisc_skb_cb(skb)->data; -} - -static psched_time_t pie_get_enqueue_time(const struct sk_buff *skb) -{ - return get_pie_cb(skb)->enqueue_time; -} - -static void pie_set_enqueue_time(struct sk_buff *skb) -{ - get_pie_cb(skb)->enqueue_time = psched_get_time(); -} - -static void pie_vars_init(struct pie_vars *vars) -{ - vars->dq_count = DQCOUNT_INVALID; - vars->dq_tstamp = DTIME_INVALID; - vars->accu_prob = 0; - vars->avg_dq_rate = 0; - /* default of 150 ms in pschedtime */ - vars->burst_time = PSCHED_NS2TICKS(150 * NSEC_PER_MSEC); - vars->accu_prob_overflows = 0; -} - static bool drop_early(struct Qdisc *sch, u32 packet_size) { struct pie_sched_data *q = qdisc_priv(sch); -- cgit v1.2.3-71-gd317 From 805a5a23a4c4ee126e781bd14a1deb242395e817 Mon Sep 17 00:00:00 2001 From: "Mohit P. Tahiliani" Date: Wed, 22 Jan 2020 23:52:25 +0530 Subject: pie: use U64_MAX to denote (2^64 - 1) Use the U64_MAX macro to denote the constant (2^64 - 1). Signed-off-by: Mohit P. Tahiliani Signed-off-by: Leslie Monis Signed-off-by: Gautam Ramakrishnan Signed-off-by: David S. Miller --- include/net/pie.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/include/net/pie.h b/include/net/pie.h index 440213ec83eb..7ef375db5bab 100644 --- a/include/net/pie.h +++ b/include/net/pie.h @@ -10,8 +10,8 @@ #define QUEUE_THRESHOLD 16384 #define DQCOUNT_INVALID -1 -#define DTIME_INVALID 0xffffffffffffffff -#define MAX_PROB 0xffffffffffffffff +#define DTIME_INVALID U64_MAX +#define MAX_PROB U64_MAX #define PIE_SCALE 8 /* parameters used */ -- cgit v1.2.3-71-gd317 From cf4eeee5ff56180b525bfb6a204071216ca4000a Mon Sep 17 00:00:00 2001 From: "Mohit P. Tahiliani" Date: Wed, 22 Jan 2020 23:52:26 +0530 Subject: pie: rearrange macros in order of length Rearrange macros in order of length and align the values to improve readability. Signed-off-by: Mohit P. Tahiliani Signed-off-by: Leslie Monis Signed-off-by: Gautam Ramakrishnan Signed-off-by: David S. 
Miller --- include/net/pie.h | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) (limited to 'include') diff --git a/include/net/pie.h b/include/net/pie.h index 7ef375db5bab..397c7abf0879 100644 --- a/include/net/pie.h +++ b/include/net/pie.h @@ -8,11 +8,11 @@ #include #include -#define QUEUE_THRESHOLD 16384 -#define DQCOUNT_INVALID -1 -#define DTIME_INVALID U64_MAX -#define MAX_PROB U64_MAX -#define PIE_SCALE 8 +#define MAX_PROB U64_MAX +#define DTIME_INVALID U64_MAX +#define QUEUE_THRESHOLD 16384 +#define DQCOUNT_INVALID -1 +#define PIE_SCALE 8 /* parameters used */ struct pie_params { -- cgit v1.2.3-71-gd317 From 1dbfc5e071db3f5acc3c7c87a564bf57b838cf49 Mon Sep 17 00:00:00 2001 From: "Mohit P. Tahiliani" Date: Wed, 22 Jan 2020 23:52:27 +0530 Subject: pie: use u8 instead of bool in pie_vars Linux best practice recommends using u8 for true/false values in structures. Signed-off-by: Mohit P. Tahiliani Signed-off-by: Leslie Monis Signed-off-by: Gautam Ramakrishnan Signed-off-by: David S. Miller --- include/net/pie.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/include/net/pie.h b/include/net/pie.h index 397c7abf0879..f9c6a44bdb0c 100644 --- a/include/net/pie.h +++ b/include/net/pie.h @@ -21,8 +21,8 @@ struct pie_params { u32 limit; /* number of packets that can be enqueued */ u32 alpha; /* alpha and beta are between 0 and 32 */ u32 beta; /* and are used for shift relative to 1 */ - bool ecn; /* true if ecn is enabled */ - bool bytemode; /* to scale drop early prob based on pkt size */ + u8 ecn; /* true if ecn is enabled */ + u8 bytemode; /* to scale drop early prob based on pkt size */ u8 dq_rate_estimator; /* to calculate delay using Little's law */ }; -- cgit v1.2.3-71-gd317 From 2dfb1952a9a1fde0b515f58605c11902e69415bf Mon Sep 17 00:00:00 2001 From: "Mohit P. Tahiliani" Date: Wed, 22 Jan 2020 23:52:28 +0530 Subject: pie: rearrange structure members and their initializations Rearrange the members of the structure such that closely referenced members appear together and/or fit in the same cacheline. Also, change the order of their initializations to match the order in which they appear in the structure. Signed-off-by: Mohit P. Tahiliani Signed-off-by: Leslie Monis Signed-off-by: Gautam Ramakrishnan Signed-off-by: David S. Miller --- include/net/pie.h | 20 ++++++++++---------- net/sched/sch_pie.c | 2 +- 2 files changed, 11 insertions(+), 11 deletions(-) (limited to 'include') diff --git a/include/net/pie.h b/include/net/pie.h index f9c6a44bdb0c..ec0fbe98ec2f 100644 --- a/include/net/pie.h +++ b/include/net/pie.h @@ -28,13 +28,13 @@ struct pie_params { /* variables used */ struct pie_vars { - u64 prob; /* probability but scaled by u64 limit. */ - psched_time_t burst_time; psched_time_t qdelay; psched_time_t qdelay_old; - u64 dq_count; /* measured in bytes */ + psched_time_t burst_time; psched_time_t dq_tstamp; /* drain rate */ + u64 prob; /* probability but scaled by u64 limit. 
*/ u64 accu_prob; /* accumulated drop probability */ + u64 dq_count; /* measured in bytes */ u32 avg_dq_rate; /* bytes per pschedtime tick,scaled */ u32 qlen_old; /* in bytes */ u8 accu_prob_overflows; /* overflows of accu_prob */ @@ -45,8 +45,8 @@ struct pie_stats { u32 packets_in; /* total number of packets enqueued */ u32 dropped; /* packets dropped due to pie_action */ u32 overlimit; /* dropped due to lack of space in queue */ - u32 maxq; /* maximum queue size */ u32 ecn_mark; /* packets marked with ECN */ + u32 maxq; /* maximum queue size */ }; /* private skb vars */ @@ -56,11 +56,11 @@ struct pie_skb_cb { static inline void pie_params_init(struct pie_params *params) { - params->alpha = 2; - params->beta = 20; + params->target = PSCHED_NS2TICKS(15 * NSEC_PER_MSEC); /* 15 ms */ params->tupdate = usecs_to_jiffies(15 * USEC_PER_MSEC); /* 15 ms */ params->limit = 1000; /* default of 1000 packets */ - params->target = PSCHED_NS2TICKS(15 * NSEC_PER_MSEC); /* 15 ms */ + params->alpha = 2; + params->beta = 20; params->ecn = false; params->bytemode = false; params->dq_rate_estimator = false; @@ -68,12 +68,12 @@ static inline void pie_params_init(struct pie_params *params) static inline void pie_vars_init(struct pie_vars *vars) { - vars->dq_count = DQCOUNT_INVALID; + /* default of 150 ms in pschedtime */ + vars->burst_time = PSCHED_NS2TICKS(150 * NSEC_PER_MSEC); vars->dq_tstamp = DTIME_INVALID; vars->accu_prob = 0; + vars->dq_count = DQCOUNT_INVALID; vars->avg_dq_rate = 0; - /* default of 150 ms in pschedtime */ - vars->burst_time = PSCHED_NS2TICKS(150 * NSEC_PER_MSEC); vars->accu_prob_overflows = 0; } diff --git a/net/sched/sch_pie.c b/net/sched/sch_pie.c index 7197bcaa14ba..0c583cc148f3 100644 --- a/net/sched/sch_pie.c +++ b/net/sched/sch_pie.c @@ -23,8 +23,8 @@ /* private data for the Qdisc */ struct pie_sched_data { - struct pie_params params; struct pie_vars vars; + struct pie_params params; struct pie_stats stats; struct timer_list adapt_timer; struct Qdisc *sch; -- cgit v1.2.3-71-gd317 From b42a3d7c7cfff3555d7057c20dbbe57fe75d77c0 Mon Sep 17 00:00:00 2001 From: "Mohit P. Tahiliani" Date: Wed, 22 Jan 2020 23:52:29 +0530 Subject: pie: improve comments and commenting style Improve the comments along with the commenting style used to describe the members of the structures and their initial values in the init functions. Signed-off-by: Mohit P. Tahiliani Signed-off-by: Leslie Monis Signed-off-by: Gautam Ramakrishnan Signed-off-by: David S. 
Miller --- include/net/pie.h | 85 +++++++++++++++++++++++++++++++++++++------------------ 1 file changed, 58 insertions(+), 27 deletions(-) (limited to 'include') diff --git a/include/net/pie.h b/include/net/pie.h index ec0fbe98ec2f..51a1984c2dce 100644 --- a/include/net/pie.h +++ b/include/net/pie.h @@ -14,42 +14,74 @@ #define DQCOUNT_INVALID -1 #define PIE_SCALE 8 -/* parameters used */ +/** + * struct pie_params - contains pie parameters + * @target: target delay in pschedtime + * @tupdate: interval at which drop probability is calculated + * @limit: total number of packets that can be in the queue + * @alpha: parameter to control drop probability + * @beta: parameter to control drop probability + * @ecn: is ECN marking of packets enabled + * @bytemode: is drop probability scaled based on pkt size + * @dq_rate_estimator: is Little's law used for qdelay calculation + */ struct pie_params { - psched_time_t target; /* user specified target delay in pschedtime */ - u32 tupdate; /* timer frequency (in jiffies) */ - u32 limit; /* number of packets that can be enqueued */ - u32 alpha; /* alpha and beta are between 0 and 32 */ - u32 beta; /* and are used for shift relative to 1 */ - u8 ecn; /* true if ecn is enabled */ - u8 bytemode; /* to scale drop early prob based on pkt size */ - u8 dq_rate_estimator; /* to calculate delay using Little's law */ + psched_time_t target; + u32 tupdate; + u32 limit; + u32 alpha; + u32 beta; + u8 ecn; + u8 bytemode; + u8 dq_rate_estimator; }; -/* variables used */ +/** + * struct pie_vars - contains pie variables + * @qdelay: current queue delay + * @qdelay_old: queue delay in previous qdelay calculation + * @burst_time: burst time allowance + * @dq_tstamp: timestamp at which dq rate was last calculated + * @prob: drop probability + * @accu_prob: accumulated drop probability + * @dq_count: number of bytes dequeued in a measurement cycle + * @avg_dq_rate: calculated average dq rate + * @qlen_old: queue length during previous qdelay calculation + * @accu_prob_overflows: number of times accu_prob overflows + */ struct pie_vars { psched_time_t qdelay; psched_time_t qdelay_old; psched_time_t burst_time; - psched_time_t dq_tstamp; /* drain rate */ - u64 prob; /* probability but scaled by u64 limit.
*/ - u64 accu_prob; /* accumulated drop probability */ - u64 dq_count; /* measured in bytes */ - u32 avg_dq_rate; /* bytes per pschedtime tick,scaled */ - u32 qlen_old; /* in bytes */ - u8 accu_prob_overflows; /* overflows of accu_prob */ + psched_time_t dq_tstamp; + u64 prob; + u64 accu_prob; + u64 dq_count; + u32 avg_dq_rate; + u32 qlen_old; + u8 accu_prob_overflows; }; -/* statistics gathering */ +/** + * struct pie_stats - contains pie stats + * @packets_in: total number of packets enqueued + * @dropped: packets dropped due to pie action + * @overlimit: packets dropped due to lack of space in queue + * @ecn_mark: packets marked with ECN + * @maxq: maximum queue size + */ struct pie_stats { - u32 packets_in; /* total number of packets enqueued */ - u32 dropped; /* packets dropped due to pie_action */ - u32 overlimit; /* dropped due to lack of space in queue */ - u32 ecn_mark; /* packets marked with ECN */ - u32 maxq; /* maximum queue size */ + u32 packets_in; + u32 dropped; + u32 overlimit; + u32 ecn_mark; + u32 maxq; }; -/* private skb vars */ +/** + * struct pie_skb_cb - contains private skb vars + * @enqueue_time: timestamp when the packet is enqueued + */ struct pie_skb_cb { psched_time_t enqueue_time; }; @@ -58,7 +90,7 @@ static inline void pie_params_init(struct pie_params *params) { params->target = PSCHED_NS2TICKS(15 * NSEC_PER_MSEC); /* 15 ms */ params->tupdate = usecs_to_jiffies(15 * USEC_PER_MSEC); /* 15 ms */ - params->limit = 1000; /* default of 1000 packets */ + params->limit = 1000; params->alpha = 2; params->beta = 20; params->ecn = false; @@ -68,8 +100,7 @@ static inline void pie_params_init(struct pie_params *params) static inline void pie_vars_init(struct pie_vars *vars) { - /* default of 150 ms in pschedtime */ - vars->burst_time = PSCHED_NS2TICKS(150 * NSEC_PER_MSEC); + vars->burst_time = PSCHED_NS2TICKS(150 * NSEC_PER_MSEC); /* 150 ms */ vars->dq_tstamp = DTIME_INVALID; vars->accu_prob = 0; vars->dq_count = DQCOUNT_INVALID; -- cgit v1.2.3-71-gd317 From 5205ea00cda1ac23cebfb97dfccca84722d58dfe Mon Sep 17 00:00:00 2001 From: "Mohit P. Tahiliani" Date: Wed, 22 Jan 2020 23:52:32 +0530 Subject: net: sched: pie: export symbols to be reused by FQ-PIE This patch makes the drop_early(), calculate_probability() and pie_process_dequeue() functions generic enough to be used by both PIE and FQ-PIE (to be added in a future commit). The major change here is in the way the functions take in arguments. This patch exports these functions and makes FQ-PIE dependent on sch_pie. Signed-off-by: Mohit P. Tahiliani Signed-off-by: Leslie Monis Signed-off-by: Gautam Ramakrishnan Signed-off-by: David S. 
Miller --- include/net/pie.h | 9 +++ net/sched/sch_pie.c | 173 ++++++++++++++++++++++++++-------------------------- 2 files changed, 97 insertions(+), 85 deletions(-) (limited to 'include') diff --git a/include/net/pie.h b/include/net/pie.h index 51a1984c2dce..90f5db3d29e7 100644 --- a/include/net/pie.h +++ b/include/net/pie.h @@ -124,4 +124,13 @@ static inline void pie_set_enqueue_time(struct sk_buff *skb) get_pie_cb(skb)->enqueue_time = psched_get_time(); } +bool pie_drop_early(struct Qdisc *sch, struct pie_params *params, + struct pie_vars *vars, u32 qlen, u32 packet_size); + +void pie_process_dequeue(struct sk_buff *skb, struct pie_params *params, + struct pie_vars *vars, u32 qlen); + +void pie_calculate_probability(struct pie_params *params, struct pie_vars *vars, + u32 qlen); + #endif diff --git a/net/sched/sch_pie.c b/net/sched/sch_pie.c index c65164659bca..915bcdb59a9f 100644 --- a/net/sched/sch_pie.c +++ b/net/sched/sch_pie.c @@ -30,64 +30,65 @@ struct pie_sched_data { struct Qdisc *sch; }; -static bool drop_early(struct Qdisc *sch, u32 packet_size) +bool pie_drop_early(struct Qdisc *sch, struct pie_params *params, + struct pie_vars *vars, u32 qlen, u32 packet_size) { - struct pie_sched_data *q = qdisc_priv(sch); u64 rnd; - u64 local_prob = q->vars.prob; + u64 local_prob = vars->prob; u32 mtu = psched_mtu(qdisc_dev(sch)); /* If there is still burst allowance left skip random early drop */ - if (q->vars.burst_time > 0) + if (vars->burst_time > 0) return false; /* If current delay is less than half of target, and * if drop prob is low already, disable early_drop */ - if ((q->vars.qdelay < q->params.target / 2) && - (q->vars.prob < MAX_PROB / 5)) + if ((vars->qdelay < params->target / 2) && + (vars->prob < MAX_PROB / 5)) return false; - /* If we have fewer than 2 mtu-sized packets, disable drop_early, + /* If we have fewer than 2 mtu-sized packets, disable pie_drop_early, * similar to min_th in RED */ - if (sch->qstats.backlog < 2 * mtu) + if (qlen < 2 * mtu) return false; /* If bytemode is turned on, use packet size to compute new * probablity. 
Smaller packets will have lower drop prob in this case */ - if (q->params.bytemode && packet_size <= mtu) + if (params->bytemode && packet_size <= mtu) local_prob = (u64)packet_size * div_u64(local_prob, mtu); else - local_prob = q->vars.prob; + local_prob = vars->prob; if (local_prob == 0) { - q->vars.accu_prob = 0; - q->vars.accu_prob_overflows = 0; + vars->accu_prob = 0; + vars->accu_prob_overflows = 0; } - if (local_prob > MAX_PROB - q->vars.accu_prob) - q->vars.accu_prob_overflows++; + if (local_prob > MAX_PROB - vars->accu_prob) + vars->accu_prob_overflows++; - q->vars.accu_prob += local_prob; + vars->accu_prob += local_prob; - if (q->vars.accu_prob_overflows == 0 && - q->vars.accu_prob < (MAX_PROB / 100) * 85) + if (vars->accu_prob_overflows == 0 && + vars->accu_prob < (MAX_PROB / 100) * 85) return false; - if (q->vars.accu_prob_overflows == 8 && - q->vars.accu_prob >= MAX_PROB / 2) + if (vars->accu_prob_overflows == 8 && + vars->accu_prob >= MAX_PROB / 2) return true; prandom_bytes(&rnd, 8); if (rnd < local_prob) { - q->vars.accu_prob = 0; - q->vars.accu_prob_overflows = 0; + vars->accu_prob = 0; + vars->accu_prob_overflows = 0; return true; } return false; } +EXPORT_SYMBOL_GPL(pie_drop_early); static int pie_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free) @@ -100,7 +101,8 @@ static int pie_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch, goto out; } - if (!drop_early(sch, skb->len)) { + if (!pie_drop_early(sch, &q->params, &q->vars, sch->qstats.backlog, + skb->len)) { enqueue = true; } else if (q->params.ecn && (q->vars.prob <= MAX_PROB / 10) && INET_ECN_set_ce(skb)) { @@ -212,26 +214,25 @@ static int pie_change(struct Qdisc *sch, struct nlattr *opt, return 0; } -static void pie_process_dequeue(struct Qdisc *sch, struct sk_buff *skb) +void pie_process_dequeue(struct sk_buff *skb, struct pie_params *params, + struct pie_vars *vars, u32 qlen) { - struct pie_sched_data *q = qdisc_priv(sch); - int qlen = sch->qstats.backlog; /* current queue size in bytes */ psched_time_t now = psched_get_time(); u32 dtime = 0; /* If dq_rate_estimator is disabled, calculate qdelay using the * packet timestamp. */ - if (!q->params.dq_rate_estimator) { - q->vars.qdelay = now - pie_get_enqueue_time(skb); + if (!params->dq_rate_estimator) { + vars->qdelay = now - pie_get_enqueue_time(skb); - if (q->vars.dq_tstamp != DTIME_INVALID) - dtime = now - q->vars.dq_tstamp; + if (vars->dq_tstamp != DTIME_INVALID) + dtime = now - vars->dq_tstamp; - q->vars.dq_tstamp = now; + vars->dq_tstamp = now; if (qlen == 0) - q->vars.qdelay = 0; + vars->qdelay = 0; if (dtime == 0) return; @@ -243,9 +244,9 @@ static void pie_process_dequeue(struct Qdisc *sch, struct sk_buff *skb) * we have enough packets to calculate the drain rate. Save * current time as dq_tstamp and start measurement cycle. */ - if (qlen >= QUEUE_THRESHOLD && q->vars.dq_count == DQCOUNT_INVALID) { - q->vars.dq_tstamp = psched_get_time(); - q->vars.dq_count = 0; + if (qlen >= QUEUE_THRESHOLD && vars->dq_count == DQCOUNT_INVALID) { + vars->dq_tstamp = psched_get_time(); + vars->dq_count = 0; } /* Calculate the average drain rate from this value. If queue length @@ -257,25 +258,25 @@ static void pie_process_dequeue(struct Qdisc *sch, struct sk_buff *skb) * in bytes, time difference in psched_time, hence rate is in * bytes/psched_time. 
*/ - if (q->vars.dq_count != DQCOUNT_INVALID) { - q->vars.dq_count += skb->len; + if (vars->dq_count != DQCOUNT_INVALID) { + vars->dq_count += skb->len; - if (q->vars.dq_count >= QUEUE_THRESHOLD) { - u32 count = q->vars.dq_count << PIE_SCALE; + if (vars->dq_count >= QUEUE_THRESHOLD) { + u32 count = vars->dq_count << PIE_SCALE; - dtime = now - q->vars.dq_tstamp; + dtime = now - vars->dq_tstamp; if (dtime == 0) return; count = count / dtime; - if (q->vars.avg_dq_rate == 0) - q->vars.avg_dq_rate = count; + if (vars->avg_dq_rate == 0) + vars->avg_dq_rate = count; else - q->vars.avg_dq_rate = - (q->vars.avg_dq_rate - - (q->vars.avg_dq_rate >> 3)) + (count >> 3); + vars->avg_dq_rate = + (vars->avg_dq_rate - + (vars->avg_dq_rate >> 3)) + (count >> 3); /* If the queue has receded below the threshold, we hold * on to the last drain rate calculated, else we reset @@ -283,10 +284,10 @@ static void pie_process_dequeue(struct Qdisc *sch, struct sk_buff *skb) * packet is dequeued */ if (qlen < QUEUE_THRESHOLD) { - q->vars.dq_count = DQCOUNT_INVALID; + vars->dq_count = DQCOUNT_INVALID; } else { - q->vars.dq_count = 0; - q->vars.dq_tstamp = psched_get_time(); + vars->dq_count = 0; + vars->dq_tstamp = psched_get_time(); } goto burst_allowance_reduction; @@ -296,18 +297,18 @@ static void pie_process_dequeue(struct Qdisc *sch, struct sk_buff *skb) return; burst_allowance_reduction: - if (q->vars.burst_time > 0) { - if (q->vars.burst_time > dtime) - q->vars.burst_time -= dtime; + if (vars->burst_time > 0) { + if (vars->burst_time > dtime) + vars->burst_time -= dtime; else - q->vars.burst_time = 0; + vars->burst_time = 0; } } +EXPORT_SYMBOL_GPL(pie_process_dequeue); -static void calculate_probability(struct Qdisc *sch) +void pie_calculate_probability(struct pie_params *params, struct pie_vars *vars, + u32 qlen) { - struct pie_sched_data *q = qdisc_priv(sch); - u32 qlen = sch->qstats.backlog; /* queue size in bytes */ psched_time_t qdelay = 0; /* in pschedtime */ psched_time_t qdelay_old = 0; /* in pschedtime */ s64 delta = 0; /* determines the change in probability */ @@ -316,17 +317,17 @@ static void calculate_probability(struct Qdisc *sch) u32 power; bool update_prob = true; - if (q->params.dq_rate_estimator) { - qdelay_old = q->vars.qdelay; - q->vars.qdelay_old = q->vars.qdelay; + if (params->dq_rate_estimator) { + qdelay_old = vars->qdelay; + vars->qdelay_old = vars->qdelay; - if (q->vars.avg_dq_rate > 0) - qdelay = (qlen << PIE_SCALE) / q->vars.avg_dq_rate; + if (vars->avg_dq_rate > 0) + qdelay = (qlen << PIE_SCALE) / vars->avg_dq_rate; else qdelay = 0; } else { - qdelay = q->vars.qdelay; - qdelay_old = q->vars.qdelay_old; + qdelay = vars->qdelay; + qdelay_old = vars->qdelay_old; } /* If qdelay is zero and qlen is not, it means qlen is very small, @@ -342,18 +343,18 @@ static void calculate_probability(struct Qdisc *sch) * probability. alpha/beta are updated locally below by scaling down * by 16 to come to 0-2 range. */ - alpha = ((u64)q->params.alpha * (MAX_PROB / PSCHED_TICKS_PER_SEC)) >> 4; - beta = ((u64)q->params.beta * (MAX_PROB / PSCHED_TICKS_PER_SEC)) >> 4; + alpha = ((u64)params->alpha * (MAX_PROB / PSCHED_TICKS_PER_SEC)) >> 4; + beta = ((u64)params->beta * (MAX_PROB / PSCHED_TICKS_PER_SEC)) >> 4; /* We scale alpha and beta differently depending on how heavy the * congestion is. Please see RFC 8033 for details. 
*/ - if (q->vars.prob < MAX_PROB / 10) { + if (vars->prob < MAX_PROB / 10) { alpha >>= 1; beta >>= 1; power = 100; - while (q->vars.prob < div_u64(MAX_PROB, power) && + while (vars->prob < div_u64(MAX_PROB, power) && power <= 1000000) { alpha >>= 2; beta >>= 2; @@ -362,14 +363,14 @@ static void calculate_probability(struct Qdisc *sch) } /* alpha and beta should be between 0 and 32, in multiples of 1/16 */ - delta += alpha * (u64)(qdelay - q->params.target); + delta += alpha * (u64)(qdelay - params->target); delta += beta * (u64)(qdelay - qdelay_old); - oldprob = q->vars.prob; + oldprob = vars->prob; /* to ensure we increase probability in steps of no more than 2% */ if (delta > (s64)(MAX_PROB / (100 / 2)) && - q->vars.prob >= MAX_PROB / 10) + vars->prob >= MAX_PROB / 10) delta = (MAX_PROB / 100) * 2; /* Non-linear drop: @@ -380,12 +381,12 @@ static void calculate_probability(struct Qdisc *sch) if (qdelay > (PSCHED_NS2TICKS(250 * NSEC_PER_MSEC))) delta += MAX_PROB / (100 / 2); - q->vars.prob += delta; + vars->prob += delta; if (delta > 0) { /* prevent overflow */ - if (q->vars.prob < oldprob) { - q->vars.prob = MAX_PROB; + if (vars->prob < oldprob) { + vars->prob = MAX_PROB; /* Prevent normalization error. If probability is at * maximum value already, we normalize it here, and * skip the check to do a non-linear drop in the next @@ -395,8 +396,8 @@ static void calculate_probability(struct Qdisc *sch) } } else { /* prevent underflow */ - if (q->vars.prob > oldprob) - q->vars.prob = 0; + if (vars->prob > oldprob) + vars->prob = 0; } /* Non-linear drop in probability: Reduce drop probability quickly if @@ -405,10 +406,10 @@ static void calculate_probability(struct Qdisc *sch) if (qdelay == 0 && qdelay_old == 0 && update_prob) /* Reduce drop probability to 98.4% */ - q->vars.prob -= q->vars.prob / 64u; + vars->prob -= vars->prob / 64; - q->vars.qdelay = qdelay; - q->vars.qlen_old = qlen; + vars->qdelay = qdelay; + vars->qlen_old = qlen; /* We restart the measurement cycle if the following conditions are met * 1. If the delay has been low for 2 consecutive Tupdate periods @@ -416,16 +417,17 @@ static void calculate_probability(struct Qdisc *sch) * 3. If average dq_rate_estimator is enabled, we have atleast one * estimate for the avg_dq_rate ie., is a non-zero value */ - if ((q->vars.qdelay < q->params.target / 2) && - (q->vars.qdelay_old < q->params.target / 2) && - q->vars.prob == 0 && - (!q->params.dq_rate_estimator || q->vars.avg_dq_rate > 0)) { - pie_vars_init(&q->vars); + if ((vars->qdelay < params->target / 2) && + (vars->qdelay_old < params->target / 2) && + vars->prob == 0 && + (!params->dq_rate_estimator || vars->avg_dq_rate > 0)) { + pie_vars_init(vars); } - if (!q->params.dq_rate_estimator) - q->vars.qdelay_old = qdelay; + if (!params->dq_rate_estimator) + vars->qdelay_old = qdelay; } +EXPORT_SYMBOL_GPL(pie_calculate_probability); static void pie_timer(struct timer_list *t) { @@ -434,7 +436,7 @@ static void pie_timer(struct timer_list *t) spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch)); spin_lock(root_lock); - calculate_probability(sch); + pie_calculate_probability(&q->params, &q->vars, sch->qstats.backlog); /* reset the timer to fire after 'tupdate'. tupdate is in jiffies. 
*/ if (q->params.tupdate) @@ -523,12 +525,13 @@ static int pie_dump_stats(struct Qdisc *sch, struct gnet_dump *d) static struct sk_buff *pie_qdisc_dequeue(struct Qdisc *sch) { + struct pie_sched_data *q = qdisc_priv(sch); struct sk_buff *skb = qdisc_dequeue_head(sch); if (!skb) return NULL; - pie_process_dequeue(sch, skb); + pie_process_dequeue(skb, &q->params, &q->vars, sch->qstats.backlog); return skb; } -- cgit v1.2.3-71-gd317 From ec97ecf1ebe485a17cd8395a5f35e6b80b57665a Mon Sep 17 00:00:00 2001 From: "Mohit P. Tahiliani" Date: Wed, 22 Jan 2020 23:52:33 +0530 Subject: net: sched: add Flow Queue PIE packet scheduler Principles: - Packets are classified on flows. - This is a Stochastic model (as we use a hash, several flows might be hashed to the same slot) - Each flow has a PIE managed queue. - Flows are linked onto two (Round Robin) lists, so that new flows have priority on old ones. - For a given flow, packets are not reordered. - Drops during enqueue only. - ECN capability is off by default. - ECN threshold (if ECN is enabled) is at 10% by default. - Uses timestamps to calculate queue delay by default. Usage: tc qdisc ... fq_pie [ limit PACKETS ] [ flows NUMBER ] [ target TIME ] [ tupdate TIME ] [ alpha NUMBER ] [ beta NUMBER ] [ quantum BYTES ] [ memory_limit BYTES ] [ ecnprob PERCENTAGE ] [ [no]ecn ] [ [no]bytemode ] [ [no_]dq_rate_estimator ] defaults: limit: 10240 packets, flows: 1024 target: 15 ms, tupdate: 15 ms (in jiffies) alpha: 1/8, beta : 5/4 quantum: device MTU, memory_limit: 32 Mb ecnprob: 10%, ecn: off bytemode: off, dq_rate_estimator: off Signed-off-by: Mohit P. Tahiliani Signed-off-by: Sachin D. Patil Signed-off-by: V. Saicharan Signed-off-by: Mohit Bhasi Signed-off-by: Leslie Monis Signed-off-by: Gautam Ramakrishnan Signed-off-by: David S. 
Miller --- include/net/pie.h | 2 + include/uapi/linux/pkt_sched.h | 31 +++ net/sched/Kconfig | 13 + net/sched/Makefile | 1 + net/sched/sch_fq_pie.c | 562 +++++++++++++++++++++++++++++++++++++++++ 5 files changed, 609 insertions(+) create mode 100644 net/sched/sch_fq_pie.c (limited to 'include') diff --git a/include/net/pie.h b/include/net/pie.h index 90f5db3d29e7..fd5a37cb7993 100644 --- a/include/net/pie.h +++ b/include/net/pie.h @@ -81,9 +81,11 @@ struct pie_stats { /** * struct pie_skb_cb - contains private skb vars * @enqueue_time: timestamp when the packet is enqueued + * @mem_usage: size of the skb during enqueue */ struct pie_skb_cb { psched_time_t enqueue_time; + u32 mem_usage; }; static inline void pie_params_init(struct pie_params *params) diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h index bf5a5b1dfb0b..bbe791b24168 100644 --- a/include/uapi/linux/pkt_sched.h +++ b/include/uapi/linux/pkt_sched.h @@ -971,6 +971,37 @@ struct tc_pie_xstats { __u32 ecn_mark; /* packets marked with ecn*/ }; +/* FQ PIE */ +enum { + TCA_FQ_PIE_UNSPEC, + TCA_FQ_PIE_LIMIT, + TCA_FQ_PIE_FLOWS, + TCA_FQ_PIE_TARGET, + TCA_FQ_PIE_TUPDATE, + TCA_FQ_PIE_ALPHA, + TCA_FQ_PIE_BETA, + TCA_FQ_PIE_QUANTUM, + TCA_FQ_PIE_MEMORY_LIMIT, + TCA_FQ_PIE_ECN_PROB, + TCA_FQ_PIE_ECN, + TCA_FQ_PIE_BYTEMODE, + TCA_FQ_PIE_DQ_RATE_ESTIMATOR, + __TCA_FQ_PIE_MAX +}; +#define TCA_FQ_PIE_MAX (__TCA_FQ_PIE_MAX - 1) + +struct tc_fq_pie_xstats { + __u32 packets_in; /* total number of packets enqueued */ + __u32 dropped; /* packets dropped due to fq_pie_action */ + __u32 overlimit; /* dropped due to lack of space in queue */ + __u32 overmemory; /* dropped due to lack of memory in queue */ + __u32 ecn_mark; /* packets marked with ecn */ + __u32 new_flow_count; /* count of new flows created by packets */ + __u32 new_flows_len; /* count of flows in new list */ + __u32 old_flows_len; /* count of flows in old list */ + __u32 memory_usage; /* total memory across all queues */ +}; + /* CBS */ struct tc_cbs_qopt { __u8 offload; diff --git a/net/sched/Kconfig b/net/sched/Kconfig index b1e7ec726958..edde0e519438 100644 --- a/net/sched/Kconfig +++ b/net/sched/Kconfig @@ -366,6 +366,19 @@ config NET_SCH_PIE If unsure, say N. +config NET_SCH_FQ_PIE + depends on NET_SCH_PIE + tristate "Flow Queue Proportional Integral controller Enhanced (FQ-PIE)" + help + Say Y here if you want to use the Flow Queue Proportional Integral + controller Enhanced (FQ-PIE) packet scheduling algorithm. + For more information, please see https://tools.ietf.org/html/rfc8033 + + To compile this driver as a module, choose M here: the module + will be called sch_fq_pie. + + If unsure, say N. + config NET_SCH_INGRESS tristate "Ingress/classifier-action Qdisc" depends on NET_CLS_ACT diff --git a/net/sched/Makefile b/net/sched/Makefile index bc8856b865ff..31c367a6cd09 100644 --- a/net/sched/Makefile +++ b/net/sched/Makefile @@ -59,6 +59,7 @@ obj-$(CONFIG_NET_SCH_CAKE) += sch_cake.o obj-$(CONFIG_NET_SCH_FQ) += sch_fq.o obj-$(CONFIG_NET_SCH_HHF) += sch_hhf.o obj-$(CONFIG_NET_SCH_PIE) += sch_pie.o +obj-$(CONFIG_NET_SCH_FQ_PIE) += sch_fq_pie.o obj-$(CONFIG_NET_SCH_CBS) += sch_cbs.o obj-$(CONFIG_NET_SCH_ETF) += sch_etf.o obj-$(CONFIG_NET_SCH_TAPRIO) += sch_taprio.o diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c new file mode 100644 index 000000000000..bbd0dea6b6b9 --- /dev/null +++ b/net/sched/sch_fq_pie.c @@ -0,0 +1,562 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Flow Queue PIE discipline + * + * Copyright (C) 2019 Mohit P. 
Tahiliani + * Copyright (C) 2019 Sachin D. Patil + * Copyright (C) 2019 V. Saicharan + * Copyright (C) 2019 Mohit Bhasi + * Copyright (C) 2019 Leslie Monis + * Copyright (C) 2019 Gautam Ramakrishnan + */ + +#include +#include +#include +#include +#include + +/* Flow Queue PIE + * + * Principles: + * - Packets are classified on flows. + * - This is a Stochastic model (as we use a hash, several flows might + * be hashed to the same slot) + * - Each flow has a PIE managed queue. + * - Flows are linked onto two (Round Robin) lists, + * so that new flows have priority on old ones. + * - For a given flow, packets are not reordered. + * - Drops during enqueue only. + * - ECN capability is off by default. + * - ECN threshold (if ECN is enabled) is at 10% by default. + * - Uses timestamps to calculate queue delay by default. + */ + +/** + * struct fq_pie_flow - contains data for each flow + * @vars: pie vars associated with the flow + * @deficit: number of remaining byte credits + * @backlog: size of data in the flow + * @qlen: number of packets in the flow + * @flowchain: flowchain for the flow + * @head: first packet in the flow + * @tail: last packet in the flow + */ +struct fq_pie_flow { + struct pie_vars vars; + s32 deficit; + u32 backlog; + u32 qlen; + struct list_head flowchain; + struct sk_buff *head; + struct sk_buff *tail; +}; + +struct fq_pie_sched_data { + struct tcf_proto __rcu *filter_list; /* optional external classifier */ + struct tcf_block *block; + struct fq_pie_flow *flows; + struct Qdisc *sch; + struct list_head old_flows; + struct list_head new_flows; + struct pie_params p_params; + u32 ecn_prob; + u32 flows_cnt; + u32 quantum; + u32 memory_limit; + u32 new_flow_count; + u32 memory_usage; + u32 overmemory; + struct pie_stats stats; + struct timer_list adapt_timer; +}; + +static unsigned int fq_pie_hash(const struct fq_pie_sched_data *q, + struct sk_buff *skb) +{ + return reciprocal_scale(skb_get_hash(skb), q->flows_cnt); +} + +static unsigned int fq_pie_classify(struct sk_buff *skb, struct Qdisc *sch, + int *qerr) +{ + struct fq_pie_sched_data *q = qdisc_priv(sch); + struct tcf_proto *filter; + struct tcf_result res; + int result; + + if (TC_H_MAJ(skb->priority) == sch->handle && + TC_H_MIN(skb->priority) > 0 && + TC_H_MIN(skb->priority) <= q->flows_cnt) + return TC_H_MIN(skb->priority); + + filter = rcu_dereference_bh(q->filter_list); + if (!filter) + return fq_pie_hash(q, skb) + 1; + + *qerr = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS; + result = tcf_classify(skb, filter, &res, false); + if (result >= 0) { +#ifdef CONFIG_NET_CLS_ACT + switch (result) { + case TC_ACT_STOLEN: + case TC_ACT_QUEUED: + case TC_ACT_TRAP: + *qerr = NET_XMIT_SUCCESS | __NET_XMIT_STOLEN; + /* fall through */ + case TC_ACT_SHOT: + return 0; + } +#endif + if (TC_H_MIN(res.classid) <= q->flows_cnt) + return TC_H_MIN(res.classid); + } + return 0; +} + +/* add skb to flow queue (tail add) */ +static inline void flow_queue_add(struct fq_pie_flow *flow, + struct sk_buff *skb) +{ + if (!flow->head) + flow->head = skb; + else + flow->tail->next = skb; + flow->tail = skb; + skb->next = NULL; +} + +static int fq_pie_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch, + struct sk_buff **to_free) +{ + struct fq_pie_sched_data *q = qdisc_priv(sch); + struct fq_pie_flow *sel_flow; + int uninitialized_var(ret); + u8 memory_limited = false; + u8 enqueue = false; + u32 pkt_len; + u32 idx; + + /* Classifies packet into corresponding flow */ + idx = fq_pie_classify(skb, sch, &ret); + sel_flow = &q->flows[idx]; + + /* 
Checks whether adding a new packet would exceed memory limit */ + get_pie_cb(skb)->mem_usage = skb->truesize; + memory_limited = q->memory_usage > q->memory_limit + skb->truesize; + + /* Checks if the qdisc is full */ + if (unlikely(qdisc_qlen(sch) >= sch->limit)) { + q->stats.overlimit++; + goto out; + } else if (unlikely(memory_limited)) { + q->overmemory++; + } + + if (!pie_drop_early(sch, &q->p_params, &sel_flow->vars, + sel_flow->backlog, skb->len)) { + enqueue = true; + } else if (q->p_params.ecn && + sel_flow->vars.prob <= (MAX_PROB / 100) * q->ecn_prob && + INET_ECN_set_ce(skb)) { + /* If packet is ecn capable, mark it if drop probability + * is lower than the parameter ecn_prob, else drop it. + */ + q->stats.ecn_mark++; + enqueue = true; + } + if (enqueue) { + /* Set enqueue time only when dq_rate_estimator is disabled. */ + if (!q->p_params.dq_rate_estimator) + pie_set_enqueue_time(skb); + + pkt_len = qdisc_pkt_len(skb); + q->stats.packets_in++; + q->memory_usage += skb->truesize; + sch->qstats.backlog += pkt_len; + sch->q.qlen++; + flow_queue_add(sel_flow, skb); + if (list_empty(&sel_flow->flowchain)) { + list_add_tail(&sel_flow->flowchain, &q->new_flows); + q->new_flow_count++; + sel_flow->deficit = q->quantum; + sel_flow->qlen = 0; + sel_flow->backlog = 0; + } + sel_flow->qlen++; + sel_flow->backlog += pkt_len; + return NET_XMIT_SUCCESS; + } +out: + q->stats.dropped++; + sel_flow->vars.accu_prob = 0; + sel_flow->vars.accu_prob_overflows = 0; + __qdisc_drop(skb, to_free); + qdisc_qstats_drop(sch); + return NET_XMIT_CN; +} + +static const struct nla_policy fq_pie_policy[TCA_FQ_PIE_MAX + 1] = { + [TCA_FQ_PIE_LIMIT] = {.type = NLA_U32}, + [TCA_FQ_PIE_FLOWS] = {.type = NLA_U32}, + [TCA_FQ_PIE_TARGET] = {.type = NLA_U32}, + [TCA_FQ_PIE_TUPDATE] = {.type = NLA_U32}, + [TCA_FQ_PIE_ALPHA] = {.type = NLA_U32}, + [TCA_FQ_PIE_BETA] = {.type = NLA_U32}, + [TCA_FQ_PIE_QUANTUM] = {.type = NLA_U32}, + [TCA_FQ_PIE_MEMORY_LIMIT] = {.type = NLA_U32}, + [TCA_FQ_PIE_ECN_PROB] = {.type = NLA_U32}, + [TCA_FQ_PIE_ECN] = {.type = NLA_U32}, + [TCA_FQ_PIE_BYTEMODE] = {.type = NLA_U32}, + [TCA_FQ_PIE_DQ_RATE_ESTIMATOR] = {.type = NLA_U32}, +}; + +static inline struct sk_buff *dequeue_head(struct fq_pie_flow *flow) +{ + struct sk_buff *skb = flow->head; + + flow->head = skb->next; + skb->next = NULL; + return skb; +} + +static struct sk_buff *fq_pie_qdisc_dequeue(struct Qdisc *sch) +{ + struct fq_pie_sched_data *q = qdisc_priv(sch); + struct sk_buff *skb = NULL; + struct fq_pie_flow *flow; + struct list_head *head; + u32 pkt_len; + +begin: + head = &q->new_flows; + if (list_empty(head)) { + head = &q->old_flows; + if (list_empty(head)) + return NULL; + } + + flow = list_first_entry(head, struct fq_pie_flow, flowchain); + /* Flow has exhausted all its credits */ + if (flow->deficit <= 0) { + flow->deficit += q->quantum; + list_move_tail(&flow->flowchain, &q->old_flows); + goto begin; + } + + if (flow->head) { + skb = dequeue_head(flow); + pkt_len = qdisc_pkt_len(skb); + sch->qstats.backlog -= pkt_len; + sch->q.qlen--; + qdisc_bstats_update(sch, skb); + } + + if (!skb) { + /* force a pass through old_flows to prevent starvation */ + if (head == &q->new_flows && !list_empty(&q->old_flows)) + list_move_tail(&flow->flowchain, &q->old_flows); + else + list_del_init(&flow->flowchain); + goto begin; + } + + flow->qlen--; + flow->deficit -= pkt_len; + flow->backlog -= pkt_len; + q->memory_usage -= get_pie_cb(skb)->mem_usage; + pie_process_dequeue(skb, &q->p_params, &flow->vars, flow->backlog); + return skb; +} + 
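The dequeue path above is a deficit round robin (DRR) rotation in the fq_codel style: flows on the new list are served first, and a flow whose byte credits are exhausted is topped up with one quantum and rotated to the tail of the old list instead of being served. A minimal userspace sketch of just that credit rule, with invented toy names (toy_flow, toy_drr_serve) standing in for fq_pie_flow and the qdisc machinery:

#include <stdbool.h>

struct toy_flow {
	int deficit;	/* remaining byte credits this round */
	int backlog;	/* bytes still queued in the flow */
};

/* Returns true when the flow may send a pkt_len-byte packet now; returns
 * false after refilling when its credits were exhausted, in which case the
 * caller rotates the flow to the old list and retries -- mirroring the
 * 'flow->deficit <= 0' branch of fq_pie_qdisc_dequeue() above.
 */
static bool toy_drr_serve(struct toy_flow *f, int quantum, int pkt_len)
{
	if (f->deficit <= 0) {
		f->deficit += quantum;
		return false;
	}
	f->deficit -= pkt_len;
	f->backlog -= pkt_len;
	return true;
}
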
+static int fq_pie_change(struct Qdisc *sch, struct nlattr *opt, + struct netlink_ext_ack *extack) +{ + struct fq_pie_sched_data *q = qdisc_priv(sch); + struct nlattr *tb[TCA_FQ_PIE_MAX + 1]; + unsigned int len_dropped = 0; + unsigned int num_dropped = 0; + int err; + + if (!opt) + return -EINVAL; + + err = nla_parse_nested(tb, TCA_FQ_PIE_MAX, opt, fq_pie_policy, extack); + if (err < 0) + return err; + + sch_tree_lock(sch); + if (tb[TCA_FQ_PIE_LIMIT]) { + u32 limit = nla_get_u32(tb[TCA_FQ_PIE_LIMIT]); + + q->p_params.limit = limit; + sch->limit = limit; + } + if (tb[TCA_FQ_PIE_FLOWS]) { + if (q->flows) { + NL_SET_ERR_MSG_MOD(extack, + "Number of flows cannot be changed"); + goto flow_error; + } + q->flows_cnt = nla_get_u32(tb[TCA_FQ_PIE_FLOWS]); + if (!q->flows_cnt || q->flows_cnt > 65536) { + NL_SET_ERR_MSG_MOD(extack, + "Number of flows must range in [1..65536]"); + goto flow_error; + } + } + + /* convert from microseconds to pschedtime */ + if (tb[TCA_FQ_PIE_TARGET]) { + /* target is in us */ + u32 target = nla_get_u32(tb[TCA_FQ_PIE_TARGET]); + + /* convert to pschedtime */ + q->p_params.target = + PSCHED_NS2TICKS((u64)target * NSEC_PER_USEC); + } + + /* tupdate is in jiffies */ + if (tb[TCA_FQ_PIE_TUPDATE]) + q->p_params.tupdate = + usecs_to_jiffies(nla_get_u32(tb[TCA_FQ_PIE_TUPDATE])); + + if (tb[TCA_FQ_PIE_ALPHA]) + q->p_params.alpha = nla_get_u32(tb[TCA_FQ_PIE_ALPHA]); + + if (tb[TCA_FQ_PIE_BETA]) + q->p_params.beta = nla_get_u32(tb[TCA_FQ_PIE_BETA]); + + if (tb[TCA_FQ_PIE_QUANTUM]) + q->quantum = nla_get_u32(tb[TCA_FQ_PIE_QUANTUM]); + + if (tb[TCA_FQ_PIE_MEMORY_LIMIT]) + q->memory_limit = nla_get_u32(tb[TCA_FQ_PIE_MEMORY_LIMIT]); + + if (tb[TCA_FQ_PIE_ECN_PROB]) + q->ecn_prob = nla_get_u32(tb[TCA_FQ_PIE_ECN_PROB]); + + if (tb[TCA_FQ_PIE_ECN]) + q->p_params.ecn = nla_get_u32(tb[TCA_FQ_PIE_ECN]); + + if (tb[TCA_FQ_PIE_BYTEMODE]) + q->p_params.bytemode = nla_get_u32(tb[TCA_FQ_PIE_BYTEMODE]); + + if (tb[TCA_FQ_PIE_DQ_RATE_ESTIMATOR]) + q->p_params.dq_rate_estimator = + nla_get_u32(tb[TCA_FQ_PIE_DQ_RATE_ESTIMATOR]); + + /* Drop excess packets if new limit is lower; account the skb before + * freeing it */ + while (sch->q.qlen > sch->limit) { + struct sk_buff *skb = fq_pie_qdisc_dequeue(sch); + + len_dropped += qdisc_pkt_len(skb); + num_dropped += 1; + kfree_skb(skb); + } + qdisc_tree_reduce_backlog(sch, num_dropped, len_dropped); + + sch_tree_unlock(sch); + return 0; + +flow_error: + sch_tree_unlock(sch); + return -EINVAL; +} + +static void fq_pie_timer(struct timer_list *t) +{ + struct fq_pie_sched_data *q = from_timer(q, t, adapt_timer); + struct Qdisc *sch = q->sch; + spinlock_t *root_lock; /* to lock qdisc for probability calculations */ + u16 idx; + + root_lock = qdisc_lock(qdisc_root_sleeping(sch)); + spin_lock(root_lock); + + for (idx = 0; idx < q->flows_cnt; idx++) + pie_calculate_probability(&q->p_params, &q->flows[idx].vars, + q->flows[idx].backlog); + + /* reset the timer to fire after 'tupdate' jiffies. 
*/ + if (q->p_params.tupdate) + mod_timer(&q->adapt_timer, jiffies + q->p_params.tupdate); + + spin_unlock(root_lock); +} + +static int fq_pie_init(struct Qdisc *sch, struct nlattr *opt, + struct netlink_ext_ack *extack) +{ + struct fq_pie_sched_data *q = qdisc_priv(sch); + int err; + u16 idx; + + pie_params_init(&q->p_params); + sch->limit = 10 * 1024; + q->p_params.limit = sch->limit; + q->quantum = psched_mtu(qdisc_dev(sch)); + q->sch = sch; + q->ecn_prob = 10; + q->flows_cnt = 1024; + q->memory_limit = SZ_32M; + + INIT_LIST_HEAD(&q->new_flows); + INIT_LIST_HEAD(&q->old_flows); + + if (opt) { + err = fq_pie_change(sch, opt, extack); + + if (err) + return err; + } + + err = tcf_block_get(&q->block, &q->filter_list, sch, extack); + if (err) + goto init_failure; + + q->flows = kvcalloc(q->flows_cnt, sizeof(struct fq_pie_flow), + GFP_KERNEL); + if (!q->flows) { + err = -ENOMEM; + goto init_failure; + } + for (idx = 0; idx < q->flows_cnt; idx++) { + struct fq_pie_flow *flow = q->flows + idx; + + INIT_LIST_HEAD(&flow->flowchain); + pie_vars_init(&flow->vars); + } + + timer_setup(&q->adapt_timer, fq_pie_timer, 0); + mod_timer(&q->adapt_timer, jiffies + HZ / 2); + + return 0; + +init_failure: + q->flows_cnt = 0; + + return err; +} + +static int fq_pie_dump(struct Qdisc *sch, struct sk_buff *skb) +{ + struct fq_pie_sched_data *q = qdisc_priv(sch); + struct nlattr *opts; + + opts = nla_nest_start(skb, TCA_OPTIONS); + if (!opts) + return -EMSGSIZE; + + /* convert target from pschedtime to us */ + if (nla_put_u32(skb, TCA_FQ_PIE_LIMIT, sch->limit) || + nla_put_u32(skb, TCA_FQ_PIE_FLOWS, q->flows_cnt) || + nla_put_u32(skb, TCA_FQ_PIE_TARGET, + ((u32)PSCHED_TICKS2NS(q->p_params.target)) / + NSEC_PER_USEC) || + nla_put_u32(skb, TCA_FQ_PIE_TUPDATE, + jiffies_to_usecs(q->p_params.tupdate)) || + nla_put_u32(skb, TCA_FQ_PIE_ALPHA, q->p_params.alpha) || + nla_put_u32(skb, TCA_FQ_PIE_BETA, q->p_params.beta) || + nla_put_u32(skb, TCA_FQ_PIE_QUANTUM, q->quantum) || + nla_put_u32(skb, TCA_FQ_PIE_MEMORY_LIMIT, q->memory_limit) || + nla_put_u32(skb, TCA_FQ_PIE_ECN_PROB, q->ecn_prob) || + nla_put_u32(skb, TCA_FQ_PIE_ECN, q->p_params.ecn) || + nla_put_u32(skb, TCA_FQ_PIE_BYTEMODE, q->p_params.bytemode) || + nla_put_u32(skb, TCA_FQ_PIE_DQ_RATE_ESTIMATOR, + q->p_params.dq_rate_estimator)) + goto nla_put_failure; + + return nla_nest_end(skb, opts); + +nla_put_failure: + nla_nest_cancel(skb, opts); + return -EMSGSIZE; +} + +static int fq_pie_dump_stats(struct Qdisc *sch, struct gnet_dump *d) +{ + struct fq_pie_sched_data *q = qdisc_priv(sch); + struct tc_fq_pie_xstats st = { + .packets_in = q->stats.packets_in, + .overlimit = q->stats.overlimit, + .overmemory = q->overmemory, + .dropped = q->stats.dropped, + .ecn_mark = q->stats.ecn_mark, + .new_flow_count = q->new_flow_count, + .memory_usage = q->memory_usage, + }; + struct list_head *pos; + + sch_tree_lock(sch); + list_for_each(pos, &q->new_flows) + st.new_flows_len++; + + list_for_each(pos, &q->old_flows) + st.old_flows_len++; + sch_tree_unlock(sch); + + return gnet_stats_copy_app(d, &st, sizeof(st)); +} + +static void fq_pie_reset(struct Qdisc *sch) +{ + struct fq_pie_sched_data *q = qdisc_priv(sch); + u16 idx; + + INIT_LIST_HEAD(&q->new_flows); + INIT_LIST_HEAD(&q->old_flows); + for (idx = 0; idx < q->flows_cnt; idx++) { + struct fq_pie_flow *flow = q->flows + idx; + + /* Removes all packets from flow */ + rtnl_kfree_skbs(flow->head, flow->tail); + flow->head = NULL; + + INIT_LIST_HEAD(&flow->flowchain); + pie_vars_init(&flow->vars); + } + + sch->q.qlen = 0; + 
sch->qstats.backlog = 0; +} + +static void fq_pie_destroy(struct Qdisc *sch) +{ + struct fq_pie_sched_data *q = qdisc_priv(sch); + + tcf_block_put(q->block); + del_timer_sync(&q->adapt_timer); + kvfree(q->flows); +} + +static struct Qdisc_ops fq_pie_qdisc_ops __read_mostly = { + .id = "fq_pie", + .priv_size = sizeof(struct fq_pie_sched_data), + .enqueue = fq_pie_qdisc_enqueue, + .dequeue = fq_pie_qdisc_dequeue, + .peek = qdisc_peek_dequeued, + .init = fq_pie_init, + .destroy = fq_pie_destroy, + .reset = fq_pie_reset, + .change = fq_pie_change, + .dump = fq_pie_dump, + .dump_stats = fq_pie_dump_stats, + .owner = THIS_MODULE, +}; + +static int __init fq_pie_module_init(void) +{ + return register_qdisc(&fq_pie_qdisc_ops); +} + +static void __exit fq_pie_module_exit(void) +{ + unregister_qdisc(&fq_pie_qdisc_ops); +} + +module_init(fq_pie_module_init); +module_exit(fq_pie_module_exit); + +MODULE_DESCRIPTION("Flow Queue Proportional Integral controller Enhanced (FQ-PIE)"); +MODULE_AUTHOR("Mohit P. Tahiliani"); +MODULE_LICENSE("GPL"); -- cgit v1.2.3-71-gd317 From a5d29ae226812ae620f58a0726525d8af9b1d751 Mon Sep 17 00:00:00 2001 From: Nikolay Aleksandrov Date: Fri, 24 Jan 2020 13:40:21 +0200 Subject: net: bridge: vlan: add basic option setting support This patch adds support for option modification of single vlans and ranges. It allows to only modify options, i.e. skip create/delete by using the BRIDGE_VLAN_INFO_ONLY_OPTS flag. When working with a range option changes we try to pack the notifications as much as possible. v2: do full port (all vlans) notification only when creating/deleting vlans for compatibility, rework the range detection when changing options, add more verbose extack errors and check if a vlan should be used (br_vlan_should_use checks) Signed-off-by: Nikolay Aleksandrov Signed-off-by: David S. 
Miller --- include/uapi/linux/if_bridge.h | 1 + net/bridge/br_private.h | 8 ++++ net/bridge/br_vlan.c | 41 ++++++++++++++++---- net/bridge/br_vlan_options.c | 87 ++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 130 insertions(+), 7 deletions(-) (limited to 'include') diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index ac38f0b674b8..c8b87ad52c5b 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -130,6 +130,7 @@ enum { #define BRIDGE_VLAN_INFO_RANGE_BEGIN (1<<3) /* VLAN is start of vlan range */ #define BRIDGE_VLAN_INFO_RANGE_END (1<<4) /* VLAN is end of vlan range */ #define BRIDGE_VLAN_INFO_BRENTRY (1<<5) /* Global bridge VLAN entry */ +#define BRIDGE_VLAN_INFO_ONLY_OPTS (1<<6) /* Skip create/delete/flags */ struct bridge_vlan_info { __u16 flags; diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 403df71d2cfa..084904ee22a8 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -976,6 +976,8 @@ void br_vlan_notify(const struct net_bridge *br, const struct net_bridge_port *p, u16 vid, u16 vid_range, int cmd); +bool br_vlan_can_enter_range(const struct net_bridge_vlan *v_curr, + const struct net_bridge_vlan *range_end); static inline struct net_bridge_vlan_group *br_vlan_group( const struct net_bridge *br) @@ -1197,6 +1199,12 @@ bool br_vlan_opts_eq(const struct net_bridge_vlan *v1, const struct net_bridge_vlan *v2); bool br_vlan_opts_fill(struct sk_buff *skb, const struct net_bridge_vlan *v); size_t br_vlan_opts_nl_size(void); +int br_vlan_process_options(const struct net_bridge *br, + const struct net_bridge_port *p, + struct net_bridge_vlan *range_start, + struct net_bridge_vlan *range_end, + struct nlattr **tb, + struct netlink_ext_ack *extack); #endif struct nf_br_ops { diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c index 75ec3da92b0b..d747756eac63 100644 --- a/net/bridge/br_vlan.c +++ b/net/bridge/br_vlan.c @@ -1667,8 +1667,8 @@ out_kfree: } /* check if v_curr can enter a range ending in range_end */ -static bool br_vlan_can_enter_range(const struct net_bridge_vlan *v_curr, - const struct net_bridge_vlan *range_end) +bool br_vlan_can_enter_range(const struct net_bridge_vlan *v_curr, + const struct net_bridge_vlan *range_end) { return v_curr->vid - range_end->vid == 1 && range_end->flags == v_curr->flags && @@ -1824,11 +1824,11 @@ static int br_vlan_rtm_process_one(struct net_device *dev, { struct bridge_vlan_info *vinfo, vrange_end, *vinfo_last = NULL; struct nlattr *tb[BRIDGE_VLANDB_ENTRY_MAX + 1]; + bool changed = false, skip_processing = false; struct net_bridge_vlan_group *vg; struct net_bridge_port *p = NULL; int err = 0, cmdmap = 0; struct net_bridge *br; - bool changed = false; if (netif_is_bridge_master(dev)) { br = netdev_priv(dev); @@ -1882,16 +1882,43 @@ static int br_vlan_rtm_process_one(struct net_device *dev, switch (cmd) { case RTM_NEWVLAN: cmdmap = RTM_SETLINK; + skip_processing = !!(vinfo->flags & BRIDGE_VLAN_INFO_ONLY_OPTS); break; case RTM_DELVLAN: cmdmap = RTM_DELLINK; break; } - err = br_process_vlan_info(br, p, cmdmap, vinfo, &vinfo_last, &changed, - extack); - if (changed) - br_ifinfo_notify(cmdmap, br, p); + if (!skip_processing) { + struct bridge_vlan_info *tmp_last = vinfo_last; + + /* br_process_vlan_info may overwrite vinfo_last */ + err = br_process_vlan_info(br, p, cmdmap, vinfo, &tmp_last, + &changed, extack); + + /* notify first if anything changed */ + if (changed) + br_ifinfo_notify(cmdmap, br, p); + + if (err) + return err; + } + + /* 
deal with options */ + if (cmd == RTM_NEWVLAN) { + struct net_bridge_vlan *range_start, *range_end; + + if (vinfo_last) { + range_start = br_vlan_find(vg, vinfo_last->vid); + range_end = br_vlan_find(vg, vinfo->vid); + } else { + range_start = br_vlan_find(vg, vinfo->vid); + range_end = range_start; + } + + err = br_vlan_process_options(br, p, range_start, range_end, + tb, extack); + } return err; } diff --git a/net/bridge/br_vlan_options.c b/net/bridge/br_vlan_options.c index 55fcdc9c380c..27275ac3e42e 100644 --- a/net/bridge/br_vlan_options.c +++ b/net/bridge/br_vlan_options.c @@ -23,3 +23,90 @@ size_t br_vlan_opts_nl_size(void) { return 0; } + +static int br_vlan_process_one_opts(const struct net_bridge *br, + const struct net_bridge_port *p, + struct net_bridge_vlan_group *vg, + struct net_bridge_vlan *v, + struct nlattr **tb, + bool *changed, + struct netlink_ext_ack *extack) +{ + *changed = false; + return 0; +} + +int br_vlan_process_options(const struct net_bridge *br, + const struct net_bridge_port *p, + struct net_bridge_vlan *range_start, + struct net_bridge_vlan *range_end, + struct nlattr **tb, + struct netlink_ext_ack *extack) +{ + struct net_bridge_vlan *v, *curr_start = NULL, *curr_end = NULL; + struct net_bridge_vlan_group *vg; + int vid, err = 0; + u16 pvid; + + if (p) + vg = nbp_vlan_group(p); + else + vg = br_vlan_group(br); + + if (!range_start || !br_vlan_should_use(range_start)) { + NL_SET_ERR_MSG_MOD(extack, "Vlan range start doesn't exist, can't process options"); + return -ENOENT; + } + if (!range_end || !br_vlan_should_use(range_end)) { + NL_SET_ERR_MSG_MOD(extack, "Vlan range end doesn't exist, can't process options"); + return -ENOENT; + } + + pvid = br_get_pvid(vg); + for (vid = range_start->vid; vid <= range_end->vid; vid++) { + bool changed = false; + + v = br_vlan_find(vg, vid); + if (!v || !br_vlan_should_use(v)) { + NL_SET_ERR_MSG_MOD(extack, "Vlan in range doesn't exist, can't process options"); + err = -ENOENT; + break; + } + + err = br_vlan_process_one_opts(br, p, vg, v, tb, &changed, + extack); + if (err) + break; + + if (changed) { + /* vlan options changed, check for range */ + if (!curr_start) { + curr_start = v; + curr_end = v; + continue; + } + + if (v->vid == pvid || + !br_vlan_can_enter_range(v, curr_end)) { + br_vlan_notify(br, p, curr_start->vid, + curr_end->vid, RTM_NEWVLAN); + curr_start = v; + } + curr_end = v; + } else { + /* nothing changed and nothing to notify yet */ + if (!curr_start) + continue; + + br_vlan_notify(br, p, curr_start->vid, curr_end->vid, + RTM_NEWVLAN); + curr_start = NULL; + curr_end = NULL; + } + } + if (curr_start) + br_vlan_notify(br, p, curr_start->vid, curr_end->vid, + RTM_NEWVLAN); + + return err; +} -- cgit v1.2.3-71-gd317 From a580c76d534c7360ba68042b19cb255e8420e987 Mon Sep 17 00:00:00 2001 From: Nikolay Aleksandrov Date: Fri, 24 Jan 2020 13:40:22 +0200 Subject: net: bridge: vlan: add per-vlan state The first per-vlan option added is state, it is needed for EVPN and for per-vlan STP. The state allows to control the forwarding on per-vlan basis. The vlan state is considered only if the port state is forwarding in order to avoid conflicts and be consistent. br_allowed_egress is called only when the state is forwarding, but the ingress case is a bit more complicated due to the fact that we may have the transition between port:BR_STATE_FORWARDING -> vlan:BR_STATE_LEARNING which should still allow the bridge to learn from the packet after vlan filtering and it will be dropped after that. 
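As a compact restatement of that decision, under the patch's rule that the vlan state is consulted only when the port itself is forwarding (the helper name below, toy_ingress_verdict, is invented for illustration; __allowed_ingress() and br_handle_frame_finish() in the diff implement the real thing):

/* sketch: ingress fate of a packet for a given port/vlan state pair */
static bool toy_ingress_verdict(u8 port_state, u8 vlan_state, bool *learn)
{
	u8 state = port_state;

	if (state == BR_STATE_FORWARDING)	/* only then does vlan state matter */
		state = vlan_state;

	/* LEARNING still feeds the fdb; FORWARDING learns and forwards */
	*learn = state == BR_STATE_LEARNING || state == BR_STATE_FORWARDING;

	return state == BR_STATE_FORWARDING;
}
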
Also to optimize the pvid state check we keep a copy in the vlan group to avoid one lookup. The state members are modified with *_ONCE() to annotate the lockless access. Signed-off-by: Nikolay Aleksandrov Signed-off-by: David S. Miller --- include/uapi/linux/if_bridge.h | 1 + net/bridge/br_device.c | 3 ++- net/bridge/br_input.c | 7 ++++-- net/bridge/br_private.h | 43 ++++++++++++++++++++++++++++++++-- net/bridge/br_vlan.c | 47 ++++++++++++++++++++++++++++---------- net/bridge/br_vlan_options.c | 52 ++++++++++++++++++++++++++++++++++++++++-- 6 files changed, 134 insertions(+), 19 deletions(-) (limited to 'include') diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index c8b87ad52c5b..42f7ca38ad80 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -191,6 +191,7 @@ enum { BRIDGE_VLANDB_ENTRY_UNSPEC, BRIDGE_VLANDB_ENTRY_INFO, BRIDGE_VLANDB_ENTRY_RANGE, + BRIDGE_VLANDB_ENTRY_STATE, __BRIDGE_VLANDB_ENTRY_MAX, }; #define BRIDGE_VLANDB_ENTRY_MAX (__BRIDGE_VLANDB_ENTRY_MAX - 1) diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c index fb38add21b37..dc3d2c1dd9d5 100644 --- a/net/bridge/br_device.c +++ b/net/bridge/br_device.c @@ -32,6 +32,7 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev) struct net_bridge_mdb_entry *mdst; struct pcpu_sw_netstats *brstats = this_cpu_ptr(br->stats); const struct nf_br_ops *nf_ops; + u8 state = BR_STATE_FORWARDING; const unsigned char *dest; struct ethhdr *eth; u16 vid = 0; @@ -56,7 +57,7 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev) eth = eth_hdr(skb); skb_pull(skb, ETH_HLEN); - if (!br_allowed_ingress(br, br_vlan_group_rcu(br), skb, &vid)) + if (!br_allowed_ingress(br, br_vlan_group_rcu(br), skb, &vid, &state)) goto out; if (IS_ENABLED(CONFIG_INET) && diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c index 8944ceb47fe9..fcc260840028 100644 --- a/net/bridge/br_input.c +++ b/net/bridge/br_input.c @@ -76,11 +76,14 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb bool local_rcv, mcast_hit = false; struct net_bridge *br; u16 vid = 0; + u8 state; if (!p || p->state == BR_STATE_DISABLED) goto drop; - if (!br_allowed_ingress(p->br, nbp_vlan_group_rcu(p), skb, &vid)) + state = p->state; + if (!br_allowed_ingress(p->br, nbp_vlan_group_rcu(p), skb, &vid, + &state)) goto out; nbp_switchdev_frame_mark(p, skb); @@ -103,7 +106,7 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb } } - if (p->state == BR_STATE_LEARNING) + if (state == BR_STATE_LEARNING) goto drop; BR_INPUT_SKB_CB(skb)->brdev = br->dev; diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 084904ee22a8..5153ffe79a01 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -113,6 +113,7 @@ enum { * @vid: VLAN id * @flags: bridge vlan flags * @priv_flags: private (in-kernel) bridge vlan flags + * @state: STP state (e.g. blocking, learning, forwarding) * @stats: per-cpu VLAN statistics * @br: if MASTER flag set, this points to a bridge struct * @port: if MASTER flag unset, this points to a port struct @@ -133,6 +134,7 @@ struct net_bridge_vlan { u16 vid; u16 flags; u16 priv_flags; + u8 state; struct br_vlan_stats __percpu *stats; union { struct net_bridge *br; @@ -157,6 +159,7 @@ struct net_bridge_vlan { * @vlan_list: sorted VLAN entry list * @num_vlans: number of total VLAN entries * @pvid: PVID VLAN id + * @pvid_state: PVID's STP state (e.g. 
forwarding, learning, blocking) * * IMPORTANT: Be careful when checking if there're VLAN entries using list * primitives because the bridge can have entries in its list which @@ -170,6 +173,7 @@ struct net_bridge_vlan_group { struct list_head vlan_list; u16 num_vlans; u16 pvid; + u8 pvid_state; }; /* bridge fdb flags */ @@ -935,7 +939,7 @@ static inline int br_multicast_igmp_type(const struct sk_buff *skb) #ifdef CONFIG_BRIDGE_VLAN_FILTERING bool br_allowed_ingress(const struct net_bridge *br, struct net_bridge_vlan_group *vg, struct sk_buff *skb, - u16 *vid); + u16 *vid, u8 *state); bool br_allowed_egress(struct net_bridge_vlan_group *vg, const struct sk_buff *skb); bool br_should_learn(struct net_bridge_port *p, struct sk_buff *skb, u16 *vid); @@ -1037,7 +1041,7 @@ static inline u16 br_vlan_flags(const struct net_bridge_vlan *v, u16 pvid) static inline bool br_allowed_ingress(const struct net_bridge *br, struct net_bridge_vlan_group *vg, struct sk_buff *skb, - u16 *vid) + u16 *vid, u8 *state) { return true; } @@ -1205,6 +1209,41 @@ int br_vlan_process_options(const struct net_bridge *br, struct net_bridge_vlan *range_end, struct nlattr **tb, struct netlink_ext_ack *extack); + +/* vlan state manipulation helpers using *_ONCE to annotate lock-free access */ +static inline u8 br_vlan_get_state(const struct net_bridge_vlan *v) +{ + return READ_ONCE(v->state); +} + +static inline void br_vlan_set_state(struct net_bridge_vlan *v, u8 state) +{ + WRITE_ONCE(v->state, state); +} + +static inline u8 br_vlan_get_pvid_state(const struct net_bridge_vlan_group *vg) +{ + return READ_ONCE(vg->pvid_state); +} + +static inline void br_vlan_set_pvid_state(struct net_bridge_vlan_group *vg, + u8 state) +{ + WRITE_ONCE(vg->pvid_state, state); +} + +/* learn_allow is true at ingress and false at egress */ +static inline bool br_vlan_state_allowed(u8 state, bool learn_allow) +{ + switch (state) { + case BR_STATE_LEARNING: + return learn_allow; + case BR_STATE_FORWARDING: + return true; + default: + return false; + } +} #endif struct nf_br_ops { diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c index d747756eac63..6b5deca08b89 100644 --- a/net/bridge/br_vlan.c +++ b/net/bridge/br_vlan.c @@ -34,13 +34,15 @@ static struct net_bridge_vlan *br_vlan_lookup(struct rhashtable *tbl, u16 vid) return rhashtable_lookup_fast(tbl, &vid, br_vlan_rht_params); } -static bool __vlan_add_pvid(struct net_bridge_vlan_group *vg, u16 vid) +static bool __vlan_add_pvid(struct net_bridge_vlan_group *vg, + const struct net_bridge_vlan *v) { - if (vg->pvid == vid) + if (vg->pvid == v->vid) return false; smp_wmb(); - vg->pvid = vid; + br_vlan_set_pvid_state(vg, v->state); + vg->pvid = v->vid; return true; } @@ -69,7 +71,7 @@ static bool __vlan_add_flags(struct net_bridge_vlan *v, u16 flags) vg = nbp_vlan_group(v->port); if (flags & BRIDGE_VLAN_INFO_PVID) - ret = __vlan_add_pvid(vg, v->vid); + ret = __vlan_add_pvid(vg, v); else ret = __vlan_delete_pvid(vg, v->vid); @@ -293,6 +295,9 @@ static int __vlan_add(struct net_bridge_vlan *v, u16 flags, vg->num_vlans++; } + /* set the state before publishing */ + v->state = BR_STATE_FORWARDING; + err = rhashtable_lookup_insert_fast(&vg->vlan_hash, &v->vnode, br_vlan_rht_params); if (err) @@ -466,7 +471,8 @@ out: /* Called under RCU */ static bool __allowed_ingress(const struct net_bridge *br, struct net_bridge_vlan_group *vg, - struct sk_buff *skb, u16 *vid) + struct sk_buff *skb, u16 *vid, + u8 *state) { struct br_vlan_stats *stats; struct net_bridge_vlan *v; @@ -532,13 +538,25 @@ static bool 
__allowed_ingress(const struct net_bridge *br, skb->vlan_tci |= pvid; /* if stats are disabled we can avoid the lookup */ - if (!br_opt_get(br, BROPT_VLAN_STATS_ENABLED)) - return true; + if (!br_opt_get(br, BROPT_VLAN_STATS_ENABLED)) { + if (*state == BR_STATE_FORWARDING) { + *state = br_vlan_get_pvid_state(vg); + return br_vlan_state_allowed(*state, true); + } else { + return true; + } + } } v = br_vlan_find(vg, *vid); if (!v || !br_vlan_should_use(v)) goto drop; + if (*state == BR_STATE_FORWARDING) { + *state = br_vlan_get_state(v); + if (!br_vlan_state_allowed(*state, true)) + goto drop; + } + if (br_opt_get(br, BROPT_VLAN_STATS_ENABLED)) { stats = this_cpu_ptr(v->stats); u64_stats_update_begin(&stats->syncp); @@ -556,7 +574,7 @@ drop: bool br_allowed_ingress(const struct net_bridge *br, struct net_bridge_vlan_group *vg, struct sk_buff *skb, - u16 *vid) + u16 *vid, u8 *state) { /* If VLAN filtering is disabled on the bridge, all packets are * permitted. @@ -566,7 +584,7 @@ bool br_allowed_ingress(const struct net_bridge *br, return true; } - return __allowed_ingress(br, vg, skb, vid); + return __allowed_ingress(br, vg, skb, vid, state); } /* Called under RCU. */ @@ -582,7 +600,8 @@ bool br_allowed_egress(struct net_bridge_vlan_group *vg, br_vlan_get_tag(skb, &vid); v = br_vlan_find(vg, vid); - if (v && br_vlan_should_use(v)) + if (v && br_vlan_should_use(v) && + br_vlan_state_allowed(br_vlan_get_state(v), false)) return true; return false; @@ -593,6 +612,7 @@ bool br_should_learn(struct net_bridge_port *p, struct sk_buff *skb, u16 *vid) { struct net_bridge_vlan_group *vg; struct net_bridge *br = p->br; + struct net_bridge_vlan *v; /* If filtering was disabled at input, let it pass. */ if (!br_opt_get(br, BROPT_VLAN_ENABLED)) @@ -607,13 +627,15 @@ bool br_should_learn(struct net_bridge_port *p, struct sk_buff *skb, u16 *vid) if (!*vid) { *vid = br_get_pvid(vg); - if (!*vid) + if (!*vid || + !br_vlan_state_allowed(br_vlan_get_pvid_state(vg), true)) return false; return true; } - if (br_vlan_find(vg, *vid)) + v = br_vlan_find(vg, *vid); + if (v && br_vlan_state_allowed(br_vlan_get_state(v), true)) return true; return false; @@ -1816,6 +1838,7 @@ static const struct nla_policy br_vlan_db_policy[BRIDGE_VLANDB_ENTRY_MAX + 1] = [BRIDGE_VLANDB_ENTRY_INFO] = { .type = NLA_EXACT_LEN, .len = sizeof(struct bridge_vlan_info) }, [BRIDGE_VLANDB_ENTRY_RANGE] = { .type = NLA_U16 }, + [BRIDGE_VLANDB_ENTRY_STATE] = { .type = NLA_U8 }, }; static int br_vlan_rtm_process_one(struct net_device *dev, diff --git a/net/bridge/br_vlan_options.c b/net/bridge/br_vlan_options.c index 27275ac3e42e..cd2eb194eb98 100644 --- a/net/bridge/br_vlan_options.c +++ b/net/bridge/br_vlan_options.c @@ -11,16 +11,54 @@ bool br_vlan_opts_eq(const struct net_bridge_vlan *v1, const struct net_bridge_vlan *v2) { - return true; + return v1->state == v2->state; } bool br_vlan_opts_fill(struct sk_buff *skb, const struct net_bridge_vlan *v) { - return true; + return !nla_put_u8(skb, BRIDGE_VLANDB_ENTRY_STATE, + br_vlan_get_state(v)); } size_t br_vlan_opts_nl_size(void) { + return nla_total_size(sizeof(u8)); /* BRIDGE_VLANDB_ENTRY_STATE */ +} + +static int br_vlan_modify_state(struct net_bridge_vlan_group *vg, + struct net_bridge_vlan *v, + u8 state, + bool *changed, + struct netlink_ext_ack *extack) +{ + struct net_bridge *br; + + ASSERT_RTNL(); + + if (state > BR_STATE_BLOCKING) { + NL_SET_ERR_MSG_MOD(extack, "Invalid vlan state"); + return -EINVAL; + } + + if (br_vlan_is_brentry(v)) + br = v->br; + else + br = v->port->br; + + if 
(br->stp_enabled == BR_KERNEL_STP) { + NL_SET_ERR_MSG_MOD(extack, "Can't modify vlan state when using kernel STP"); + return -EBUSY; + } + + if (v->state == state) + return 0; + + if (v->vid == br_get_pvid(vg)) + br_vlan_set_pvid_state(vg, state); + + br_vlan_set_state(v, state); + *changed = true; + return 0; } @@ -32,7 +70,17 @@ static int br_vlan_process_one_opts(const struct net_bridge *br, bool *changed, struct netlink_ext_ack *extack) { + int err; + *changed = false; + if (tb[BRIDGE_VLANDB_ENTRY_STATE]) { + u8 state = nla_get_u8(tb[BRIDGE_VLANDB_ENTRY_STATE]); + + err = br_vlan_modify_state(vg, v, state, changed, extack); + if (err) + return err; + } + return 0; } -- cgit v1.2.3-71-gd317 From f870fa0b5768842cb4690c1c11f19f28b731ae6d Mon Sep 17 00:00:00 2001 From: Mat Martineau Date: Tue, 21 Jan 2020 16:56:15 -0800 Subject: mptcp: Add MPTCP socket stubs Implements the infrastructure for MPTCP sockets. MPTCP sockets open one in-kernel TCP socket per subflow. These subflow sockets are only managed by the MPTCP socket that owns them and are not visible from userspace. This commit allows a userspace program to open an MPTCP socket with: sock = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP); The resulting socket is simply a wrapper around a single regular TCP socket, without any of the MPTCP protocol implemented over the wire. Co-developed-by: Florian Westphal Signed-off-by: Florian Westphal Co-developed-by: Peter Krystad Signed-off-by: Peter Krystad Co-developed-by: Matthieu Baerts Signed-off-by: Matthieu Baerts Co-developed-by: Paolo Abeni Signed-off-by: Paolo Abeni Signed-off-by: Mat Martineau Signed-off-by: Christoph Paasch Signed-off-by: David S. Miller --- MAINTAINERS | 1 + include/net/mptcp.h | 16 ++++++ net/Kconfig | 1 + net/Makefile | 1 + net/ipv4/tcp.c | 2 + net/ipv6/tcp_ipv6.c | 7 +++ net/mptcp/Kconfig | 16 ++++++ net/mptcp/Makefile | 4 ++ net/mptcp/protocol.c | 142 +++++++++++++++++++++++++++++++++++++++++++++++++++ net/mptcp/protocol.h | 22 ++++++++ 10 files changed, 212 insertions(+) create mode 100644 net/mptcp/Kconfig create mode 100644 net/mptcp/Makefile create mode 100644 net/mptcp/protocol.c create mode 100644 net/mptcp/protocol.h (limited to 'include') diff --git a/MAINTAINERS b/MAINTAINERS index 702382b89c37..4be9c1b50183 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -11583,6 +11583,7 @@ W: https://github.com/multipath-tcp/mptcp_net-next/wiki B: https://github.com/multipath-tcp/mptcp_net-next/issues S: Maintained F: include/net/mptcp.h +F: net/mptcp/ NETWORKING [TCP] M: Eric Dumazet diff --git a/include/net/mptcp.h b/include/net/mptcp.h index 0573ae75c3db..98ba22379117 100644 --- a/include/net/mptcp.h +++ b/include/net/mptcp.h @@ -28,6 +28,8 @@ struct mptcp_ext { #ifdef CONFIG_MPTCP +void mptcp_init(void); + /* move the skb extension owership, with the assumption that 'to' is * newly allocated */ @@ -70,6 +72,10 @@ static inline bool mptcp_skb_can_collapse(const struct sk_buff *to, #else +static inline void mptcp_init(void) +{ +} + static inline void mptcp_skb_ext_move(struct sk_buff *to, const struct sk_buff *from) { @@ -82,4 +88,14 @@ static inline bool mptcp_skb_can_collapse(const struct sk_buff *to, } #endif /* CONFIG_MPTCP */ + +#if IS_ENABLED(CONFIG_MPTCP_IPV6) +int mptcpv6_init(void); +#elif IS_ENABLED(CONFIG_IPV6) +static inline int mptcpv6_init(void) +{ + return 0; +} +#endif + #endif /* __NET_MPTCP_H */ diff --git a/net/Kconfig b/net/Kconfig index 54916b7adb9b..b0937a700f01 100644 --- a/net/Kconfig +++ b/net/Kconfig @@ -91,6 +91,7 @@ if INET source 
"net/ipv4/Kconfig" source "net/ipv6/Kconfig" source "net/netlabel/Kconfig" +source "net/mptcp/Kconfig" endif # if INET diff --git a/net/Makefile b/net/Makefile index 848303d98d3d..07ea48160874 100644 --- a/net/Makefile +++ b/net/Makefile @@ -87,3 +87,4 @@ endif obj-$(CONFIG_QRTR) += qrtr/ obj-$(CONFIG_NET_NCSI) += ncsi/ obj-$(CONFIG_XDP_SOCKETS) += xdp/ +obj-$(CONFIG_MPTCP) += mptcp/ diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 6711a97de3ce..7dfb78caedaf 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -271,6 +271,7 @@ #include #include #include +#include #include #include #include @@ -4021,4 +4022,5 @@ void __init tcp_init(void) tcp_metrics_init(); BUG_ON(tcp_register_congestion_control(&tcp_reno) != 0); tcp_tasklet_init(); + mptcp_init(); } diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index 5b5260103b65..60068ffde1d9 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -2163,9 +2163,16 @@ int __init tcpv6_init(void) ret = register_pernet_subsys(&tcpv6_net_ops); if (ret) goto out_tcpv6_protosw; + + ret = mptcpv6_init(); + if (ret) + goto out_tcpv6_pernet_subsys; + out: return ret; +out_tcpv6_pernet_subsys: + unregister_pernet_subsys(&tcpv6_net_ops); out_tcpv6_protosw: inet6_unregister_protosw(&tcpv6_protosw); out_tcpv6_protocol: diff --git a/net/mptcp/Kconfig b/net/mptcp/Kconfig new file mode 100644 index 000000000000..c1a99f07a4cd --- /dev/null +++ b/net/mptcp/Kconfig @@ -0,0 +1,16 @@ + +config MPTCP + bool "MPTCP: Multipath TCP" + depends on INET + select SKB_EXTENSIONS + help + Multipath TCP (MPTCP) connections send and receive data over multiple + subflows in order to utilize multiple network paths. Each subflow + uses the TCP protocol, and TCP options carry header information for + MPTCP. + +config MPTCP_IPV6 + bool "MPTCP: IPv6 support for Multipath TCP" + depends on MPTCP + select IPV6 + default y diff --git a/net/mptcp/Makefile b/net/mptcp/Makefile new file mode 100644 index 000000000000..659129d1fcbf --- /dev/null +++ b/net/mptcp/Makefile @@ -0,0 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 +obj-$(CONFIG_MPTCP) += mptcp.o + +mptcp-y := protocol.o diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c new file mode 100644 index 000000000000..5e24e7cf7d70 --- /dev/null +++ b/net/mptcp/protocol.c @@ -0,0 +1,142 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Multipath TCP + * + * Copyright (c) 2017 - 2019, Intel Corporation. 
+ */ + +#define pr_fmt(fmt) "MPTCP: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "protocol.h" + +static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) +{ + struct mptcp_sock *msk = mptcp_sk(sk); + struct socket *subflow = msk->subflow; + + if (msg->msg_flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL)) + return -EOPNOTSUPP; + + return sock_sendmsg(subflow, msg); +} + +static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, + int nonblock, int flags, int *addr_len) +{ + struct mptcp_sock *msk = mptcp_sk(sk); + struct socket *subflow = msk->subflow; + + if (msg->msg_flags & ~(MSG_WAITALL | MSG_DONTWAIT)) + return -EOPNOTSUPP; + + return sock_recvmsg(subflow, msg, flags); +} + +static int mptcp_init_sock(struct sock *sk) +{ + return 0; +} + +static void mptcp_close(struct sock *sk, long timeout) +{ + struct mptcp_sock *msk = mptcp_sk(sk); + + inet_sk_state_store(sk, TCP_CLOSE); + + if (msk->subflow) { + pr_debug("subflow=%p", msk->subflow->sk); + sock_release(msk->subflow); + } + + sock_orphan(sk); + sock_put(sk); +} + +static int mptcp_connect(struct sock *sk, struct sockaddr *saddr, int len) +{ + struct mptcp_sock *msk = mptcp_sk(sk); + int err; + + saddr->sa_family = AF_INET; + + pr_debug("msk=%p, subflow=%p", msk, msk->subflow->sk); + + err = kernel_connect(msk->subflow, saddr, len, 0); + + sk->sk_state = TCP_ESTABLISHED; + + return err; +} + +static struct proto mptcp_prot = { + .name = "MPTCP", + .owner = THIS_MODULE, + .init = mptcp_init_sock, + .close = mptcp_close, + .accept = inet_csk_accept, + .connect = mptcp_connect, + .shutdown = tcp_shutdown, + .sendmsg = mptcp_sendmsg, + .recvmsg = mptcp_recvmsg, + .hash = inet_hash, + .unhash = inet_unhash, + .get_port = inet_csk_get_port, + .obj_size = sizeof(struct mptcp_sock), + .no_autobind = true, +}; + +static struct inet_protosw mptcp_protosw = { + .type = SOCK_STREAM, + .protocol = IPPROTO_MPTCP, + .prot = &mptcp_prot, + .ops = &inet_stream_ops, +}; + +void __init mptcp_init(void) +{ + if (proto_register(&mptcp_prot, 1) != 0) + panic("Failed to register MPTCP proto.\n"); + + inet_register_protosw(&mptcp_protosw); +} + +#if IS_ENABLED(CONFIG_MPTCP_IPV6) +static struct proto mptcp_v6_prot; + +static struct inet_protosw mptcp_v6_protosw = { + .type = SOCK_STREAM, + .protocol = IPPROTO_MPTCP, + .prot = &mptcp_v6_prot, + .ops = &inet6_stream_ops, + .flags = INET_PROTOSW_ICSK, +}; + +int mptcpv6_init(void) +{ + int err; + + mptcp_v6_prot = mptcp_prot; + strcpy(mptcp_v6_prot.name, "MPTCPv6"); + mptcp_v6_prot.slab = NULL; + mptcp_v6_prot.obj_size = sizeof(struct mptcp_sock) + + sizeof(struct ipv6_pinfo); + + err = proto_register(&mptcp_v6_prot, 1); + if (err) + return err; + + err = inet6_register_protosw(&mptcp_v6_protosw); + if (err) + proto_unregister(&mptcp_v6_prot); + + return err; +} +#endif diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h new file mode 100644 index 000000000000..ee04a01bffd3 --- /dev/null +++ b/net/mptcp/protocol.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Multipath TCP + * + * Copyright (c) 2017 - 2019, Intel Corporation. 
+ */ + +#ifndef __MPTCP_PROTOCOL_H +#define __MPTCP_PROTOCOL_H + +/* MPTCP connection sock */ +struct mptcp_sock { + /* inet_connection_sock must be the first member */ + struct inet_connection_sock sk; + struct socket *subflow; /* outgoing connect/listener/!mp_capable */ +}; + +static inline struct mptcp_sock *mptcp_sk(const struct sock *sk) +{ + return (struct mptcp_sock *)sk; +} + +#endif /* __MPTCP_PROTOCOL_H */ -- cgit v1.2.3-71-gd317 From eda7acddf8080bb2d022a8d4b8b2345eb80c63ec Mon Sep 17 00:00:00 2001 From: Peter Krystad Date: Tue, 21 Jan 2020 16:56:16 -0800 Subject: mptcp: Handle MPTCP TCP options Add hooks to parse and format the MP_CAPABLE option. This option is handled according to MPTCP version 0 (RFC6824). MPTCP version 1 MP_CAPABLE (RFC6824bis/RFC8684) will be added later in coordination with related code changes. Co-developed-by: Matthieu Baerts Signed-off-by: Matthieu Baerts Co-developed-by: Florian Westphal Signed-off-by: Florian Westphal Co-developed-by: Davide Caratti Signed-off-by: Davide Caratti Signed-off-by: Peter Krystad Signed-off-by: Christoph Paasch Signed-off-by: David S. Miller --- include/linux/tcp.h | 18 ++++++++++ include/net/mptcp.h | 18 ++++++++++ net/ipv4/tcp_input.c | 5 +++ net/ipv4/tcp_output.c | 13 +++++++ net/mptcp/Makefile | 2 +- net/mptcp/options.c | 97 +++++++++++++++++++++++++++++++++++++++++++++++++++ net/mptcp/protocol.h | 29 +++++++++++++++ 7 files changed, 181 insertions(+), 1 deletion(-) create mode 100644 net/mptcp/options.c (limited to 'include') diff --git a/include/linux/tcp.h b/include/linux/tcp.h index ca6f01531e64..52798ab00394 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -78,6 +78,16 @@ struct tcp_sack_block { #define TCP_SACK_SEEN (1 << 0) /*1 = peer is SACK capable, */ #define TCP_DSACK_SEEN (1 << 2) /*1 = DSACK was received from peer*/ +#if IS_ENABLED(CONFIG_MPTCP) +struct mptcp_options_received { + u64 sndr_key; + u64 rcvr_key; + u8 mp_capable : 1, + mp_join : 1, + dss : 1; +}; +#endif + struct tcp_options_received { /* PAWS/RTTM data */ int ts_recent_stamp;/* Time we stored ts_recent (for aging) */ @@ -95,6 +105,9 @@ struct tcp_options_received { u8 num_sacks; /* Number of SACK blocks */ u16 user_mss; /* mss requested by user in ioctl */ u16 mss_clamp; /* Maximal mss, negotiated at connection setup */ +#if IS_ENABLED(CONFIG_MPTCP) + struct mptcp_options_received mptcp; +#endif }; static inline void tcp_clear_options(struct tcp_options_received *rx_opt) @@ -104,6 +117,11 @@ static inline void tcp_clear_options(struct tcp_options_received *rx_opt) #if IS_ENABLED(CONFIG_SMC) rx_opt->smc_ok = 0; #endif +#if IS_ENABLED(CONFIG_MPTCP) + rx_opt->mptcp.mp_capable = 0; + rx_opt->mptcp.mp_join = 0; + rx_opt->mptcp.dss = 0; +#endif } /* This is the max number of SACKS that we'll generate and process. 
It's safe diff --git a/include/net/mptcp.h b/include/net/mptcp.h index 98ba22379117..3daec2ceb3ff 100644 --- a/include/net/mptcp.h +++ b/include/net/mptcp.h @@ -9,6 +9,7 @@ #define __NET_MPTCP_H #include +#include #include /* MPTCP sk_buff extension data */ @@ -26,10 +27,22 @@ struct mptcp_ext { /* one byte hole */ }; +struct mptcp_out_options { +#if IS_ENABLED(CONFIG_MPTCP) + u16 suboptions; + u64 sndr_key; + u64 rcvr_key; +#endif +}; + #ifdef CONFIG_MPTCP void mptcp_init(void); +void mptcp_parse_option(const unsigned char *ptr, int opsize, + struct tcp_options_received *opt_rx); +void mptcp_write_options(__be32 *ptr, struct mptcp_out_options *opts); + /* move the skb extension owership, with the assumption that 'to' is * newly allocated */ @@ -76,6 +89,11 @@ static inline void mptcp_init(void) { } +static inline void mptcp_parse_option(const unsigned char *ptr, int opsize, + struct tcp_options_received *opt_rx) +{ +} + static inline void mptcp_skb_ext_move(struct sk_buff *to, const struct sk_buff *from) { diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 358365598216..3458ee13e6f0 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -79,6 +79,7 @@ #include #include #include +#include int sysctl_tcp_max_orphans __read_mostly = NR_FILE; @@ -3924,6 +3925,10 @@ void tcp_parse_options(const struct net *net, */ break; #endif + case TCPOPT_MPTCP: + mptcp_parse_option(ptr, opsize, opt_rx); + break; + case TCPOPT_FASTOPEN: tcp_parse_fastopen_option( opsize - TCPOLEN_FASTOPEN_BASE, diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 786978cb2db7..0f0984f39f67 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -38,6 +38,7 @@ #define pr_fmt(fmt) "TCP: " fmt #include +#include #include #include @@ -414,6 +415,7 @@ static inline bool tcp_urg_mode(const struct tcp_sock *tp) #define OPTION_WSCALE (1 << 3) #define OPTION_FAST_OPEN_COOKIE (1 << 8) #define OPTION_SMC (1 << 9) +#define OPTION_MPTCP (1 << 10) static void smc_options_write(__be32 *ptr, u16 *options) { @@ -439,8 +441,17 @@ struct tcp_out_options { __u8 *hash_location; /* temporary pointer, overloaded */ __u32 tsval, tsecr; /* need to include OPTION_TS */ struct tcp_fastopen_cookie *fastopen_cookie; /* Fast open cookie */ + struct mptcp_out_options mptcp; }; +static void mptcp_options_write(__be32 *ptr, struct tcp_out_options *opts) +{ +#if IS_ENABLED(CONFIG_MPTCP) + if (unlikely(OPTION_MPTCP & opts->options)) + mptcp_write_options(ptr, &opts->mptcp); +#endif +} + /* Write previously computed TCP options to the packet. * * Beware: Something in the Internet is very sensitive to the ordering of @@ -549,6 +560,8 @@ static void tcp_options_write(__be32 *ptr, struct tcp_sock *tp, } smc_options_write(ptr, &options); + + mptcp_options_write(ptr, opts); } static void smc_set_option(const struct tcp_sock *tp, diff --git a/net/mptcp/Makefile b/net/mptcp/Makefile index 659129d1fcbf..27a846263f08 100644 --- a/net/mptcp/Makefile +++ b/net/mptcp/Makefile @@ -1,4 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_MPTCP) += mptcp.o -mptcp-y := protocol.o +mptcp-y := protocol.o options.o diff --git a/net/mptcp/options.c b/net/mptcp/options.c new file mode 100644 index 000000000000..b7a31c0e5283 --- /dev/null +++ b/net/mptcp/options.c @@ -0,0 +1,97 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Multipath TCP + * + * Copyright (c) 2017 - 2019, Intel Corporation. 
+ */ + +#include +#include +#include +#include "protocol.h" + +void mptcp_parse_option(const unsigned char *ptr, int opsize, + struct tcp_options_received *opt_rx) +{ + struct mptcp_options_received *mp_opt = &opt_rx->mptcp; + u8 subtype = *ptr >> 4; + u8 version; + u8 flags; + + switch (subtype) { + case MPTCPOPT_MP_CAPABLE: + if (opsize != TCPOLEN_MPTCP_MPC_SYN && + opsize != TCPOLEN_MPTCP_MPC_ACK) + break; + + version = *ptr++ & MPTCP_VERSION_MASK; + if (version != MPTCP_SUPPORTED_VERSION) + break; + + flags = *ptr++; + if (!((flags & MPTCP_CAP_FLAG_MASK) == MPTCP_CAP_HMAC_SHA1) || + (flags & MPTCP_CAP_EXTENSIBILITY)) + break; + + /* RFC 6824, Section 3.1: + * "For the Checksum Required bit (labeled "A"), if either + * host requires the use of checksums, checksums MUST be used. + * In other words, the only way for checksums not to be used + * is if both hosts in their SYNs set A=0." + * + * Section 3.3.0: + * "If a checksum is not present when its use has been + * negotiated, the receiver MUST close the subflow with a RST as + * it is considered broken." + * + * We don't implement DSS checksum - fall back to TCP. + */ + if (flags & MPTCP_CAP_CHECKSUM_REQD) + break; + + mp_opt->mp_capable = 1; + mp_opt->sndr_key = get_unaligned_be64(ptr); + ptr += 8; + + if (opsize == TCPOLEN_MPTCP_MPC_ACK) { + mp_opt->rcvr_key = get_unaligned_be64(ptr); + ptr += 8; + pr_debug("MP_CAPABLE sndr=%llu, rcvr=%llu", + mp_opt->sndr_key, mp_opt->rcvr_key); + } else { + pr_debug("MP_CAPABLE sndr=%llu", mp_opt->sndr_key); + } + break; + + case MPTCPOPT_DSS: + pr_debug("DSS"); + mp_opt->dss = 1; + break; + + default: + break; + } +} + +void mptcp_write_options(__be32 *ptr, struct mptcp_out_options *opts) +{ + if ((OPTION_MPTCP_MPC_SYN | + OPTION_MPTCP_MPC_ACK) & opts->suboptions) { + u8 len; + + if (OPTION_MPTCP_MPC_SYN & opts->suboptions) + len = TCPOLEN_MPTCP_MPC_SYN; + else + len = TCPOLEN_MPTCP_MPC_ACK; + + *ptr++ = htonl((TCPOPT_MPTCP << 24) | (len << 16) | + (MPTCPOPT_MP_CAPABLE << 12) | + (MPTCP_SUPPORTED_VERSION << 8) | + MPTCP_CAP_HMAC_SHA1); + put_unaligned_be64(opts->sndr_key, ptr); + ptr += 2; + if (OPTION_MPTCP_MPC_ACK & opts->suboptions) { + put_unaligned_be64(opts->rcvr_key, ptr); + ptr += 2; + } + } +} diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h index ee04a01bffd3..c59cf8b220b0 100644 --- a/net/mptcp/protocol.h +++ b/net/mptcp/protocol.h @@ -7,6 +7,35 @@ #ifndef __MPTCP_PROTOCOL_H #define __MPTCP_PROTOCOL_H +#define MPTCP_SUPPORTED_VERSION 0 + +/* MPTCP option bits */ +#define OPTION_MPTCP_MPC_SYN BIT(0) +#define OPTION_MPTCP_MPC_SYNACK BIT(1) +#define OPTION_MPTCP_MPC_ACK BIT(2) + +/* MPTCP option subtypes */ +#define MPTCPOPT_MP_CAPABLE 0 +#define MPTCPOPT_MP_JOIN 1 +#define MPTCPOPT_DSS 2 +#define MPTCPOPT_ADD_ADDR 3 +#define MPTCPOPT_RM_ADDR 4 +#define MPTCPOPT_MP_PRIO 5 +#define MPTCPOPT_MP_FAIL 6 +#define MPTCPOPT_MP_FASTCLOSE 7 + +/* MPTCP suboption lengths */ +#define TCPOLEN_MPTCP_MPC_SYN 12 +#define TCPOLEN_MPTCP_MPC_SYNACK 12 +#define TCPOLEN_MPTCP_MPC_ACK 20 + +/* MPTCP MP_CAPABLE flags */ +#define MPTCP_VERSION_MASK (0x0F) +#define MPTCP_CAP_CHECKSUM_REQD BIT(7) +#define MPTCP_CAP_EXTENSIBILITY BIT(6) +#define MPTCP_CAP_HMAC_SHA1 BIT(0) +#define MPTCP_CAP_FLAG_MASK (0x3F) + /* MPTCP connection sock */ struct mptcp_sock { /* inet_connection_sock must be the first member */ -- cgit v1.2.3-71-gd317 From 2303f994b3e187091fd08148066688b08f837efc Mon Sep 17 00:00:00 2001 From: Peter Krystad Date: Tue, 21 Jan 2020 16:56:17 -0800 Subject: mptcp: Associate MPTCP context with TCP 
socket Use ULP to associate a subflow_context structure with each TCP subflow socket. Creating these sockets requires new bind and connect functions to make sure ULP is set up immediately when the subflow sockets are created. Co-developed-by: Florian Westphal Signed-off-by: Florian Westphal Co-developed-by: Matthieu Baerts Signed-off-by: Matthieu Baerts Co-developed-by: Davide Caratti Signed-off-by: Davide Caratti Co-developed-by: Paolo Abeni Signed-off-by: Paolo Abeni Signed-off-by: Peter Krystad Signed-off-by: Christoph Paasch Signed-off-by: David S. Miller --- include/linux/tcp.h | 3 ++ net/mptcp/Makefile | 2 +- net/mptcp/protocol.c | 132 ++++++++++++++++++++++++++++++++++++++++++++++++--- net/mptcp/protocol.h | 26 ++++++++++ net/mptcp/subflow.c | 119 ++++++++++++++++++++++++++++++++++++++++++++++ 5 files changed, 275 insertions(+), 7 deletions(-) create mode 100644 net/mptcp/subflow.c (limited to 'include') diff --git a/include/linux/tcp.h b/include/linux/tcp.h index 52798ab00394..877947475814 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -397,6 +397,9 @@ struct tcp_sock { u32 mtu_info; /* We received an ICMP_FRAG_NEEDED / ICMPV6_PKT_TOOBIG * while socket was owned by user. */ +#if IS_ENABLED(CONFIG_MPTCP) + bool is_mptcp; +#endif #ifdef CONFIG_TCP_MD5SIG /* TCP AF-Specific parts; only used by MD5 Signature support so far */ diff --git a/net/mptcp/Makefile b/net/mptcp/Makefile index 27a846263f08..e1ee5aade8b0 100644 --- a/net/mptcp/Makefile +++ b/net/mptcp/Makefile @@ -1,4 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_MPTCP) += mptcp.o -mptcp-y := protocol.o options.o +mptcp-y := protocol.o subflow.o options.o diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index 5e24e7cf7d70..294b03a0393a 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -17,6 +17,53 @@ #include #include "protocol.h" +#define MPTCP_SAME_STATE TCP_MAX_STATES + +/* If msk has an initial subflow socket, and the MP_CAPABLE handshake has not + * completed yet or has failed, return the subflow socket. + * Otherwise return NULL. 
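+ * (msk->subflow is the outgoing connect/listener socket, also used for the !mp_capable fallback.)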
+ */ +static struct socket *__mptcp_nmpc_socket(const struct mptcp_sock *msk) +{ + if (!msk->subflow) + return NULL; + + return msk->subflow; +} + +static bool __mptcp_can_create_subflow(const struct mptcp_sock *msk) +{ + return ((struct sock *)msk)->sk_state == TCP_CLOSE; +} + +static struct socket *__mptcp_socket_create(struct mptcp_sock *msk, int state) +{ + struct mptcp_subflow_context *subflow; + struct sock *sk = (struct sock *)msk; + struct socket *ssock; + int err; + + ssock = __mptcp_nmpc_socket(msk); + if (ssock) + goto set_state; + + if (!__mptcp_can_create_subflow(msk)) + return ERR_PTR(-EINVAL); + + err = mptcp_subflow_create_socket(sk, &ssock); + if (err) + return ERR_PTR(err); + + msk->subflow = ssock; + subflow = mptcp_subflow_ctx(ssock->sk); + subflow->request_mptcp = 1; + +set_state: + if (state != MPTCP_SAME_STATE) + inet_sk_state_store(sk, state); + return ssock; +} + static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) { struct mptcp_sock *msk = mptcp_sk(sk); @@ -48,12 +95,14 @@ static int mptcp_init_sock(struct sock *sk) static void mptcp_close(struct sock *sk, long timeout) { struct mptcp_sock *msk = mptcp_sk(sk); + struct socket *ssock; inet_sk_state_store(sk, TCP_CLOSE); - if (msk->subflow) { - pr_debug("subflow=%p", msk->subflow->sk); - sock_release(msk->subflow); + ssock = __mptcp_nmpc_socket(msk); + if (ssock) { + pr_debug("subflow=%p", mptcp_subflow_ctx(ssock->sk)); + sock_release(ssock); } sock_orphan(sk); @@ -67,7 +116,8 @@ static int mptcp_connect(struct sock *sk, struct sockaddr *saddr, int len) saddr->sa_family = AF_INET; - pr_debug("msk=%p, subflow=%p", msk, msk->subflow->sk); + pr_debug("msk=%p, subflow=%p", msk, + mptcp_subflow_ctx(msk->subflow->sk)); err = kernel_connect(msk->subflow, saddr, len, 0); @@ -93,15 +143,79 @@ static struct proto mptcp_prot = { .no_autobind = true, }; +static int mptcp_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) +{ + struct mptcp_sock *msk = mptcp_sk(sock->sk); + struct socket *ssock; + int err = -ENOTSUPP; + + if (uaddr->sa_family != AF_INET) // @@ allow only IPv4 for now + return err; + + lock_sock(sock->sk); + ssock = __mptcp_socket_create(msk, MPTCP_SAME_STATE); + if (IS_ERR(ssock)) { + err = PTR_ERR(ssock); + goto unlock; + } + + err = ssock->ops->bind(ssock, uaddr, addr_len); + +unlock: + release_sock(sock->sk); + return err; +} + +static int mptcp_stream_connect(struct socket *sock, struct sockaddr *uaddr, + int addr_len, int flags) +{ + struct mptcp_sock *msk = mptcp_sk(sock->sk); + struct socket *ssock; + int err; + + lock_sock(sock->sk); + ssock = __mptcp_socket_create(msk, TCP_SYN_SENT); + if (IS_ERR(ssock)) { + err = PTR_ERR(ssock); + goto unlock; + } + + err = ssock->ops->connect(ssock, uaddr, addr_len, flags); + inet_sk_state_store(sock->sk, inet_sk_state_load(ssock->sk)); + +unlock: + release_sock(sock->sk); + return err; +} + +static __poll_t mptcp_poll(struct file *file, struct socket *sock, + struct poll_table_struct *wait) +{ + __poll_t mask = 0; + + return mask; +} + +static struct proto_ops mptcp_stream_ops; + static struct inet_protosw mptcp_protosw = { .type = SOCK_STREAM, .protocol = IPPROTO_MPTCP, .prot = &mptcp_prot, - .ops = &inet_stream_ops, + .ops = &mptcp_stream_ops, + .flags = INET_PROTOSW_ICSK, }; void __init mptcp_init(void) { + mptcp_prot.h.hashinfo = tcp_prot.h.hashinfo; + mptcp_stream_ops = inet_stream_ops; + mptcp_stream_ops.bind = mptcp_bind; + mptcp_stream_ops.connect = mptcp_stream_connect; + mptcp_stream_ops.poll = mptcp_poll; + + 
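+ /* register the "mptcp" ULP that attaches a subflow context to each TCP subflow socket */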
mptcp_subflow_init(); + if (proto_register(&mptcp_prot, 1) != 0) panic("Failed to register MPTCP proto.\n"); @@ -109,13 +223,14 @@ void __init mptcp_init(void) } #if IS_ENABLED(CONFIG_MPTCP_IPV6) +static struct proto_ops mptcp_v6_stream_ops; static struct proto mptcp_v6_prot; static struct inet_protosw mptcp_v6_protosw = { .type = SOCK_STREAM, .protocol = IPPROTO_MPTCP, .prot = &mptcp_v6_prot, - .ops = &inet6_stream_ops, + .ops = &mptcp_v6_stream_ops, .flags = INET_PROTOSW_ICSK, }; @@ -133,6 +248,11 @@ int mptcpv6_init(void) if (err) return err; + mptcp_v6_stream_ops = inet6_stream_ops; + mptcp_v6_stream_ops.bind = mptcp_bind; + mptcp_v6_stream_ops.connect = mptcp_stream_connect; + mptcp_v6_stream_ops.poll = mptcp_poll; + err = inet6_register_protosw(&mptcp_v6_protosw); if (err) proto_unregister(&mptcp_v6_prot); diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h index c59cf8b220b0..543d4d5d8985 100644 --- a/net/mptcp/protocol.h +++ b/net/mptcp/protocol.h @@ -48,4 +48,30 @@ static inline struct mptcp_sock *mptcp_sk(const struct sock *sk) return (struct mptcp_sock *)sk; } +/* MPTCP subflow context */ +struct mptcp_subflow_context { + u32 request_mptcp : 1; /* send MP_CAPABLE */ + struct sock *tcp_sock; /* tcp sk backpointer */ + struct sock *conn; /* parent mptcp_sock */ + struct rcu_head rcu; +}; + +static inline struct mptcp_subflow_context * +mptcp_subflow_ctx(const struct sock *sk) +{ + struct inet_connection_sock *icsk = inet_csk(sk); + + /* Use RCU on icsk_ulp_data only for sock diag code */ + return (__force struct mptcp_subflow_context *)icsk->icsk_ulp_data; +} + +static inline struct sock * +mptcp_subflow_tcp_sock(const struct mptcp_subflow_context *subflow) +{ + return subflow->tcp_sock; +} + +void mptcp_subflow_init(void); +int mptcp_subflow_create_socket(struct sock *sk, struct socket **new_sock); + #endif /* __MPTCP_PROTOCOL_H */ diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c new file mode 100644 index 000000000000..bf8139353653 --- /dev/null +++ b/net/mptcp/subflow.c @@ -0,0 +1,119 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Multipath TCP + * + * Copyright (c) 2017 - 2019, Intel Corporation. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "protocol.h" + +int mptcp_subflow_create_socket(struct sock *sk, struct socket **new_sock) +{ + struct mptcp_subflow_context *subflow; + struct net *net = sock_net(sk); + struct socket *sf; + int err; + + err = sock_create_kern(net, PF_INET, SOCK_STREAM, IPPROTO_TCP, &sf); + if (err) + return err; + + lock_sock(sf->sk); + + /* kernel sockets do not by default acquire net ref, but TCP timer + * needs it. 
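+ * Take the netns reference and the sock_inuse accounting explicitly here, mirroring what sk_alloc() does for user-space sockets.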
+ */ + sf->sk->sk_net_refcnt = 1; + get_net(net); + this_cpu_add(*net->core.sock_inuse, 1); + err = tcp_set_ulp(sf->sk, "mptcp"); + release_sock(sf->sk); + + if (err) + return err; + + subflow = mptcp_subflow_ctx(sf->sk); + pr_debug("subflow=%p", subflow); + + *new_sock = sf; + subflow->conn = sk; + + return 0; +} + +static struct mptcp_subflow_context *subflow_create_ctx(struct sock *sk, + gfp_t priority) +{ + struct inet_connection_sock *icsk = inet_csk(sk); + struct mptcp_subflow_context *ctx; + + ctx = kzalloc(sizeof(*ctx), priority); + if (!ctx) + return NULL; + + rcu_assign_pointer(icsk->icsk_ulp_data, ctx); + + pr_debug("subflow=%p", ctx); + + ctx->tcp_sock = sk; + + return ctx; +} + +static int subflow_ulp_init(struct sock *sk) +{ + struct mptcp_subflow_context *ctx; + struct tcp_sock *tp = tcp_sk(sk); + int err = 0; + + /* disallow attaching ULP to a socket unless it has been + * created with sock_create_kern() + */ + if (!sk->sk_kern_sock) { + err = -EOPNOTSUPP; + goto out; + } + + ctx = subflow_create_ctx(sk, GFP_KERNEL); + if (!ctx) { + err = -ENOMEM; + goto out; + } + + pr_debug("subflow=%p, family=%d", ctx, sk->sk_family); + + tp->is_mptcp = 1; +out: + return err; +} + +static void subflow_ulp_release(struct sock *sk) +{ + struct mptcp_subflow_context *ctx = mptcp_subflow_ctx(sk); + + if (!ctx) + return; + + kfree_rcu(ctx, rcu); +} + +static struct tcp_ulp_ops subflow_ulp_ops __read_mostly = { + .name = "mptcp", + .owner = THIS_MODULE, + .init = subflow_ulp_init, + .release = subflow_ulp_release, +}; + +void mptcp_subflow_init(void) +{ + if (tcp_register_ulp(&subflow_ulp_ops) != 0) + panic("MPTCP: failed to register subflows to ULP\n"); +} -- cgit v1.2.3-71-gd317 From cec37a6e41aae7bf3df9a3da783380a4d9325fd8 Mon Sep 17 00:00:00 2001 From: Peter Krystad Date: Tue, 21 Jan 2020 16:56:18 -0800 Subject: mptcp: Handle MP_CAPABLE options for outgoing connections Add hooks to tcp_output.c to add MP_CAPABLE to an outgoing SYN request, to capture the MP_CAPABLE in the received SYN-ACK, to add MP_CAPABLE to the final ACK of the three-way handshake. Use the .sk_rx_dst_set() handler in the subflow proto to capture when the responding SYN-ACK is received and notify the MPTCP connection layer. Co-developed-by: Paolo Abeni Signed-off-by: Paolo Abeni Co-developed-by: Florian Westphal Signed-off-by: Florian Westphal Signed-off-by: Peter Krystad Signed-off-by: Christoph Paasch Signed-off-by: David S. 
Miller --- include/linux/tcp.h | 3 + include/net/mptcp.h | 57 +++++++++++ net/ipv4/tcp_input.c | 6 ++ net/ipv4/tcp_output.c | 44 +++++++++ net/ipv6/tcp_ipv6.c | 6 ++ net/mptcp/options.c | 100 +++++++++++++++++++ net/mptcp/protocol.c | 163 +++++++++++++++++++++++++----- net/mptcp/protocol.h | 40 +++++++- net/mptcp/subflow.c | 268 +++++++++++++++++++++++++++++++++++++++++++++++++- 9 files changed, 663 insertions(+), 24 deletions(-) (limited to 'include') diff --git a/include/linux/tcp.h b/include/linux/tcp.h index 877947475814..e9ee06d887fa 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -137,6 +137,9 @@ struct tcp_request_sock { const struct tcp_request_sock_ops *af_specific; u64 snt_synack; /* first SYNACK sent time */ bool tfo_listener; +#if IS_ENABLED(CONFIG_MPTCP) + bool is_mptcp; +#endif u32 txhash; u32 rcv_isn; u32 snt_isn; diff --git a/include/net/mptcp.h b/include/net/mptcp.h index 3daec2ceb3ff..eabc57c3fde4 100644 --- a/include/net/mptcp.h +++ b/include/net/mptcp.h @@ -39,8 +39,27 @@ struct mptcp_out_options { void mptcp_init(void); +static inline bool sk_is_mptcp(const struct sock *sk) +{ + return tcp_sk(sk)->is_mptcp; +} + +static inline bool rsk_is_mptcp(const struct request_sock *req) +{ + return tcp_rsk(req)->is_mptcp; +} + void mptcp_parse_option(const unsigned char *ptr, int opsize, struct tcp_options_received *opt_rx); +bool mptcp_syn_options(struct sock *sk, unsigned int *size, + struct mptcp_out_options *opts); +void mptcp_rcv_synsent(struct sock *sk); +bool mptcp_synack_options(const struct request_sock *req, unsigned int *size, + struct mptcp_out_options *opts); +bool mptcp_established_options(struct sock *sk, struct sk_buff *skb, + unsigned int *size, unsigned int remaining, + struct mptcp_out_options *opts); + void mptcp_write_options(__be32 *ptr, struct mptcp_out_options *opts); /* move the skb extension owership, with the assumption that 'to' is @@ -89,11 +108,47 @@ static inline void mptcp_init(void) { } +static inline bool sk_is_mptcp(const struct sock *sk) +{ + return false; +} + +static inline bool rsk_is_mptcp(const struct request_sock *req) +{ + return false; +} + static inline void mptcp_parse_option(const unsigned char *ptr, int opsize, struct tcp_options_received *opt_rx) { } +static inline bool mptcp_syn_options(struct sock *sk, unsigned int *size, + struct mptcp_out_options *opts) +{ + return false; +} + +static inline void mptcp_rcv_synsent(struct sock *sk) +{ +} + +static inline bool mptcp_synack_options(const struct request_sock *req, + unsigned int *size, + struct mptcp_out_options *opts) +{ + return false; +} + +static inline bool mptcp_established_options(struct sock *sk, + struct sk_buff *skb, + unsigned int *size, + unsigned int remaining, + struct mptcp_out_options *opts) +{ + return false; +} + static inline void mptcp_skb_ext_move(struct sk_buff *to, const struct sk_buff *from) { @@ -107,6 +162,8 @@ static inline bool mptcp_skb_can_collapse(const struct sk_buff *to, #endif /* CONFIG_MPTCP */ +void mptcp_handle_ipv6_mapped(struct sock *sk, bool mapped); + #if IS_ENABLED(CONFIG_MPTCP_IPV6) int mptcpv6_init(void); #elif IS_ENABLED(CONFIG_IPV6) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 3458ee13e6f0..5165c8de47ee 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -5978,6 +5978,9 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb, tcp_sync_mss(sk, icsk->icsk_pmtu_cookie); tcp_initialize_rcv_mss(sk); + if (sk_is_mptcp(sk)) + mptcp_rcv_synsent(sk); + /* Remember, tcp_poll() 
does not lock socket! * Change state from SYN-SENT only after copied_seq * is initialized. */ @@ -6600,6 +6603,9 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops, tcp_rsk(req)->af_specific = af_ops; tcp_rsk(req)->ts_off = 0; +#if IS_ENABLED(CONFIG_MPTCP) + tcp_rsk(req)->is_mptcp = 0; +#endif tcp_clear_options(&tmp_opt); tmp_opt.mss_clamp = af_ops->mss_clamp; diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 0f0984f39f67..5456076166da 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -597,6 +597,22 @@ static void smc_set_option_cond(const struct tcp_sock *tp, #endif } +static void mptcp_set_option_cond(const struct request_sock *req, + struct tcp_out_options *opts, + unsigned int *remaining) +{ + if (rsk_is_mptcp(req)) { + unsigned int size; + + if (mptcp_synack_options(req, &size, &opts->mptcp)) { + if (*remaining >= size) { + opts->options |= OPTION_MPTCP; + *remaining -= size; + } + } + } +} + /* Compute TCP options for SYN packets. This is not the final * network wire format yet. */ @@ -666,6 +682,15 @@ static unsigned int tcp_syn_options(struct sock *sk, struct sk_buff *skb, smc_set_option(tp, opts, &remaining); + if (sk_is_mptcp(sk)) { + unsigned int size; + + if (mptcp_syn_options(sk, &size, &opts->mptcp)) { + opts->options |= OPTION_MPTCP; + remaining -= size; + } + } + return MAX_TCP_OPTION_SPACE - remaining; } @@ -727,6 +752,8 @@ static unsigned int tcp_synack_options(const struct sock *sk, } } + mptcp_set_option_cond(req, opts, &remaining); + smc_set_option_cond(tcp_sk(sk), ireq, opts, &remaining); return MAX_TCP_OPTION_SPACE - remaining; @@ -764,6 +791,23 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb size += TCPOLEN_TSTAMP_ALIGNED; } + /* MPTCP options have precedence over SACK for the limited TCP + * option space because a MPTCP connection would be forced to + * fall back to regular TCP if a required multipath option is + * missing. SACK still gets a chance to use whatever space is + * left. 
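+ * (Everything must still fit within MAX_TCP_OPTION_SPACE, 40 bytes.)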
+ */ + if (sk_is_mptcp(sk)) { + unsigned int remaining = MAX_TCP_OPTION_SPACE - size; + unsigned int opt_size = 0; + + if (mptcp_established_options(sk, skb, &opt_size, remaining, + &opts->mptcp)) { + opts->options |= OPTION_MPTCP; + size += opt_size; + } + } + eff_sacks = tp->rx_opt.num_sacks + tp->rx_opt.dsack; if (unlikely(eff_sacks)) { const unsigned int remaining = MAX_TCP_OPTION_SPACE - size; diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index 60068ffde1d9..33a578a3eb3a 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -238,6 +238,8 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr, sin.sin_addr.s_addr = usin->sin6_addr.s6_addr32[3]; icsk->icsk_af_ops = &ipv6_mapped; + if (sk_is_mptcp(sk)) + mptcp_handle_ipv6_mapped(sk, true); sk->sk_backlog_rcv = tcp_v4_do_rcv; #ifdef CONFIG_TCP_MD5SIG tp->af_specific = &tcp_sock_ipv6_mapped_specific; @@ -248,6 +250,8 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr, if (err) { icsk->icsk_ext_hdr_len = exthdrlen; icsk->icsk_af_ops = &ipv6_specific; + if (sk_is_mptcp(sk)) + mptcp_handle_ipv6_mapped(sk, false); sk->sk_backlog_rcv = tcp_v6_do_rcv; #ifdef CONFIG_TCP_MD5SIG tp->af_specific = &tcp_sock_ipv6_specific; @@ -1203,6 +1207,8 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff * newnp->saddr = newsk->sk_v6_rcv_saddr; inet_csk(newsk)->icsk_af_ops = &ipv6_mapped; + if (sk_is_mptcp(newsk)) + mptcp_handle_ipv6_mapped(newsk, true); newsk->sk_backlog_rcv = tcp_v4_do_rcv; #ifdef CONFIG_TCP_MD5SIG newtp->af_specific = &tcp_sock_ipv6_mapped_specific; diff --git a/net/mptcp/options.c b/net/mptcp/options.c index b7a31c0e5283..52ff2301b68b 100644 --- a/net/mptcp/options.c +++ b/net/mptcp/options.c @@ -72,14 +72,114 @@ void mptcp_parse_option(const unsigned char *ptr, int opsize, } } +void mptcp_get_options(const struct sk_buff *skb, + struct tcp_options_received *opt_rx) +{ + const unsigned char *ptr; + const struct tcphdr *th = tcp_hdr(skb); + int length = (th->doff * 4) - sizeof(struct tcphdr); + + ptr = (const unsigned char *)(th + 1); + + while (length > 0) { + int opcode = *ptr++; + int opsize; + + switch (opcode) { + case TCPOPT_EOL: + return; + case TCPOPT_NOP: /* Ref: RFC 793 section 3.1 */ + length--; + continue; + default: + opsize = *ptr++; + if (opsize < 2) /* "silly options" */ + return; + if (opsize > length) + return; /* don't parse partial options */ + if (opcode == TCPOPT_MPTCP) + mptcp_parse_option(ptr, opsize, opt_rx); + ptr += opsize - 2; + length -= opsize; + } + } +} + +bool mptcp_syn_options(struct sock *sk, unsigned int *size, + struct mptcp_out_options *opts) +{ + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); + + if (subflow->request_mptcp) { + pr_debug("local_key=%llu", subflow->local_key); + opts->suboptions = OPTION_MPTCP_MPC_SYN; + opts->sndr_key = subflow->local_key; + *size = TCPOLEN_MPTCP_MPC_SYN; + return true; + } + return false; +} + +void mptcp_rcv_synsent(struct sock *sk) +{ + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); + struct tcp_sock *tp = tcp_sk(sk); + + pr_debug("subflow=%p", subflow); + if (subflow->request_mptcp && tp->rx_opt.mptcp.mp_capable) { + subflow->mp_capable = 1; + subflow->remote_key = tp->rx_opt.mptcp.sndr_key; + } else { + tcp_sk(sk)->is_mptcp = 0; + } +} + +bool mptcp_established_options(struct sock *sk, struct sk_buff *skb, + unsigned int *size, unsigned int remaining, + struct mptcp_out_options *opts) +{ + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); + + if 
(subflow->mp_capable && !subflow->fourth_ack) { + opts->suboptions = OPTION_MPTCP_MPC_ACK; + opts->sndr_key = subflow->local_key; + opts->rcvr_key = subflow->remote_key; + *size = TCPOLEN_MPTCP_MPC_ACK; + subflow->fourth_ack = 1; + pr_debug("subflow=%p, local_key=%llu, remote_key=%llu", + subflow, subflow->local_key, subflow->remote_key); + return true; + } + return false; +} + +bool mptcp_synack_options(const struct request_sock *req, unsigned int *size, + struct mptcp_out_options *opts) +{ + struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req); + + if (subflow_req->mp_capable) { + opts->suboptions = OPTION_MPTCP_MPC_SYNACK; + opts->sndr_key = subflow_req->local_key; + *size = TCPOLEN_MPTCP_MPC_SYNACK; + pr_debug("subflow_req=%p, local_key=%llu", + subflow_req, subflow_req->local_key); + return true; + } + return false; +} + void mptcp_write_options(__be32 *ptr, struct mptcp_out_options *opts) { if ((OPTION_MPTCP_MPC_SYN | + OPTION_MPTCP_MPC_SYNACK | OPTION_MPTCP_MPC_ACK) & opts->suboptions) { u8 len; if (OPTION_MPTCP_MPC_SYN & opts->suboptions) len = TCPOLEN_MPTCP_MPC_SYN; + else if (OPTION_MPTCP_MPC_SYNACK & opts->suboptions) + len = TCPOLEN_MPTCP_MPC_SYNACK; else len = TCPOLEN_MPTCP_MPC_ACK; diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index 294b03a0393a..bdd58da1e4f6 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -25,12 +25,28 @@ */ static struct socket *__mptcp_nmpc_socket(const struct mptcp_sock *msk) { - if (!msk->subflow) + if (!msk->subflow || mptcp_subflow_ctx(msk->subflow->sk)->fourth_ack) return NULL; return msk->subflow; } +/* if msk has a single subflow, and the mp_capable handshake is failed, + * return it. + * Otherwise returns NULL + */ +static struct socket *__mptcp_tcp_fallback(const struct mptcp_sock *msk) +{ + struct socket *ssock = __mptcp_nmpc_socket(msk); + + sock_owned_by_me((const struct sock *)msk); + + if (!ssock || sk_is_mptcp(ssock->sk)) + return NULL; + + return ssock; +} + static bool __mptcp_can_create_subflow(const struct mptcp_sock *msk) { return ((struct sock *)msk)->sk_state == TCP_CLOSE; @@ -56,6 +72,7 @@ static struct socket *__mptcp_socket_create(struct mptcp_sock *msk, int state) msk->subflow = ssock; subflow = mptcp_subflow_ctx(ssock->sk); + list_add(&subflow->node, &msk->conn_list); subflow->request_mptcp = 1; set_state: @@ -64,66 +81,169 @@ set_state: return ssock; } +static struct sock *mptcp_subflow_get(const struct mptcp_sock *msk) +{ + struct mptcp_subflow_context *subflow; + + sock_owned_by_me((const struct sock *)msk); + + mptcp_for_each_subflow(msk, subflow) { + return mptcp_subflow_tcp_sock(subflow); + } + + return NULL; +} + static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) { struct mptcp_sock *msk = mptcp_sk(sk); - struct socket *subflow = msk->subflow; + struct socket *ssock; + struct sock *ssk; + int ret; if (msg->msg_flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL)) return -EOPNOTSUPP; - return sock_sendmsg(subflow, msg); + lock_sock(sk); + ssock = __mptcp_tcp_fallback(msk); + if (ssock) { + pr_debug("fallback passthrough"); + ret = sock_sendmsg(ssock, msg); + release_sock(sk); + return ret; + } + + ssk = mptcp_subflow_get(msk); + if (!ssk) { + release_sock(sk); + return -ENOTCONN; + } + + ret = sock_sendmsg(ssk->sk_socket, msg); + + release_sock(sk); + return ret; } static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock, int flags, int *addr_len) { struct mptcp_sock *msk = mptcp_sk(sk); - struct socket *subflow = msk->subflow; + 
struct socket *ssock; + struct sock *ssk; + int copied = 0; if (msg->msg_flags & ~(MSG_WAITALL | MSG_DONTWAIT)) return -EOPNOTSUPP; - return sock_recvmsg(subflow, msg, flags); + lock_sock(sk); + ssock = __mptcp_tcp_fallback(msk); + if (ssock) { + pr_debug("fallback-read subflow=%p", + mptcp_subflow_ctx(ssock->sk)); + copied = sock_recvmsg(ssock, msg, flags); + release_sock(sk); + return copied; + } + + ssk = mptcp_subflow_get(msk); + if (!ssk) { + release_sock(sk); + return -ENOTCONN; + } + + copied = sock_recvmsg(ssk->sk_socket, msg, flags); + + release_sock(sk); + + return copied; +} + +/* subflow sockets can be either outgoing (connect) or incoming + * (accept). + * + * Outgoing subflows use in-kernel sockets. + * Incoming subflows do not have their own 'struct socket' allocated, + * so we need to use tcp_close() after detaching them from the mptcp + * parent socket. + */ +static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk, + struct mptcp_subflow_context *subflow, + long timeout) +{ + struct socket *sock = READ_ONCE(ssk->sk_socket); + + list_del(&subflow->node); + + if (sock && sock != sk->sk_socket) { + /* outgoing subflow */ + sock_release(sock); + } else { + /* incoming subflow */ + tcp_close(ssk, timeout); + } } static int mptcp_init_sock(struct sock *sk) { + struct mptcp_sock *msk = mptcp_sk(sk); + + INIT_LIST_HEAD(&msk->conn_list); + return 0; } static void mptcp_close(struct sock *sk, long timeout) { + struct mptcp_subflow_context *subflow, *tmp; struct mptcp_sock *msk = mptcp_sk(sk); - struct socket *ssock; inet_sk_state_store(sk, TCP_CLOSE); - ssock = __mptcp_nmpc_socket(msk); - if (ssock) { - pr_debug("subflow=%p", mptcp_subflow_ctx(ssock->sk)); - sock_release(ssock); + lock_sock(sk); + + list_for_each_entry_safe(subflow, tmp, &msk->conn_list, node) { + struct sock *ssk = mptcp_subflow_tcp_sock(subflow); + + __mptcp_close_ssk(sk, ssk, subflow, timeout); } - sock_orphan(sk); - sock_put(sk); + release_sock(sk); + sk_common_release(sk); } -static int mptcp_connect(struct sock *sk, struct sockaddr *saddr, int len) +static int mptcp_get_port(struct sock *sk, unsigned short snum) { struct mptcp_sock *msk = mptcp_sk(sk); - int err; + struct socket *ssock; - saddr->sa_family = AF_INET; + ssock = __mptcp_nmpc_socket(msk); + pr_debug("msk=%p, subflow=%p", msk, ssock); + if (WARN_ON_ONCE(!ssock)) + return -EINVAL; - pr_debug("msk=%p, subflow=%p", msk, - mptcp_subflow_ctx(msk->subflow->sk)); + return inet_csk_get_port(ssock->sk, snum); +} - err = kernel_connect(msk->subflow, saddr, len, 0); +void mptcp_finish_connect(struct sock *ssk) +{ + struct mptcp_subflow_context *subflow; + struct mptcp_sock *msk; + struct sock *sk; - sk->sk_state = TCP_ESTABLISHED; + subflow = mptcp_subflow_ctx(ssk); - return err; + if (!subflow->mp_capable) + return; + + sk = subflow->conn; + msk = mptcp_sk(sk); + + /* the socket is not connected yet, no msk/subflow ops can access/race + * accessing the field below + */ + WRITE_ONCE(msk->remote_key, subflow->remote_key); + WRITE_ONCE(msk->local_key, subflow->local_key); } static struct proto mptcp_prot = { @@ -132,13 +252,12 @@ static struct proto mptcp_prot = { .init = mptcp_init_sock, .close = mptcp_close, .accept = inet_csk_accept, - .connect = mptcp_connect, .shutdown = tcp_shutdown, .sendmsg = mptcp_sendmsg, .recvmsg = mptcp_recvmsg, .hash = inet_hash, .unhash = inet_unhash, - .get_port = inet_csk_get_port, + .get_port = mptcp_get_port, .obj_size = sizeof(struct mptcp_sock), .no_autobind = true, }; diff --git a/net/mptcp/protocol.h 
b/net/mptcp/protocol.h index 543d4d5d8985..bd66e7415515 100644 --- a/net/mptcp/protocol.h +++ b/net/mptcp/protocol.h @@ -40,19 +40,47 @@ struct mptcp_sock { /* inet_connection_sock must be the first member */ struct inet_connection_sock sk; + u64 local_key; + u64 remote_key; + struct list_head conn_list; struct socket *subflow; /* outgoing connect/listener/!mp_capable */ }; +#define mptcp_for_each_subflow(__msk, __subflow) \ + list_for_each_entry(__subflow, &((__msk)->conn_list), node) + static inline struct mptcp_sock *mptcp_sk(const struct sock *sk) { return (struct mptcp_sock *)sk; } +struct mptcp_subflow_request_sock { + struct tcp_request_sock sk; + u8 mp_capable : 1, + mp_join : 1, + backup : 1; + u64 local_key; + u64 remote_key; +}; + +static inline struct mptcp_subflow_request_sock * +mptcp_subflow_rsk(const struct request_sock *rsk) +{ + return (struct mptcp_subflow_request_sock *)rsk; +} + /* MPTCP subflow context */ struct mptcp_subflow_context { - u32 request_mptcp : 1; /* send MP_CAPABLE */ + struct list_head node;/* conn_list of subflows */ + u64 local_key; + u64 remote_key; + u32 request_mptcp : 1, /* send MP_CAPABLE */ + mp_capable : 1, /* remote is MPTCP capable */ + fourth_ack : 1, /* send initial DSS */ + conn_finished : 1; struct sock *tcp_sock; /* tcp sk backpointer */ struct sock *conn; /* parent mptcp_sock */ + const struct inet_connection_sock_af_ops *icsk_af_ops; struct rcu_head rcu; }; @@ -74,4 +102,14 @@ mptcp_subflow_tcp_sock(const struct mptcp_subflow_context *subflow) void mptcp_subflow_init(void); int mptcp_subflow_create_socket(struct sock *sk, struct socket **new_sock); +extern const struct inet_connection_sock_af_ops ipv4_specific; +#if IS_ENABLED(CONFIG_MPTCP_IPV6) +extern const struct inet_connection_sock_af_ops ipv6_specific; +#endif + +void mptcp_get_options(const struct sk_buff *skb, + struct tcp_options_received *opt_rx); + +void mptcp_finish_connect(struct sock *sk); + #endif /* __MPTCP_PROTOCOL_H */ diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c index bf8139353653..df3192305967 100644 --- a/net/mptcp/subflow.c +++ b/net/mptcp/subflow.c @@ -12,9 +12,188 @@ #include #include #include +#if IS_ENABLED(CONFIG_MPTCP_IPV6) +#include +#endif #include #include "protocol.h" +static void subflow_init_req(struct request_sock *req, + const struct sock *sk_listener, + struct sk_buff *skb) +{ + struct mptcp_subflow_context *listener = mptcp_subflow_ctx(sk_listener); + struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req); + struct tcp_options_received rx_opt; + + pr_debug("subflow_req=%p, listener=%p", subflow_req, listener); + + memset(&rx_opt.mptcp, 0, sizeof(rx_opt.mptcp)); + mptcp_get_options(skb, &rx_opt); + + subflow_req->mp_capable = 0; + +#ifdef CONFIG_TCP_MD5SIG + /* no MPTCP if MD5SIG is enabled on this socket or we may run out of + * TCP option space. 
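+ * (An aligned MD5 option alone takes 20 bytes; together with a 12 byte MP_CAPABLE and 12 bytes of timestamps that would exceed the 40 byte option space.)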
+ */ + if (rcu_access_pointer(tcp_sk(sk_listener)->md5sig_info)) + return; +#endif + + if (rx_opt.mptcp.mp_capable && listener->request_mptcp) { + subflow_req->mp_capable = 1; + subflow_req->remote_key = rx_opt.mptcp.sndr_key; + } +} + +static void subflow_v4_init_req(struct request_sock *req, + const struct sock *sk_listener, + struct sk_buff *skb) +{ + tcp_rsk(req)->is_mptcp = 1; + + tcp_request_sock_ipv4_ops.init_req(req, sk_listener, skb); + + subflow_init_req(req, sk_listener, skb); +} + +#if IS_ENABLED(CONFIG_MPTCP_IPV6) +static void subflow_v6_init_req(struct request_sock *req, + const struct sock *sk_listener, + struct sk_buff *skb) +{ + tcp_rsk(req)->is_mptcp = 1; + + tcp_request_sock_ipv6_ops.init_req(req, sk_listener, skb); + + subflow_init_req(req, sk_listener, skb); +} +#endif + +static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb) +{ + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); + + subflow->icsk_af_ops->sk_rx_dst_set(sk, skb); + + if (subflow->conn && !subflow->conn_finished) { + pr_debug("subflow=%p, remote_key=%llu", mptcp_subflow_ctx(sk), + subflow->remote_key); + mptcp_finish_connect(sk); + subflow->conn_finished = 1; + } +} + +static struct request_sock_ops subflow_request_sock_ops; +static struct tcp_request_sock_ops subflow_request_sock_ipv4_ops; + +static int subflow_v4_conn_request(struct sock *sk, struct sk_buff *skb) +{ + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); + + pr_debug("subflow=%p", subflow); + + /* Never answer to SYNs sent to broadcast or multicast */ + if (skb_rtable(skb)->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST)) + goto drop; + + return tcp_conn_request(&subflow_request_sock_ops, + &subflow_request_sock_ipv4_ops, + sk, skb); +drop: + tcp_listendrop(sk); + return 0; +} + +#if IS_ENABLED(CONFIG_MPTCP_IPV6) +static struct tcp_request_sock_ops subflow_request_sock_ipv6_ops; +static struct inet_connection_sock_af_ops subflow_v6_specific; +static struct inet_connection_sock_af_ops subflow_v6m_specific; + +static int subflow_v6_conn_request(struct sock *sk, struct sk_buff *skb) +{ + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); + + pr_debug("subflow=%p", subflow); + + if (skb->protocol == htons(ETH_P_IP)) + return subflow_v4_conn_request(sk, skb); + + if (!ipv6_unicast_destination(skb)) + goto drop; + + return tcp_conn_request(&subflow_request_sock_ops, + &subflow_request_sock_ipv6_ops, sk, skb); + +drop: + tcp_listendrop(sk); + return 0; /* don't send reset */ +} +#endif + +static struct sock *subflow_syn_recv_sock(const struct sock *sk, + struct sk_buff *skb, + struct request_sock *req, + struct dst_entry *dst, + struct request_sock *req_unhash, + bool *own_req) +{ + struct mptcp_subflow_context *listener = mptcp_subflow_ctx(sk); + struct sock *child; + + pr_debug("listener=%p, req=%p, conn=%p", listener, req, listener->conn); + + /* if the sk is MP_CAPABLE, we already received the client key */ + + child = listener->icsk_af_ops->syn_recv_sock(sk, skb, req, dst, + req_unhash, own_req); + + if (child && *own_req) { + if (!mptcp_subflow_ctx(child)) { + pr_debug("Closing child socket"); + inet_sk_set_state(child, TCP_CLOSE); + sock_set_flag(child, SOCK_DEAD); + inet_csk_destroy_sock(child); + child = NULL; + } + } + + return child; +} + +static struct inet_connection_sock_af_ops subflow_specific; + +static struct inet_connection_sock_af_ops * +subflow_default_af_ops(struct sock *sk) +{ +#if IS_ENABLED(CONFIG_MPTCP_IPV6) + if (sk->sk_family == AF_INET6) + return 
&subflow_v6_specific; +#endif + return &subflow_specific; +} + +void mptcp_handle_ipv6_mapped(struct sock *sk, bool mapped) +{ +#if IS_ENABLED(CONFIG_MPTCP_IPV6) + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); + struct inet_connection_sock *icsk = inet_csk(sk); + struct inet_connection_sock_af_ops *target; + + target = mapped ? &subflow_v6m_specific : subflow_default_af_ops(sk); + + pr_debug("subflow=%p family=%d ops=%p target=%p mapped=%d", + subflow, sk->sk_family, icsk->icsk_af_ops, target, mapped); + + if (likely(icsk->icsk_af_ops == target)) + return; + + subflow->icsk_af_ops = icsk->icsk_af_ops; + icsk->icsk_af_ops = target; +#endif +} + int mptcp_subflow_create_socket(struct sock *sk, struct socket **new_sock) { struct mptcp_subflow_context *subflow; @@ -22,7 +201,8 @@ int mptcp_subflow_create_socket(struct sock *sk, struct socket **new_sock) struct socket *sf; int err; - err = sock_create_kern(net, PF_INET, SOCK_STREAM, IPPROTO_TCP, &sf); + err = sock_create_kern(net, sk->sk_family, SOCK_STREAM, IPPROTO_TCP, + &sf); if (err) return err; @@ -60,6 +240,7 @@ static struct mptcp_subflow_context *subflow_create_ctx(struct sock *sk, return NULL; rcu_assign_pointer(icsk->icsk_ulp_data, ctx); + INIT_LIST_HEAD(&ctx->node); pr_debug("subflow=%p", ctx); @@ -70,6 +251,7 @@ static struct mptcp_subflow_context *subflow_create_ctx(struct sock *sk, static int subflow_ulp_init(struct sock *sk) { + struct inet_connection_sock *icsk = inet_csk(sk); struct mptcp_subflow_context *ctx; struct tcp_sock *tp = tcp_sk(sk); int err = 0; @@ -91,6 +273,8 @@ static int subflow_ulp_init(struct sock *sk) pr_debug("subflow=%p, family=%d", ctx, sk->sk_family); tp->is_mptcp = 1; + ctx->icsk_af_ops = icsk->icsk_af_ops; + icsk->icsk_af_ops = subflow_default_af_ops(sk); out: return err; } @@ -105,15 +289,97 @@ static void subflow_ulp_release(struct sock *sk) kfree_rcu(ctx, rcu); } +static void subflow_ulp_fallback(struct sock *sk) +{ + struct inet_connection_sock *icsk = inet_csk(sk); + + icsk->icsk_ulp_ops = NULL; + rcu_assign_pointer(icsk->icsk_ulp_data, NULL); + tcp_sk(sk)->is_mptcp = 0; +} + +static void subflow_ulp_clone(const struct request_sock *req, + struct sock *newsk, + const gfp_t priority) +{ + struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req); + struct mptcp_subflow_context *old_ctx = mptcp_subflow_ctx(newsk); + struct mptcp_subflow_context *new_ctx; + + if (!subflow_req->mp_capable) { + subflow_ulp_fallback(newsk); + return; + } + + new_ctx = subflow_create_ctx(newsk, priority); + if (new_ctx == NULL) { + subflow_ulp_fallback(newsk); + return; + } + + new_ctx->conn_finished = 1; + new_ctx->icsk_af_ops = old_ctx->icsk_af_ops; + new_ctx->mp_capable = 1; + new_ctx->fourth_ack = 1; + new_ctx->remote_key = subflow_req->remote_key; + new_ctx->local_key = subflow_req->local_key; +} + static struct tcp_ulp_ops subflow_ulp_ops __read_mostly = { .name = "mptcp", .owner = THIS_MODULE, .init = subflow_ulp_init, .release = subflow_ulp_release, + .clone = subflow_ulp_clone, }; +static int subflow_ops_init(struct request_sock_ops *subflow_ops) +{ + subflow_ops->obj_size = sizeof(struct mptcp_subflow_request_sock); + subflow_ops->slab_name = "request_sock_subflow"; + + subflow_ops->slab = kmem_cache_create(subflow_ops->slab_name, + subflow_ops->obj_size, 0, + SLAB_ACCOUNT | + SLAB_TYPESAFE_BY_RCU, + NULL); + if (!subflow_ops->slab) + return -ENOMEM; + + return 0; +} + void mptcp_subflow_init(void) { + subflow_request_sock_ops = tcp_request_sock_ops; + if 
(subflow_ops_init(&subflow_request_sock_ops) != 0) + panic("MPTCP: failed to init subflow request sock ops\n"); + + subflow_request_sock_ipv4_ops = tcp_request_sock_ipv4_ops; + subflow_request_sock_ipv4_ops.init_req = subflow_v4_init_req; + + subflow_specific = ipv4_specific; + subflow_specific.conn_request = subflow_v4_conn_request; + subflow_specific.syn_recv_sock = subflow_syn_recv_sock; + subflow_specific.sk_rx_dst_set = subflow_finish_connect; + +#if IS_ENABLED(CONFIG_MPTCP_IPV6) + subflow_request_sock_ipv6_ops = tcp_request_sock_ipv6_ops; + subflow_request_sock_ipv6_ops.init_req = subflow_v6_init_req; + + subflow_v6_specific = ipv6_specific; + subflow_v6_specific.conn_request = subflow_v6_conn_request; + subflow_v6_specific.syn_recv_sock = subflow_syn_recv_sock; + subflow_v6_specific.sk_rx_dst_set = subflow_finish_connect; + + subflow_v6m_specific = subflow_v6_specific; + subflow_v6m_specific.queue_xmit = ipv4_specific.queue_xmit; + subflow_v6m_specific.send_check = ipv4_specific.send_check; + subflow_v6m_specific.net_header_len = ipv4_specific.net_header_len; + subflow_v6m_specific.mtu_reduced = ipv4_specific.mtu_reduced; + subflow_v6m_specific.net_frag_header_len = 0; +#endif + if (tcp_register_ulp(&subflow_ulp_ops) != 0) panic("MPTCP: failed to register subflows to ULP\n"); } -- cgit v1.2.3-71-gd317 From 6d0060f600adfddaa43fefb96b6b12643331961e Mon Sep 17 00:00:00 2001 From: Mat Martineau Date: Tue, 21 Jan 2020 16:56:23 -0800 Subject: mptcp: Write MPTCP DSS headers to outgoing data packets Per-packet metadata required to write the MPTCP DSS option is written to the skb_ext area. One write to the socket may contain more than one packet of data, which is copied to page fragments and mapped in to MPTCP DSS segments with size determined by the available page fragments and the maximum mapping length allowed by the MPTCP specification. If do_tcp_sendpages() splits a DSS segment in to multiple skbs, that's ok - the later skbs can either have duplicated DSS mapping information or none at all, and the receiver can handle that. The current implementation uses the subflow frag cache and tcp sendpages to avoid excessive code duplication. More work is required to ensure that it works correctly under memory pressure and to support MPTCP-level retransmissions. The MPTCP DSS checksum is not yet implemented. Co-developed-by: Paolo Abeni Signed-off-by: Paolo Abeni Co-developed-by: Peter Krystad Signed-off-by: Peter Krystad Co-developed-by: Florian Westphal Signed-off-by: Florian Westphal Signed-off-by: Mat Martineau Signed-off-by: Christoph Paasch Signed-off-by: David S. 
Miller --- include/net/mptcp.h | 1 + net/mptcp/options.c | 155 +++++++++++++++++++++++++++++++++++++++++++++++++-- net/mptcp/protocol.c | 116 +++++++++++++++++++++++++++++++++++++- net/mptcp/protocol.h | 20 +++++++ 4 files changed, 286 insertions(+), 6 deletions(-) (limited to 'include') diff --git a/include/net/mptcp.h b/include/net/mptcp.h index eabc57c3fde4..06dcc665135e 100644 --- a/include/net/mptcp.h +++ b/include/net/mptcp.h @@ -32,6 +32,7 @@ struct mptcp_out_options { u16 suboptions; u64 sndr_key; u64 rcvr_key; + struct mptcp_ext ext_copy; #endif }; diff --git a/net/mptcp/options.c b/net/mptcp/options.c index 52ff2301b68b..5ea127bc7f01 100644 --- a/net/mptcp/options.c +++ b/net/mptcp/options.c @@ -134,13 +134,13 @@ void mptcp_rcv_synsent(struct sock *sk) } } -bool mptcp_established_options(struct sock *sk, struct sk_buff *skb, - unsigned int *size, unsigned int remaining, - struct mptcp_out_options *opts) +static bool mptcp_established_options_mp(struct sock *sk, unsigned int *size, + unsigned int remaining, + struct mptcp_out_options *opts) { struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); - if (subflow->mp_capable && !subflow->fourth_ack) { + if (!subflow->fourth_ack) { opts->suboptions = OPTION_MPTCP_MPC_ACK; opts->sndr_key = subflow->local_key; opts->rcvr_key = subflow->remote_key; @@ -153,6 +153,112 @@ bool mptcp_established_options(struct sock *sk, struct sk_buff *skb, return false; } +static void mptcp_write_data_fin(struct mptcp_subflow_context *subflow, + struct mptcp_ext *ext) +{ + ext->data_fin = 1; + + if (!ext->use_map) { + /* RFC6824 requires a DSS mapping with specific values + * if DATA_FIN is set but no data payload is mapped + */ + ext->use_map = 1; + ext->dsn64 = 1; + ext->data_seq = mptcp_sk(subflow->conn)->write_seq; + ext->subflow_seq = 0; + ext->data_len = 1; + } else { + /* If there's an existing DSS mapping, DATA_FIN consumes + * 1 additional byte of mapping space. 
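+ * (RFC 6824: the DATA_FIN itself occupies one octet of data sequence space, hence the extra byte.)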
+ */ + ext->data_len++; + } +} + +static bool mptcp_established_options_dss(struct sock *sk, struct sk_buff *skb, + unsigned int *size, + unsigned int remaining, + struct mptcp_out_options *opts) +{ + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); + unsigned int dss_size = 0; + struct mptcp_ext *mpext; + struct mptcp_sock *msk; + unsigned int ack_size; + u8 tcp_fin; + + if (skb) { + mpext = mptcp_get_ext(skb); + tcp_fin = TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN; + } else { + mpext = NULL; + tcp_fin = 0; + } + + if (!skb || (mpext && mpext->use_map) || tcp_fin) { + unsigned int map_size; + + map_size = TCPOLEN_MPTCP_DSS_BASE + TCPOLEN_MPTCP_DSS_MAP64; + + remaining -= map_size; + dss_size = map_size; + if (mpext) + opts->ext_copy = *mpext; + + if (skb && tcp_fin && + subflow->conn->sk_state != TCP_ESTABLISHED) + mptcp_write_data_fin(subflow, &opts->ext_copy); + } + + ack_size = TCPOLEN_MPTCP_DSS_ACK64; + + /* Add kind/length/subtype/flag overhead if mapping is not populated */ + if (dss_size == 0) + ack_size += TCPOLEN_MPTCP_DSS_BASE; + + dss_size += ack_size; + + msk = mptcp_sk(mptcp_subflow_ctx(sk)->conn); + if (msk) { + opts->ext_copy.data_ack = msk->ack_seq; + } else { + mptcp_crypto_key_sha(mptcp_subflow_ctx(sk)->remote_key, + NULL, &opts->ext_copy.data_ack); + opts->ext_copy.data_ack++; + } + + opts->ext_copy.ack64 = 1; + opts->ext_copy.use_ack = 1; + + *size = ALIGN(dss_size, 4); + return true; +} + +bool mptcp_established_options(struct sock *sk, struct sk_buff *skb, + unsigned int *size, unsigned int remaining, + struct mptcp_out_options *opts) +{ + unsigned int opt_size = 0; + bool ret = false; + + if (mptcp_established_options_mp(sk, &opt_size, remaining, opts)) + ret = true; + else if (mptcp_established_options_dss(sk, skb, &opt_size, remaining, + opts)) + ret = true; + + /* we reserved enough space for the above options, and exceeding the + * TCP option space would be fatal + */ + if (WARN_ON_ONCE(opt_size > remaining)) + return false; + + *size += opt_size; + remaining -= opt_size; + + return ret; +} + bool mptcp_synack_options(const struct request_sock *req, unsigned int *size, struct mptcp_out_options *opts) { @@ -194,4 +300,45 @@ void mptcp_write_options(__be32 *ptr, struct mptcp_out_options *opts) ptr += 2; } } + + if (opts->ext_copy.use_ack || opts->ext_copy.use_map) { + struct mptcp_ext *mpext = &opts->ext_copy; + u8 len = TCPOLEN_MPTCP_DSS_BASE; + u8 flags = 0; + + if (mpext->use_ack) { + len += TCPOLEN_MPTCP_DSS_ACK64; + flags = MPTCP_DSS_HAS_ACK | MPTCP_DSS_ACK64; + } + + if (mpext->use_map) { + len += TCPOLEN_MPTCP_DSS_MAP64; + + /* Use only 64-bit mapping flags for now, add + * support for optional 32-bit mappings later. 
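+ * (A 64 bit mapping takes TCPOLEN_MPTCP_DSS_MAP64, 14 bytes: 8 byte data_seq, 4 byte subflow_seq, 2 byte data_len.)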
+ */ + flags |= MPTCP_DSS_HAS_MAP | MPTCP_DSS_DSN64; + if (mpext->data_fin) + flags |= MPTCP_DSS_DATA_FIN; + } + + *ptr++ = htonl((TCPOPT_MPTCP << 24) | + (len << 16) | + (MPTCPOPT_DSS << 12) | + (flags)); + + if (mpext->use_ack) { + put_unaligned_be64(mpext->data_ack, ptr); + ptr += 2; + } + + if (mpext->use_map) { + put_unaligned_be64(mpext->data_seq, ptr); + ptr += 2; + put_unaligned_be32(mpext->subflow_seq, ptr); + ptr += 1; + put_unaligned_be32(mpext->data_len << 16 | + TCPOPT_NOP << 8 | TCPOPT_NOP, ptr); + } + } } diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index 875ca48de4b2..8cf49193b1c0 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -97,12 +97,93 @@ static struct sock *mptcp_subflow_get(const struct mptcp_sock *msk) return NULL; } +static bool mptcp_ext_cache_refill(struct mptcp_sock *msk) +{ + if (!msk->cached_ext) + msk->cached_ext = __skb_ext_alloc(); + + return !!msk->cached_ext; +} + +static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk, + struct msghdr *msg, long *timeo) +{ + int mss_now = 0, size_goal = 0, ret = 0; + struct mptcp_sock *msk = mptcp_sk(sk); + struct mptcp_ext *mpext = NULL; + struct page_frag *pfrag; + struct sk_buff *skb; + size_t psize; + + /* use the mptcp page cache so that we can easily move the data + * from one substream to another, but do per subflow memory accounting + */ + pfrag = sk_page_frag(sk); + while (!sk_page_frag_refill(ssk, pfrag) || + !mptcp_ext_cache_refill(msk)) { + ret = sk_stream_wait_memory(ssk, timeo); + if (ret) + return ret; + } + + /* compute copy limit */ + mss_now = tcp_send_mss(ssk, &size_goal, msg->msg_flags); + psize = min_t(int, pfrag->size - pfrag->offset, size_goal); + + pr_debug("left=%zu", msg_data_left(msg)); + psize = copy_page_from_iter(pfrag->page, pfrag->offset, + min_t(size_t, msg_data_left(msg), psize), + &msg->msg_iter); + pr_debug("left=%zu", msg_data_left(msg)); + if (!psize) + return -EINVAL; + + /* Mark the end of the previous write so the beginning of the + * next write (with its own mptcp skb extension data) is not + * collapsed. 
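+ * (The eor flag makes tcp_skb_can_collapse_to() refuse the tail skb, so the next write starts a fresh skb with its own extension.)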
+ */ + skb = tcp_write_queue_tail(ssk); + if (skb) + TCP_SKB_CB(skb)->eor = 1; + + ret = do_tcp_sendpages(ssk, pfrag->page, pfrag->offset, psize, + msg->msg_flags | MSG_SENDPAGE_NOTLAST); + if (ret <= 0) + return ret; + if (unlikely(ret < psize)) + iov_iter_revert(&msg->msg_iter, psize - ret); + + skb = tcp_write_queue_tail(ssk); + mpext = __skb_ext_set(skb, SKB_EXT_MPTCP, msk->cached_ext); + msk->cached_ext = NULL; + + memset(mpext, 0, sizeof(*mpext)); + mpext->data_seq = msk->write_seq; + mpext->subflow_seq = mptcp_subflow_ctx(ssk)->rel_write_seq; + mpext->data_len = ret; + mpext->use_map = 1; + mpext->dsn64 = 1; + + pr_debug("data_seq=%llu subflow_seq=%u data_len=%u dsn64=%d", + mpext->data_seq, mpext->subflow_seq, mpext->data_len, + mpext->dsn64); + + pfrag->offset += ret; + msk->write_seq += ret; + mptcp_subflow_ctx(ssk)->rel_write_seq += ret; + + tcp_push(ssk, msg->msg_flags, mss_now, tcp_sk(ssk)->nonagle, size_goal); + return ret; +} + static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) { struct mptcp_sock *msk = mptcp_sk(sk); struct socket *ssock; + size_t copied = 0; struct sock *ssk; - int ret; + int ret = 0; + long timeo; if (msg->msg_flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL)) return -EOPNOTSUPP; @@ -116,14 +197,29 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) return ret; } + timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT); + ssk = mptcp_subflow_get(msk); if (!ssk) { release_sock(sk); return -ENOTCONN; } - ret = sock_sendmsg(ssk->sk_socket, msg); + pr_debug("conn_list->subflow=%p", ssk); + lock_sock(ssk); + while (msg_data_left(msg)) { + ret = mptcp_sendmsg_frag(sk, ssk, msg, &timeo); + if (ret < 0) + break; + + copied += ret; + } + + if (copied > 0) + ret = copied; + + release_sock(ssk); release_sock(sk); return ret; } @@ -235,6 +331,8 @@ static void mptcp_close(struct sock *sk, long timeout) __mptcp_close_ssk(sk, ssk, subflow, timeout); } + if (msk->cached_ext) + __skb_ext_put(msk->cached_ext); release_sock(sk); sk_common_release(sk); } @@ -286,6 +384,7 @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err, struct mptcp_subflow_context *subflow; struct sock *new_mptcp_sock; struct sock *ssk = newsk; + u64 ack_seq; subflow = mptcp_subflow_ctx(newsk); lock_sock(sk); @@ -310,6 +409,12 @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err, msk->subflow = NULL; mptcp_token_update_accept(newsk, new_mptcp_sock); + + mptcp_crypto_key_sha(msk->remote_key, NULL, &ack_seq); + msk->write_seq = subflow->idsn + 1; + ack_seq++; + msk->ack_seq = ack_seq; + subflow->rel_write_seq = 1; newsk = new_mptcp_sock; mptcp_copy_inaddrs(newsk, ssk); list_add(&subflow->node, &msk->conn_list); @@ -404,6 +509,7 @@ void mptcp_finish_connect(struct sock *ssk) struct mptcp_subflow_context *subflow; struct mptcp_sock *msk; struct sock *sk; + u64 ack_seq; subflow = mptcp_subflow_ctx(ssk); @@ -413,12 +519,18 @@ void mptcp_finish_connect(struct sock *ssk) sk = subflow->conn; msk = mptcp_sk(sk); + mptcp_crypto_key_sha(subflow->remote_key, NULL, &ack_seq); + ack_seq++; + subflow->rel_write_seq = 1; + /* the socket is not connected yet, no msk/subflow ops can access/race * accessing the field below */ WRITE_ONCE(msk->remote_key, subflow->remote_key); WRITE_ONCE(msk->local_key, subflow->local_key); WRITE_ONCE(msk->token, subflow->token); + WRITE_ONCE(msk->write_seq, subflow->idsn + 1); + WRITE_ONCE(msk->ack_seq, ack_seq); } static void mptcp_sock_graft(struct sock *sk, struct socket *parent) diff --git 
a/net/mptcp/protocol.h b/net/mptcp/protocol.h index 5f43fa0275c0..384ec4804198 100644 --- a/net/mptcp/protocol.h +++ b/net/mptcp/protocol.h @@ -32,6 +32,10 @@ #define TCPOLEN_MPTCP_MPC_SYN 12 #define TCPOLEN_MPTCP_MPC_SYNACK 12 #define TCPOLEN_MPTCP_MPC_ACK 20 +#define TCPOLEN_MPTCP_DSS_BASE 4 +#define TCPOLEN_MPTCP_DSS_ACK64 8 +#define TCPOLEN_MPTCP_DSS_MAP64 14 +#define TCPOLEN_MPTCP_DSS_CHECKSUM 2 /* MPTCP MP_CAPABLE flags */ #define MPTCP_VERSION_MASK (0x0F) @@ -40,14 +44,24 @@ #define MPTCP_CAP_HMAC_SHA1 BIT(0) #define MPTCP_CAP_FLAG_MASK (0x3F) +/* MPTCP DSS flags */ +#define MPTCP_DSS_DATA_FIN BIT(4) +#define MPTCP_DSS_DSN64 BIT(3) +#define MPTCP_DSS_HAS_MAP BIT(2) +#define MPTCP_DSS_ACK64 BIT(1) +#define MPTCP_DSS_HAS_ACK BIT(0) + /* MPTCP connection sock */ struct mptcp_sock { /* inet_connection_sock must be the first member */ struct inet_connection_sock sk; u64 local_key; u64 remote_key; + u64 write_seq; + u64 ack_seq; u32 token; struct list_head conn_list; + struct skb_ext *cached_ext; /* for the next sendmsg */ struct socket *subflow; /* outgoing connect/listener/!mp_capable */ }; @@ -83,6 +97,7 @@ struct mptcp_subflow_context { u64 remote_key; u64 idsn; u32 token; + u32 rel_write_seq; u32 request_mptcp : 1, /* send MP_CAPABLE */ mp_capable : 1, /* remote is MPTCP capable */ fourth_ack : 1, /* send initial DSS */ @@ -144,4 +159,9 @@ static inline void mptcp_crypto_key_gen_sha(u64 *key, u32 *token, u64 *idsn) void mptcp_crypto_hmac_sha(u64 key1, u64 key2, u32 nonce1, u32 nonce2, u32 *hash_out); +static inline struct mptcp_ext *mptcp_get_ext(struct sk_buff *skb) +{ + return (struct mptcp_ext *)skb_ext_find(skb, SKB_EXT_MPTCP); +} + #endif /* __MPTCP_PROTOCOL_H */ -- cgit v1.2.3-71-gd317 From 648ef4b88673dadb8463bf0d4b10fbf33d55def8 Mon Sep 17 00:00:00 2001 From: Mat Martineau Date: Tue, 21 Jan 2020 16:56:24 -0800 Subject: mptcp: Implement MPTCP receive path Parses incoming DSS options and populates outgoing MPTCP ACK fields. MPTCP fields are parsed from the TCP option header and placed in an skb extension, allowing the upper MPTCP layer to access MPTCP options after the skb has gone through the TCP stack. The subflow implements its own data_ready() ops, which ensures that the pending data is in sequence - according to MPTCP seq number - dropping out-of-seq skbs. The DATA_READY bit flag is set if this is the case. This allows the MPTCP socket layer to determine if more data is available without having to consult the individual subflows. It additionally validates the current mapping and propagates EoF events to the connection socket. Co-developed-by: Paolo Abeni Signed-off-by: Paolo Abeni Co-developed-by: Peter Krystad Signed-off-by: Peter Krystad Co-developed-by: Davide Caratti Signed-off-by: Davide Caratti Co-developed-by: Matthieu Baerts Signed-off-by: Matthieu Baerts Co-developed-by: Florian Westphal Signed-off-by: Florian Westphal Signed-off-by: Mat Martineau Signed-off-by: Christoph Paasch Signed-off-by: David S. 
Miller --- include/linux/tcp.h | 10 ++ include/net/mptcp.h | 8 ++ net/ipv4/tcp_input.c | 8 +- net/mptcp/options.c | 107 ++++++++++++++ net/mptcp/protocol.c | 34 +++++ net/mptcp/protocol.h | 64 ++++++++- net/mptcp/subflow.c | 383 ++++++++++++++++++++++++++++++++++++++++++++++++++- 7 files changed, 609 insertions(+), 5 deletions(-) (limited to 'include') diff --git a/include/linux/tcp.h b/include/linux/tcp.h index e9ee06d887fa..0d00dad4b85d 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -82,9 +82,19 @@ struct tcp_sack_block { struct mptcp_options_received { u64 sndr_key; u64 rcvr_key; + u64 data_ack; + u64 data_seq; + u32 subflow_seq; + u16 data_len; u8 mp_capable : 1, mp_join : 1, dss : 1; + u8 use_map:1, + dsn64:1, + data_fin:1, + use_ack:1, + ack64:1, + __unused:3; }; #endif diff --git a/include/net/mptcp.h b/include/net/mptcp.h index 06dcc665135e..8619c1fca741 100644 --- a/include/net/mptcp.h +++ b/include/net/mptcp.h @@ -60,6 +60,8 @@ bool mptcp_synack_options(const struct request_sock *req, unsigned int *size, bool mptcp_established_options(struct sock *sk, struct sk_buff *skb, unsigned int *size, unsigned int remaining, struct mptcp_out_options *opts); +void mptcp_incoming_options(struct sock *sk, struct sk_buff *skb, + struct tcp_options_received *opt_rx); void mptcp_write_options(__be32 *ptr, struct mptcp_out_options *opts); @@ -150,6 +152,12 @@ static inline bool mptcp_established_options(struct sock *sk, return false; } +static inline void mptcp_incoming_options(struct sock *sk, + struct sk_buff *skb, + struct tcp_options_received *opt_rx) +{ +} + static inline void mptcp_skb_ext_move(struct sk_buff *to, const struct sk_buff *from) { diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 5165c8de47ee..28d31f2c1422 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -4770,6 +4770,9 @@ static void tcp_data_queue(struct sock *sk, struct sk_buff *skb) bool fragstolen; int eaten; + if (sk_is_mptcp(sk)) + mptcp_incoming_options(sk, skb, &tp->rx_opt); + if (TCP_SKB_CB(skb)->seq == TCP_SKB_CB(skb)->end_seq) { __kfree_skb(skb); return; @@ -6346,8 +6349,11 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb) case TCP_CLOSE_WAIT: case TCP_CLOSING: case TCP_LAST_ACK: - if (!before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) + if (!before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) { + if (sk_is_mptcp(sk)) + mptcp_incoming_options(sk, skb, &tp->rx_opt); break; + } /* fall through */ case TCP_FIN_WAIT1: case TCP_FIN_WAIT2: diff --git a/net/mptcp/options.c b/net/mptcp/options.c index 5ea127bc7f01..1fa8496f3551 100644 --- a/net/mptcp/options.c +++ b/net/mptcp/options.c @@ -14,6 +14,7 @@ void mptcp_parse_option(const unsigned char *ptr, int opsize, { struct mptcp_options_received *mp_opt = &opt_rx->mptcp; u8 subtype = *ptr >> 4; + int expected_opsize; u8 version; u8 flags; @@ -64,7 +65,79 @@ void mptcp_parse_option(const unsigned char *ptr, int opsize, case MPTCPOPT_DSS: pr_debug("DSS"); + ptr++; + + flags = (*ptr++) & MPTCP_DSS_FLAG_MASK; + mp_opt->data_fin = (flags & MPTCP_DSS_DATA_FIN) != 0; + mp_opt->dsn64 = (flags & MPTCP_DSS_DSN64) != 0; + mp_opt->use_map = (flags & MPTCP_DSS_HAS_MAP) != 0; + mp_opt->ack64 = (flags & MPTCP_DSS_ACK64) != 0; + mp_opt->use_ack = (flags & MPTCP_DSS_HAS_ACK); + + pr_debug("data_fin=%d dsn64=%d use_map=%d ack64=%d use_ack=%d", + mp_opt->data_fin, mp_opt->dsn64, + mp_opt->use_map, mp_opt->ack64, + mp_opt->use_ack); + + expected_opsize = TCPOLEN_MPTCP_DSS_BASE; + + if (mp_opt->use_ack) { + if (mp_opt->ack64) + expected_opsize += 
TCPOLEN_MPTCP_DSS_ACK64; + else + expected_opsize += TCPOLEN_MPTCP_DSS_ACK32; + } + + if (mp_opt->use_map) { + if (mp_opt->dsn64) + expected_opsize += TCPOLEN_MPTCP_DSS_MAP64; + else + expected_opsize += TCPOLEN_MPTCP_DSS_MAP32; + } + + /* RFC 6824, Section 3.3: + * If a checksum is present, but its use had + * not been negotiated in the MP_CAPABLE handshake, + * the checksum field MUST be ignored. + */ + if (opsize != expected_opsize && + opsize != expected_opsize + TCPOLEN_MPTCP_DSS_CHECKSUM) + break; + mp_opt->dss = 1; + + if (mp_opt->use_ack) { + if (mp_opt->ack64) { + mp_opt->data_ack = get_unaligned_be64(ptr); + ptr += 8; + } else { + mp_opt->data_ack = get_unaligned_be32(ptr); + ptr += 4; + } + + pr_debug("data_ack=%llu", mp_opt->data_ack); + } + + if (mp_opt->use_map) { + if (mp_opt->dsn64) { + mp_opt->data_seq = get_unaligned_be64(ptr); + ptr += 8; + } else { + mp_opt->data_seq = get_unaligned_be32(ptr); + ptr += 4; + } + + mp_opt->subflow_seq = get_unaligned_be32(ptr); + ptr += 4; + + mp_opt->data_len = get_unaligned_be16(ptr); + ptr += 2; + + pr_debug("data_seq=%llu subflow_seq=%u data_len=%u", + mp_opt->data_seq, mp_opt->subflow_seq, + mp_opt->data_len); + } + break; default: @@ -275,6 +348,40 @@ bool mptcp_synack_options(const struct request_sock *req, unsigned int *size, return false; } +void mptcp_incoming_options(struct sock *sk, struct sk_buff *skb, + struct tcp_options_received *opt_rx) +{ + struct mptcp_options_received *mp_opt; + struct mptcp_ext *mpext; + + mp_opt = &opt_rx->mptcp; + + if (!mp_opt->dss) + return; + + mpext = skb_ext_add(skb, SKB_EXT_MPTCP); + if (!mpext) + return; + + memset(mpext, 0, sizeof(*mpext)); + + if (mp_opt->use_map) { + mpext->data_seq = mp_opt->data_seq; + mpext->subflow_seq = mp_opt->subflow_seq; + mpext->data_len = mp_opt->data_len; + mpext->use_map = 1; + mpext->dsn64 = mp_opt->dsn64; + } + + if (mp_opt->use_ack) { + mpext->data_ack = mp_opt->data_ack; + mpext->use_ack = 1; + mpext->ack64 = mp_opt->ack64; + } + + mpext->data_fin = mp_opt->data_fin; +} + void mptcp_write_options(__be32 *ptr, struct mptcp_out_options *opts) { if ((OPTION_MPTCP_MPC_SYN | diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index 8cf49193b1c0..71250149180b 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -224,6 +224,33 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) return ret; } +int mptcp_read_actor(read_descriptor_t *desc, struct sk_buff *skb, + unsigned int offset, size_t len) +{ + struct mptcp_read_arg *arg = desc->arg.data; + size_t copy_len; + + copy_len = min(desc->count, len); + + if (likely(arg->msg)) { + int err; + + err = skb_copy_datagram_msg(skb, offset, arg->msg, copy_len); + if (err) { + pr_debug("error path"); + desc->error = err; + return err; + } + } else { + pr_debug("Flushing skb payload"); + } + + desc->count -= copy_len; + + pr_debug("consumed %zu bytes, %zu left", copy_len, desc->count); + return copy_len; +} + static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock, int flags, int *addr_len) { @@ -414,7 +441,10 @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err, msk->write_seq = subflow->idsn + 1; ack_seq++; msk->ack_seq = ack_seq; + subflow->map_seq = ack_seq; + subflow->map_subflow_seq = 1; subflow->rel_write_seq = 1; + subflow->tcp_sock = ssk; newsk = new_mptcp_sock; mptcp_copy_inaddrs(newsk, ssk); list_add(&subflow->node, &msk->conn_list); @@ -519,8 +549,12 @@ void mptcp_finish_connect(struct sock *ssk) sk = subflow->conn; msk = 
mptcp_sk(sk); + pr_debug("msk=%p, token=%u", sk, subflow->token); + mptcp_crypto_key_sha(subflow->remote_key, NULL, &ack_seq); ack_seq++; + subflow->map_seq = ack_seq; + subflow->map_subflow_seq = 1; subflow->rel_write_seq = 1; /* the socket is not connected yet, no msk/subflow ops can access/race diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h index 384ec4804198..c6d8217e24d4 100644 --- a/net/mptcp/protocol.h +++ b/net/mptcp/protocol.h @@ -33,7 +33,9 @@ #define TCPOLEN_MPTCP_MPC_SYNACK 12 #define TCPOLEN_MPTCP_MPC_ACK 20 #define TCPOLEN_MPTCP_DSS_BASE 4 +#define TCPOLEN_MPTCP_DSS_ACK32 4 #define TCPOLEN_MPTCP_DSS_ACK64 8 +#define TCPOLEN_MPTCP_DSS_MAP32 10 #define TCPOLEN_MPTCP_DSS_MAP64 14 #define TCPOLEN_MPTCP_DSS_CHECKSUM 2 @@ -50,6 +52,10 @@ #define MPTCP_DSS_HAS_MAP BIT(2) #define MPTCP_DSS_ACK64 BIT(1) #define MPTCP_DSS_HAS_ACK BIT(0) +#define MPTCP_DSS_FLAG_MASK (0x1F) + +/* MPTCP socket flags */ +#define MPTCP_DATA_READY BIT(0) /* MPTCP connection sock */ struct mptcp_sock { @@ -60,6 +66,7 @@ struct mptcp_sock { u64 write_seq; u64 ack_seq; u32 token; + unsigned long flags; struct list_head conn_list; struct skb_ext *cached_ext; /* for the next sendmsg */ struct socket *subflow; /* outgoing connect/listener/!mp_capable */ @@ -82,6 +89,7 @@ struct mptcp_subflow_request_sock { u64 remote_key; u64 idsn; u32 token; + u32 ssn_offset; }; static inline struct mptcp_subflow_request_sock * @@ -96,15 +104,27 @@ struct mptcp_subflow_context { u64 local_key; u64 remote_key; u64 idsn; + u64 map_seq; u32 token; u32 rel_write_seq; + u32 map_subflow_seq; + u32 ssn_offset; + u32 map_data_len; u32 request_mptcp : 1, /* send MP_CAPABLE */ mp_capable : 1, /* remote is MPTCP capable */ fourth_ack : 1, /* send initial DSS */ - conn_finished : 1; + conn_finished : 1, + map_valid : 1, + data_avail : 1, + rx_eof : 1; + struct sock *tcp_sock; /* tcp sk backpointer */ struct sock *conn; /* parent mptcp_sock */ const struct inet_connection_sock_af_ops *icsk_af_ops; + void (*tcp_data_ready)(struct sock *sk); + void (*tcp_state_change)(struct sock *sk); + void (*tcp_write_space)(struct sock *sk); + struct rcu_head rcu; }; @@ -123,14 +143,49 @@ mptcp_subflow_tcp_sock(const struct mptcp_subflow_context *subflow) return subflow->tcp_sock; } +static inline u64 +mptcp_subflow_get_map_offset(const struct mptcp_subflow_context *subflow) +{ + return tcp_sk(mptcp_subflow_tcp_sock(subflow))->copied_seq - + subflow->ssn_offset - + subflow->map_subflow_seq; +} + +static inline u64 +mptcp_subflow_get_mapped_dsn(const struct mptcp_subflow_context *subflow) +{ + return subflow->map_seq + mptcp_subflow_get_map_offset(subflow); +} + +int mptcp_is_enabled(struct net *net); +bool mptcp_subflow_data_available(struct sock *sk); void mptcp_subflow_init(void); int mptcp_subflow_create_socket(struct sock *sk, struct socket **new_sock); +static inline void mptcp_subflow_tcp_fallback(struct sock *sk, + struct mptcp_subflow_context *ctx) +{ + sk->sk_data_ready = ctx->tcp_data_ready; + sk->sk_state_change = ctx->tcp_state_change; + sk->sk_write_space = ctx->tcp_write_space; + + inet_csk(sk)->icsk_af_ops = ctx->icsk_af_ops; +} + extern const struct inet_connection_sock_af_ops ipv4_specific; #if IS_ENABLED(CONFIG_MPTCP_IPV6) extern const struct inet_connection_sock_af_ops ipv6_specific; #endif +void mptcp_proto_init(void); + +struct mptcp_read_arg { + struct msghdr *msg; +}; + +int mptcp_read_actor(read_descriptor_t *desc, struct sk_buff *skb, + unsigned int offset, size_t len); + void mptcp_get_options(const struct sk_buff *skb, 
struct tcp_options_received *opt_rx); @@ -164,4 +219,11 @@ static inline struct mptcp_ext *mptcp_get_ext(struct sk_buff *skb) return (struct mptcp_ext *)skb_ext_find(skb, SKB_EXT_MPTCP); } +static inline bool before64(__u64 seq1, __u64 seq2) +{ + return (__s64)(seq1 - seq2) < 0; +} + +#define after64(seq2, seq1) before64(seq1, seq2) + #endif /* __MPTCP_PROTOCOL_H */ diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c index 89b91bc7a831..528351e26371 100644 --- a/net/mptcp/subflow.c +++ b/net/mptcp/subflow.c @@ -78,6 +78,7 @@ static void subflow_init_req(struct request_sock *req, subflow_req->mp_capable = 1; subflow_req->remote_key = rx_opt.mptcp.sndr_key; + subflow_req->ssn_offset = TCP_SKB_CB(skb)->seq; } } @@ -116,6 +117,11 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb) subflow->remote_key); mptcp_finish_connect(sk); subflow->conn_finished = 1; + + if (skb) { + pr_debug("synack seq=%u", TCP_SKB_CB(skb)->seq); + subflow->ssn_offset = TCP_SKB_CB(skb)->seq; + } } } @@ -210,6 +216,323 @@ close_child: static struct inet_connection_sock_af_ops subflow_specific; +enum mapping_status { + MAPPING_OK, + MAPPING_INVALID, + MAPPING_EMPTY, + MAPPING_DATA_FIN +}; + +static u64 expand_seq(u64 old_seq, u16 old_data_len, u64 seq) +{ + if ((u32)seq == (u32)old_seq) + return old_seq; + + /* Assume map covers data not mapped yet. */ + return seq | ((old_seq + old_data_len + 1) & GENMASK_ULL(63, 32)); +} + +static void warn_bad_map(struct mptcp_subflow_context *subflow, u32 ssn) +{ + WARN_ONCE(1, "Bad mapping: ssn=%d map_seq=%d map_data_len=%d", + ssn, subflow->map_subflow_seq, subflow->map_data_len); +} + +static bool skb_is_fully_mapped(struct sock *ssk, struct sk_buff *skb) +{ + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk); + unsigned int skb_consumed; + + skb_consumed = tcp_sk(ssk)->copied_seq - TCP_SKB_CB(skb)->seq; + if (WARN_ON_ONCE(skb_consumed >= skb->len)) + return true; + + return skb->len - skb_consumed <= subflow->map_data_len - + mptcp_subflow_get_map_offset(subflow); +} + +static bool validate_mapping(struct sock *ssk, struct sk_buff *skb) +{ + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk); + u32 ssn = tcp_sk(ssk)->copied_seq - subflow->ssn_offset; + + if (unlikely(before(ssn, subflow->map_subflow_seq))) { + /* Mapping covers data later in the subflow stream, + * currently unsupported. 
+ */ + warn_bad_map(subflow, ssn); + return false; + } + if (unlikely(!before(ssn, subflow->map_subflow_seq + + subflow->map_data_len))) { + /* Mapping covers past subflow data, invalid */ + warn_bad_map(subflow, ssn + skb->len); + return false; + } + return true; +} + +static enum mapping_status get_mapping_status(struct sock *ssk) +{ + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk); + struct mptcp_ext *mpext; + struct sk_buff *skb; + u16 data_len; + u64 map_seq; + + skb = skb_peek(&ssk->sk_receive_queue); + if (!skb) + return MAPPING_EMPTY; + + mpext = mptcp_get_ext(skb); + if (!mpext || !mpext->use_map) { + if (!subflow->map_valid && !skb->len) { + /* the TCP stack delivers 0-len FIN pkts to the receive + * queue; those are the only 0-len pkts ever expected here, + * and we can admit no mapping only for 0-len pkts + */ + if (!(TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)) + WARN_ONCE(1, "0len seq %d:%d flags %x", + TCP_SKB_CB(skb)->seq, + TCP_SKB_CB(skb)->end_seq, + TCP_SKB_CB(skb)->tcp_flags); + sk_eat_skb(ssk, skb); + return MAPPING_EMPTY; + } + + if (!subflow->map_valid) + return MAPPING_INVALID; + + goto validate_seq; + } + + pr_debug("seq=%llu is64=%d ssn=%u data_len=%u data_fin=%d", + mpext->data_seq, mpext->dsn64, mpext->subflow_seq, + mpext->data_len, mpext->data_fin); + + data_len = mpext->data_len; + if (data_len == 0) { + pr_err("Infinite mapping not handled"); + return MAPPING_INVALID; + } + + if (mpext->data_fin == 1) { + if (data_len == 1) { + pr_debug("DATA_FIN with no payload"); + if (subflow->map_valid) { + /* A DATA_FIN might arrive in a DSS + * option before the previous mapping + * has been fully consumed. Continue + * handling the existing mapping. + */ + skb_ext_del(skb, SKB_EXT_MPTCP); + return MAPPING_OK; + } else { + return MAPPING_DATA_FIN; + } + } + + /* Adjust for DATA_FIN using 1 byte of sequence space */ + data_len--; + } + + if (!mpext->dsn64) { + map_seq = expand_seq(subflow->map_seq, subflow->map_data_len, + mpext->data_seq); + pr_debug("expanded seq=%llu", map_seq); + } else { + map_seq = mpext->data_seq; + } + + if (subflow->map_valid) { + /* Allow replacing only with an identical map */ + if (subflow->map_seq == map_seq && + subflow->map_subflow_seq == mpext->subflow_seq && + subflow->map_data_len == data_len) { + skb_ext_del(skb, SKB_EXT_MPTCP); + return MAPPING_OK; + } + + /* If this skb's data is fully covered by the current mapping, + * the new map would need caching, which is not supported + */ + if (skb_is_fully_mapped(ssk, skb)) + return MAPPING_INVALID; + + /* will validate the next map after consuming the current one */ + return MAPPING_OK; + } + + subflow->map_seq = map_seq; + subflow->map_subflow_seq = mpext->subflow_seq; + subflow->map_data_len = data_len; + subflow->map_valid = 1; + pr_debug("new map seq=%llu subflow_seq=%u data_len=%u", + subflow->map_seq, subflow->map_subflow_seq, + subflow->map_data_len); + +validate_seq: + /* we revalidate the valid mapping on each new skb, because we must + * ensure the current skb is completely covered by the available mapping + */ + if (!validate_mapping(ssk, skb)) + return MAPPING_INVALID; + + skb_ext_del(skb, SKB_EXT_MPTCP); + return MAPPING_OK; +} + +static bool subflow_check_data_avail(struct sock *ssk) +{ + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk); + enum mapping_status status; + struct mptcp_sock *msk; + struct sk_buff *skb; + + pr_debug("msk=%p ssk=%p data_avail=%d skb=%p", subflow->conn, ssk, + subflow->data_avail, skb_peek(&ssk->sk_receive_queue)); + if
(subflow->data_avail) + return true; + + if (!subflow->conn) + return false; + + msk = mptcp_sk(subflow->conn); + for (;;) { + u32 map_remaining; + size_t delta; + u64 ack_seq; + u64 old_ack; + + status = get_mapping_status(ssk); + pr_debug("msk=%p ssk=%p status=%d", msk, ssk, status); + if (status == MAPPING_INVALID) { + ssk->sk_err = EBADMSG; + goto fatal; + } + + if (status != MAPPING_OK) + return false; + + skb = skb_peek(&ssk->sk_receive_queue); + if (WARN_ON_ONCE(!skb)) + return false; + + old_ack = READ_ONCE(msk->ack_seq); + ack_seq = mptcp_subflow_get_mapped_dsn(subflow); + pr_debug("msk ack_seq=%llx subflow ack_seq=%llx", old_ack, + ack_seq); + if (ack_seq == old_ack) + break; + + /* only accept in-sequence mappings. Old values are spurious + * retransmissions; we can hit "future" values on active backup + * subflow switch, and we rely on retransmissions to get + * in-sequence data. + * Concurrent subflow support will require subflow data + * reordering. + */ + map_remaining = subflow->map_data_len - + mptcp_subflow_get_map_offset(subflow); + if (before64(ack_seq, old_ack)) + delta = min_t(size_t, old_ack - ack_seq, map_remaining); + else + delta = min_t(size_t, ack_seq - old_ack, map_remaining); + + /* discard mapped data */ + pr_debug("discarding %zu bytes, current map len=%u", delta, + map_remaining); + if (delta) { + struct mptcp_read_arg arg = { + .msg = NULL, + }; + read_descriptor_t desc = { + .count = delta, + .arg.data = &arg, + }; + int ret; + + ret = tcp_read_sock(ssk, &desc, mptcp_read_actor); + if (ret < 0) { + ssk->sk_err = -ret; + goto fatal; + } + if (ret < delta) + return false; + if (delta == map_remaining) + subflow->map_valid = 0; + } + } + return true; + +fatal: + /* fatal protocol error, close the socket */ + /* This barrier is coupled with smp_rmb() in tcp_poll() */ + smp_wmb(); + ssk->sk_error_report(ssk); + tcp_set_state(ssk, TCP_CLOSE); + tcp_send_active_reset(ssk, GFP_ATOMIC); + return false; +} + +bool mptcp_subflow_data_available(struct sock *sk) +{ + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); + struct sk_buff *skb; + + /* check if the current mapping is still valid */ + if (subflow->map_valid && + mptcp_subflow_get_map_offset(subflow) >= subflow->map_data_len) { + subflow->map_valid = 0; + subflow->data_avail = 0; + + pr_debug("Done with mapping: seq=%u data_len=%u", + subflow->map_subflow_seq, + subflow->map_data_len); + } + + if (!subflow_check_data_avail(sk)) { + subflow->data_avail = 0; + return false; + } + + skb = skb_peek(&sk->sk_receive_queue); + subflow->data_avail = skb && + before(tcp_sk(sk)->copied_seq, TCP_SKB_CB(skb)->end_seq); + return subflow->data_avail; +} + +static void subflow_data_ready(struct sock *sk) +{ + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); + struct sock *parent = subflow->conn; + + if (!parent || !subflow->mp_capable) { + subflow->tcp_data_ready(sk); + + if (parent) + parent->sk_data_ready(parent); + return; + } + + if (mptcp_subflow_data_available(sk)) { + set_bit(MPTCP_DATA_READY, &mptcp_sk(parent)->flags); + + parent->sk_data_ready(parent); + } +} + +static void subflow_write_space(struct sock *sk) +{ + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); + struct sock *parent = subflow->conn; + + sk_stream_write_space(sk); + if (parent && sk_stream_is_writeable(sk)) { + sk_stream_write_space(parent); + } +} + static struct inet_connection_sock_af_ops * subflow_default_af_ops(struct sock *sk) { @@ -296,6 +619,47 @@ static struct mptcp_subflow_context *subflow_create_ctx(struct
sock *sk, return ctx; } +static void __subflow_state_change(struct sock *sk) +{ + struct socket_wq *wq; + + rcu_read_lock(); + wq = rcu_dereference(sk->sk_wq); + if (skwq_has_sleeper(wq)) + wake_up_interruptible_all(&wq->wait); + rcu_read_unlock(); +} + +static bool subflow_is_done(const struct sock *sk) +{ + return sk->sk_shutdown & RCV_SHUTDOWN || sk->sk_state == TCP_CLOSE; +} + +static void subflow_state_change(struct sock *sk) +{ + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); + struct sock *parent = READ_ONCE(subflow->conn); + + __subflow_state_change(sk); + + /* as recvmsg() does not acquire the subflow socket for ssk selection, + * a fin packet carrying a DSS can go unnoticed if we don't trigger + * the data available machinery here. + */ + if (parent && subflow->mp_capable && mptcp_subflow_data_available(sk)) { + set_bit(MPTCP_DATA_READY, &mptcp_sk(parent)->flags); + + parent->sk_data_ready(parent); + } + + if (parent && !(parent->sk_shutdown & RCV_SHUTDOWN) && + !subflow->rx_eof && subflow_is_done(sk)) { + subflow->rx_eof = 1; + parent->sk_shutdown |= RCV_SHUTDOWN; + __subflow_state_change(parent); + } +} + static int subflow_ulp_init(struct sock *sk) { struct inet_connection_sock *icsk = inet_csk(sk); @@ -322,6 +686,12 @@ static int subflow_ulp_init(struct sock *sk) tp->is_mptcp = 1; ctx->icsk_af_ops = icsk->icsk_af_ops; icsk->icsk_af_ops = subflow_default_af_ops(sk); + ctx->tcp_data_ready = sk->sk_data_ready; + ctx->tcp_state_change = sk->sk_state_change; + ctx->tcp_write_space = sk->sk_write_space; + sk->sk_data_ready = subflow_data_ready; + sk->sk_write_space = subflow_write_space; + sk->sk_state_change = subflow_state_change; out: return err; } @@ -339,10 +709,12 @@ static void subflow_ulp_release(struct sock *sk) kfree_rcu(ctx, rcu); } -static void subflow_ulp_fallback(struct sock *sk) +static void subflow_ulp_fallback(struct sock *sk, + struct mptcp_subflow_context *old_ctx) { struct inet_connection_sock *icsk = inet_csk(sk); + mptcp_subflow_tcp_fallback(sk, old_ctx); icsk->icsk_ulp_ops = NULL; rcu_assign_pointer(icsk->icsk_ulp_data, NULL); tcp_sk(sk)->is_mptcp = 0; @@ -357,23 +729,28 @@ static void subflow_ulp_clone(const struct request_sock *req, struct mptcp_subflow_context *new_ctx; if (!subflow_req->mp_capable) { - subflow_ulp_fallback(newsk); + subflow_ulp_fallback(newsk, old_ctx); return; } new_ctx = subflow_create_ctx(newsk, priority); if (new_ctx == NULL) { - subflow_ulp_fallback(newsk); + subflow_ulp_fallback(newsk, old_ctx); return; } new_ctx->conn_finished = 1; new_ctx->icsk_af_ops = old_ctx->icsk_af_ops; + new_ctx->tcp_data_ready = old_ctx->tcp_data_ready; + new_ctx->tcp_state_change = old_ctx->tcp_state_change; + new_ctx->tcp_write_space = old_ctx->tcp_write_space; new_ctx->mp_capable = 1; new_ctx->fourth_ack = 1; new_ctx->remote_key = subflow_req->remote_key; new_ctx->local_key = subflow_req->local_key; new_ctx->token = subflow_req->token; + new_ctx->ssn_offset = subflow_req->ssn_offset; + new_ctx->idsn = subflow_req->idsn; } static struct tcp_ulp_ops subflow_ulp_ops __read_mostly = { -- cgit v1.2.3-71-gd317 From cc7972ea1932335e0a0ee00ac8a24b3e8304630d Mon Sep 17 00:00:00 2001 From: Christoph Paasch Date: Tue, 21 Jan 2020 16:56:31 -0800 Subject: mptcp: parse and emit MP_CAPABLE option according to v1 spec This implements MP_CAPABLE option parsing and writing according to RFC 6824 bis / RFC 8684: MPTCP v1. The local key is sent on the syn/ack, and both keys are sent on the 3rd ack. MP_CAPABLE message lengths are updated accordingly; a minimal sketch of the per-packet option lengths follows.
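As an aside for readers (our illustration, not part of the patch): the strict per-packet lengths implied by the v1 layout can be summarized with a small helper. The values match the TCPOLEN_MPTCP_MPC_* constants visible in the protocol.h hunk further down; the function name is hypothetical.

/* Hypothetical helper, illustration only: expected MP_CAPABLE option
 * length per packet type under MPTCP v1, mirroring the strict size
 * checking this patch adds to mptcp_parse_option().
 */
static int mpc_expected_opsize(bool syn, bool ack, bool carries_data)
{
	if (syn && !ack)
		return 4;	/* TCPOLEN_MPTCP_MPC_SYN: no key on the SYN */
	if (syn && ack)
		return 12;	/* TCPOLEN_MPTCP_MPC_SYNACK: sender's key only */
	if (carries_data)
		return 22;	/* TCPOLEN_MPTCP_MPC_ACK_DATA: both keys + data_len */
	return 20;		/* TCPOLEN_MPTCP_MPC_ACK: both keys on the 3rd ack */
}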
We need the skbuff to correctly emit the above, so we push the skbuff struct as an argument all the way from tcp code to the relevant mptcp callbacks. When processing incoming MP_CAPABLE + data, build a full blown DSS-like map info, to simplify later processing. On child socket creation, we need to record the remote key, if available. Signed-off-by: Christoph Paasch Signed-off-by: David S. Miller --- include/linux/tcp.h | 3 +- include/net/mptcp.h | 17 +++--- net/ipv4/tcp_input.c | 2 +- net/ipv4/tcp_output.c | 2 +- net/mptcp/options.c | 162 ++++++++++++++++++++++++++++++++++++++++---------- net/mptcp/protocol.h | 6 +- net/mptcp/subflow.c | 14 ++++- 7 files changed, 160 insertions(+), 46 deletions(-) (limited to 'include') diff --git a/include/linux/tcp.h b/include/linux/tcp.h index 0d00dad4b85d..4e2124607d32 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -94,7 +94,8 @@ struct mptcp_options_received { data_fin:1, use_ack:1, ack64:1, - __unused:3; + mpc_map:1, + __unused:2; }; #endif diff --git a/include/net/mptcp.h b/include/net/mptcp.h index 8619c1fca741..27627e2d1bc2 100644 --- a/include/net/mptcp.h +++ b/include/net/mptcp.h @@ -23,7 +23,8 @@ struct mptcp_ext { data_fin:1, use_ack:1, ack64:1, - __unused:3; + mpc_map:1, + __unused:2; /* one byte hole */ }; @@ -50,10 +51,10 @@ static inline bool rsk_is_mptcp(const struct request_sock *req) return tcp_rsk(req)->is_mptcp; } -void mptcp_parse_option(const unsigned char *ptr, int opsize, - struct tcp_options_received *opt_rx); -bool mptcp_syn_options(struct sock *sk, unsigned int *size, - struct mptcp_out_options *opts); +void mptcp_parse_option(const struct sk_buff *skb, const unsigned char *ptr, + int opsize, struct tcp_options_received *opt_rx); +bool mptcp_syn_options(struct sock *sk, const struct sk_buff *skb, + unsigned int *size, struct mptcp_out_options *opts); void mptcp_rcv_synsent(struct sock *sk); bool mptcp_synack_options(const struct request_sock *req, unsigned int *size, struct mptcp_out_options *opts); @@ -121,12 +122,14 @@ static inline bool rsk_is_mptcp(const struct request_sock *req) return false; } -static inline void mptcp_parse_option(const unsigned char *ptr, int opsize, +static inline void mptcp_parse_option(const struct sk_buff *skb, + const unsigned char *ptr, int opsize, struct tcp_options_received *opt_rx) { } -static inline bool mptcp_syn_options(struct sock *sk, unsigned int *size, +static inline bool mptcp_syn_options(struct sock *sk, const struct sk_buff *skb, + unsigned int *size, struct mptcp_out_options *opts) { return false; diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 28d31f2c1422..2f475b897c11 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -3926,7 +3926,7 @@ void tcp_parse_options(const struct net *net, break; #endif case TCPOPT_MPTCP: - mptcp_parse_option(ptr, opsize, opt_rx); + mptcp_parse_option(skb, ptr, opsize, opt_rx); break; case TCPOPT_FASTOPEN: diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 5456076166da..fec4b3a4b22d 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -685,7 +685,7 @@ static unsigned int tcp_syn_options(struct sock *sk, struct sk_buff *skb, if (sk_is_mptcp(sk)) { unsigned int size; - if (mptcp_syn_options(sk, &size, &opts->mptcp)) { + if (mptcp_syn_options(sk, skb, &size, &opts->mptcp)) { opts->options |= OPTION_MPTCP; remaining -= size; } diff --git a/net/mptcp/options.c b/net/mptcp/options.c index 1aec742ca8e1..8f82ff9a5a8e 100644 --- a/net/mptcp/options.c +++ b/net/mptcp/options.c @@ -14,8 +14,8 @@ 
static bool mptcp_cap_flag_sha256(u8 flags) return (flags & MPTCP_CAP_FLAG_MASK) == MPTCP_CAP_HMAC_SHA256; } -void mptcp_parse_option(const unsigned char *ptr, int opsize, - struct tcp_options_received *opt_rx) +void mptcp_parse_option(const struct sk_buff *skb, const unsigned char *ptr, + int opsize, struct tcp_options_received *opt_rx) { struct mptcp_options_received *mp_opt = &opt_rx->mptcp; u8 subtype = *ptr >> 4; @@ -25,13 +25,29 @@ void mptcp_parse_option(const unsigned char *ptr, int opsize, switch (subtype) { case MPTCPOPT_MP_CAPABLE: - if (opsize != TCPOLEN_MPTCP_MPC_SYN && - opsize != TCPOLEN_MPTCP_MPC_ACK) + /* strict size checking */ + if (!(TCP_SKB_CB(skb)->tcp_flags & TCPHDR_SYN)) { + if (skb->len > tcp_hdr(skb)->doff << 2) + expected_opsize = TCPOLEN_MPTCP_MPC_ACK_DATA; + else + expected_opsize = TCPOLEN_MPTCP_MPC_ACK; + } else { + if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_ACK) + expected_opsize = TCPOLEN_MPTCP_MPC_SYNACK; + else + expected_opsize = TCPOLEN_MPTCP_MPC_SYN; + } + if (opsize != expected_opsize) break; + /* try to be gentle vs future versions on the initial syn */ version = *ptr++ & MPTCP_VERSION_MASK; - if (version != MPTCP_SUPPORTED_VERSION) + if (opsize != TCPOLEN_MPTCP_MPC_SYN) { + if (version != MPTCP_SUPPORTED_VERSION) + break; + } else if (version < MPTCP_SUPPORTED_VERSION) { break; + } flags = *ptr++; if (!mptcp_cap_flag_sha256(flags) || @@ -55,23 +71,40 @@ void mptcp_parse_option(const unsigned char *ptr, int opsize, break; mp_opt->mp_capable = 1; - mp_opt->sndr_key = get_unaligned_be64(ptr); - ptr += 8; - - if (opsize == TCPOLEN_MPTCP_MPC_ACK) { + if (opsize >= TCPOLEN_MPTCP_MPC_SYNACK) { + mp_opt->sndr_key = get_unaligned_be64(ptr); + ptr += 8; + } + if (opsize >= TCPOLEN_MPTCP_MPC_ACK) { mp_opt->rcvr_key = get_unaligned_be64(ptr); ptr += 8; - pr_debug("MP_CAPABLE sndr=%llu, rcvr=%llu", - mp_opt->sndr_key, mp_opt->rcvr_key); - } else { - pr_debug("MP_CAPABLE sndr=%llu", mp_opt->sndr_key); } + if (opsize == TCPOLEN_MPTCP_MPC_ACK_DATA) { + /* Section 3.1.: + * "the data parameters in a MP_CAPABLE are semantically + * equivalent to those in a DSS option and can be used + * interchangeably." 
+ */ + mp_opt->dss = 1; + mp_opt->use_map = 1; + mp_opt->mpc_map = 1; + mp_opt->data_len = get_unaligned_be16(ptr); + ptr += 2; + } + pr_debug("MP_CAPABLE version=%x, flags=%x, optlen=%d sndr=%llu, rcvr=%llu len=%d", + version, flags, opsize, mp_opt->sndr_key, + mp_opt->rcvr_key, mp_opt->data_len); break; case MPTCPOPT_DSS: pr_debug("DSS"); ptr++; + /* we must clear 'mpc_map' to be able to detect MP_CAPABLE + * map vs DSS map in mptcp_incoming_options(), and reconstruct + * map info accordingly + */ + mp_opt->mpc_map = 0; flags = (*ptr++) & MPTCP_DSS_FLAG_MASK; mp_opt->data_fin = (flags & MPTCP_DSS_DATA_FIN) != 0; mp_opt->dsn64 = (flags & MPTCP_DSS_DSN64) != 0; @@ -176,18 +209,22 @@ void mptcp_get_options(const struct sk_buff *skb, if (opsize > length) return; /* don't parse partial options */ if (opcode == TCPOPT_MPTCP) - mptcp_parse_option(ptr, opsize, opt_rx); + mptcp_parse_option(skb, ptr, opsize, opt_rx); ptr += opsize - 2; length -= opsize; } } } -bool mptcp_syn_options(struct sock *sk, unsigned int *size, - struct mptcp_out_options *opts) +bool mptcp_syn_options(struct sock *sk, const struct sk_buff *skb, + unsigned int *size, struct mptcp_out_options *opts) { struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); + /* we will use snd_isn to detect first pkt [re]transmission + * in mptcp_established_options_mp() + */ + subflow->snd_isn = TCP_SKB_CB(skb)->end_seq; if (subflow->request_mptcp) { pr_debug("local_key=%llu", subflow->local_key); opts->suboptions = OPTION_MPTCP_MPC_SYN; @@ -212,20 +249,52 @@ void mptcp_rcv_synsent(struct sock *sk) } } -static bool mptcp_established_options_mp(struct sock *sk, unsigned int *size, +static bool mptcp_established_options_mp(struct sock *sk, struct sk_buff *skb, + unsigned int *size, unsigned int remaining, struct mptcp_out_options *opts) { struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); + struct mptcp_ext *mpext; + unsigned int data_len; + + pr_debug("subflow=%p fourth_ack=%d seq=%x:%x remaining=%d", subflow, + subflow->fourth_ack, subflow->snd_isn, + skb ? TCP_SKB_CB(skb)->seq : 0, remaining); + + if (subflow->mp_capable && !subflow->fourth_ack && skb && + subflow->snd_isn == TCP_SKB_CB(skb)->seq) { + /* When skb is not available, we'd better over-estimate the + * emitted options len. A full DSS option is longer than + * TCPOLEN_MPTCP_MPC_ACK_DATA, so let the caller try to fit + * that. + */ + mpext = mptcp_get_ext(skb); + data_len = mpext ? mpext->data_len : 0; - if (!subflow->fourth_ack) { + /* we will check ext_copy.data_len in mptcp_write_options() to + * discriminate between TCPOLEN_MPTCP_MPC_ACK_DATA and + * TCPOLEN_MPTCP_MPC_ACK + */ + opts->ext_copy.data_len = data_len; opts->suboptions = OPTION_MPTCP_MPC_ACK; opts->sndr_key = subflow->local_key; opts->rcvr_key = subflow->remote_key; + + /* Section 3.1. + * The MP_CAPABLE option is carried on the SYN, SYN/ACK, and ACK + * packets that start the first subflow of an MPTCP connection, + * as well as the first packet that carries data + */ + if (data_len > 0) + *size = ALIGN(TCPOLEN_MPTCP_MPC_ACK_DATA, 4); + else + *size = TCPOLEN_MPTCP_MPC_ACK; + + pr_debug("subflow=%p, local_key=%llu, remote_key=%llu map_len=%d", + subflow, subflow->local_key, subflow->remote_key, + data_len); + return true; } return false; @@ -319,7 +388,7 @@ bool mptcp_established_options(struct sock *sk, struct sk_buff *skb, unsigned int opt_size = 0; bool ret = false; - if (mptcp_established_options_mp(sk, &opt_size, remaining, opts)) + if (mptcp_established_options_mp(sk, skb, &opt_size, remaining, opts)) ret = true; else if (mptcp_established_options_dss(sk, skb, &opt_size, remaining, opts)) @@ -371,11 +440,26 @@ void mptcp_incoming_options(struct sock *sk, struct sk_buff *skb, memset(mpext, 0, sizeof(*mpext)); if (mp_opt->use_map) { - mpext->data_seq = mp_opt->data_seq; - mpext->subflow_seq = mp_opt->subflow_seq; + if (mp_opt->mpc_map) { + struct mptcp_subflow_context *subflow = + mptcp_subflow_ctx(sk); + + /* this is an MP_CAPABLE carrying MPTCP data; + * we know it maps the first chunk of data + */ + mptcp_crypto_key_sha(subflow->remote_key, NULL, + &mpext->data_seq); + mpext->data_seq++; + mpext->subflow_seq = 1; + mpext->dsn64 = 1; + mpext->mpc_map = 1; + } else { + mpext->data_seq = mp_opt->data_seq; + mpext->subflow_seq = mp_opt->subflow_seq; + mpext->dsn64 = mp_opt->dsn64; + } mpext->data_len = mp_opt->data_len; mpext->use_map = 1; - mpext->dsn64 = mp_opt->dsn64; } if (mp_opt->use_ack) { @@ -389,8 +473,7 @@ void mptcp_incoming_options(struct sock *sk, struct sk_buff *skb, void mptcp_write_options(__be32 *ptr, struct mptcp_out_options *opts) { - if ((OPTION_MPTCP_MPC_SYN | - OPTION_MPTCP_MPC_SYNACK | + if ((OPTION_MPTCP_MPC_SYN | OPTION_MPTCP_MPC_SYNACK | OPTION_MPTCP_MPC_ACK) & opts->suboptions) { u8 len; @@ -398,6 +481,8 @@ void mptcp_write_options(__be32 *ptr, struct mptcp_out_options *opts) len = TCPOLEN_MPTCP_MPC_SYN; else if (OPTION_MPTCP_MPC_SYNACK & opts->suboptions) len = TCPOLEN_MPTCP_MPC_SYNACK; + else if (opts->ext_copy.data_len) + len = TCPOLEN_MPTCP_MPC_ACK_DATA; else len = TCPOLEN_MPTCP_MPC_ACK; @@ -405,14 +490,27 @@ void mptcp_write_options(__be32 *ptr, struct mptcp_out_options *opts) (MPTCPOPT_MP_CAPABLE << 12) | (MPTCP_SUPPORTED_VERSION << 8) | MPTCP_CAP_HMAC_SHA256); + + if (!((OPTION_MPTCP_MPC_SYNACK | OPTION_MPTCP_MPC_ACK) & + opts->suboptions)) + goto mp_capable_done; + put_unaligned_be64(opts->sndr_key, ptr); ptr += 2; - if (OPTION_MPTCP_MPC_ACK & opts->suboptions) { - put_unaligned_be64(opts->rcvr_key, ptr); - ptr += 2; - } + if (!((OPTION_MPTCP_MPC_ACK) & opts->suboptions)) + goto mp_capable_done; + + put_unaligned_be64(opts->rcvr_key, ptr); + ptr += 2; + if (!opts->ext_copy.data_len) + goto mp_capable_done; + + put_unaligned_be32(opts->ext_copy.data_len << 16 | + TCPOPT_NOP << 8 | TCPOPT_NOP, ptr); + ptr += 1; } +mp_capable_done: if (opts->ext_copy.use_ack || opts->ext_copy.use_map) { struct mptcp_ext *mpext = &opts->ext_copy; u8 len = TCPOLEN_MPTCP_DSS_BASE; diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h index a355bb1cf31b..36b90024d34d 100644 --- a/net/mptcp/protocol.h +++ b/net/mptcp/protocol.h @@ -11,7 +11,7 @@ #include #include -#define MPTCP_SUPPORTED_VERSION 0 +#define MPTCP_SUPPORTED_VERSION 1 /* MPTCP option bits */ #define OPTION_MPTCP_MPC_SYN BIT(0) @@ -29,9 +29,10 @@ #define MPTCPOPT_MP_FASTCLOSE 7 /* MPTCP
suboption lengths */ -#define TCPOLEN_MPTCP_MPC_SYN 12 +#define TCPOLEN_MPTCP_MPC_SYN 4 #define TCPOLEN_MPTCP_MPC_SYNACK 12 #define TCPOLEN_MPTCP_MPC_ACK 20 +#define TCPOLEN_MPTCP_MPC_ACK_DATA 22 #define TCPOLEN_MPTCP_DSS_BASE 4 #define TCPOLEN_MPTCP_DSS_ACK32 4 #define TCPOLEN_MPTCP_DSS_ACK64 8 @@ -106,6 +107,7 @@ struct mptcp_subflow_context { u64 remote_key; u64 idsn; u64 map_seq; + u32 snd_isn; u32 token; u32 rel_write_seq; u32 map_subflow_seq; diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c index 9fb3eb87a20f..8892855f4f52 100644 --- a/net/mptcp/subflow.c +++ b/net/mptcp/subflow.c @@ -77,7 +77,6 @@ static void subflow_init_req(struct request_sock *req, if (err == 0) subflow_req->mp_capable = 1; - subflow_req->remote_key = rx_opt.mptcp.sndr_key; subflow_req->ssn_offset = TCP_SKB_CB(skb)->seq; } } @@ -180,11 +179,22 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk, bool *own_req) { struct mptcp_subflow_context *listener = mptcp_subflow_ctx(sk); + struct mptcp_subflow_request_sock *subflow_req; + struct tcp_options_received opt_rx; struct sock *child; pr_debug("listener=%p, req=%p, conn=%p", listener, req, listener->conn); - /* if the sk is MP_CAPABLE, we already received the client key */ + /* if the sk is MP_CAPABLE, we need to fetch the client key */ + subflow_req = mptcp_subflow_rsk(req); + if (subflow_req->mp_capable) { + opt_rx.mptcp.mp_capable = 0; + mptcp_get_options(skb, &opt_rx); + if (!opt_rx.mptcp.mp_capable) + subflow_req->mp_capable = 0; + else + subflow_req->remote_key = opt_rx.mptcp.sndr_key; + } child = listener->icsk_af_ops->syn_recv_sock(sk, skb, req, dst, req_unhash, own_req); -- cgit v1.2.3-71-gd317 From e42f1ac626e7f799717d006e0f8393b6d6f9fc8c Mon Sep 17 00:00:00 2001 From: Florian Westphal Date: Fri, 24 Jan 2020 16:04:02 -0800 Subject: mptcp: do not inherit inet proto ops We need to initialise the struct ourselves, else we expose tcp-specific callbacks such as tcp_splice_read which will then trigger a splat because the socket is an mptcp one: BUG: KASAN: slab-out-of-bounds in tcp_mstamp_refresh+0x80/0xa0 net/ipv4/tcp_output.c:57 Write of size 8 at addr ffff888116aa21d0 by task syz-executor.0/5478 CPU: 1 PID: 5478 Comm: syz-executor.0 Not tainted 5.5.0-rc6 #3 Call Trace: tcp_mstamp_refresh+0x80/0xa0 net/ipv4/tcp_output.c:57 tcp_rcv_space_adjust+0x72/0x7f0 net/ipv4/tcp_input.c:612 tcp_read_sock+0x622/0x990 net/ipv4/tcp.c:1674 tcp_splice_read+0x20b/0xb40 net/ipv4/tcp.c:791 do_splice+0x1259/0x1560 fs/splice.c:1205 To prevent a build error with ipv6, add the recv/sendmsg function declarations to ipv6.h. The functions are already accessible "thanks" to retpoline-related work, but they are currently only made visible by socket.c-specific INDIRECT_CALLABLE macros. Reported-by: Christoph Paasch Signed-off-by: Florian Westphal Signed-off-by: Mat Martineau Signed-off-by: David S.
Miller --- include/net/ipv6.h | 3 +++ net/mptcp/protocol.c | 70 ++++++++++++++++++++++++++++++++++++++-------------- 2 files changed, 54 insertions(+), 19 deletions(-) (limited to 'include') diff --git a/include/net/ipv6.h b/include/net/ipv6.h index 4e95f6df508c..cec1a54401f2 100644 --- a/include/net/ipv6.h +++ b/include/net/ipv6.h @@ -1113,6 +1113,9 @@ int inet6_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg); int inet6_hash_connect(struct inet_timewait_death_row *death_row, struct sock *sk); +int inet6_sendmsg(struct socket *sock, struct msghdr *msg, size_t size); +int inet6_recvmsg(struct socket *sock, struct msghdr *msg, size_t size, + int flags); /* * reassembly.c diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index ee916b3686fe..41d49126d29a 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -1163,7 +1163,31 @@ out_unlock: return ret; } -static struct proto_ops mptcp_stream_ops; +static const struct proto_ops mptcp_stream_ops = { + .family = PF_INET, + .owner = THIS_MODULE, + .release = inet_release, + .bind = mptcp_bind, + .connect = mptcp_stream_connect, + .socketpair = sock_no_socketpair, + .accept = mptcp_stream_accept, + .getname = mptcp_v4_getname, + .poll = mptcp_poll, + .ioctl = inet_ioctl, + .gettstamp = sock_gettstamp, + .listen = mptcp_listen, + .shutdown = mptcp_shutdown, + .setsockopt = sock_common_setsockopt, + .getsockopt = sock_common_getsockopt, + .sendmsg = inet_sendmsg, + .recvmsg = inet_recvmsg, + .mmap = sock_no_mmap, + .sendpage = inet_sendpage, +#ifdef CONFIG_COMPAT + .compat_setsockopt = compat_sock_common_setsockopt, + .compat_getsockopt = compat_sock_common_getsockopt, +#endif +}; static struct inet_protosw mptcp_protosw = { .type = SOCK_STREAM, @@ -1176,14 +1200,6 @@ static struct inet_protosw mptcp_protosw = { void mptcp_proto_init(void) { mptcp_prot.h.hashinfo = tcp_prot.h.hashinfo; - mptcp_stream_ops = inet_stream_ops; - mptcp_stream_ops.bind = mptcp_bind; - mptcp_stream_ops.connect = mptcp_stream_connect; - mptcp_stream_ops.poll = mptcp_poll; - mptcp_stream_ops.accept = mptcp_stream_accept; - mptcp_stream_ops.getname = mptcp_v4_getname; - mptcp_stream_ops.listen = mptcp_listen; - mptcp_stream_ops.shutdown = mptcp_shutdown; mptcp_subflow_init(); @@ -1194,7 +1210,32 @@ void mptcp_proto_init(void) } #if IS_ENABLED(CONFIG_MPTCP_IPV6) -static struct proto_ops mptcp_v6_stream_ops; +static const struct proto_ops mptcp_v6_stream_ops = { + .family = PF_INET6, + .owner = THIS_MODULE, + .release = inet6_release, + .bind = mptcp_bind, + .connect = mptcp_stream_connect, + .socketpair = sock_no_socketpair, + .accept = mptcp_stream_accept, + .getname = mptcp_v6_getname, + .poll = mptcp_poll, + .ioctl = inet6_ioctl, + .gettstamp = sock_gettstamp, + .listen = mptcp_listen, + .shutdown = mptcp_shutdown, + .setsockopt = sock_common_setsockopt, + .getsockopt = sock_common_getsockopt, + .sendmsg = inet6_sendmsg, + .recvmsg = inet6_recvmsg, + .mmap = sock_no_mmap, + .sendpage = inet_sendpage, +#ifdef CONFIG_COMPAT + .compat_setsockopt = compat_sock_common_setsockopt, + .compat_getsockopt = compat_sock_common_getsockopt, +#endif +}; + static struct proto mptcp_v6_prot; static void mptcp_v6_destroy(struct sock *sk) @@ -1226,15 +1267,6 @@ int mptcp_proto_v6_init(void) if (err) return err; - mptcp_v6_stream_ops = inet6_stream_ops; - mptcp_v6_stream_ops.bind = mptcp_bind; - mptcp_v6_stream_ops.connect = mptcp_stream_connect; - mptcp_v6_stream_ops.poll = mptcp_poll; - mptcp_v6_stream_ops.accept = mptcp_stream_accept; - 
mptcp_v6_stream_ops.getname = mptcp_v6_getname; - mptcp_v6_stream_ops.listen = mptcp_listen; - mptcp_v6_stream_ops.shutdown = mptcp_shutdown; - err = inet6_register_protosw(&mptcp_v6_protosw); if (err) proto_unregister(&mptcp_v6_prot); -- cgit v1.2.3-71-gd317 From ef6aadcc76c97e25f62adc4e9d19684d3e5d0b87 Mon Sep 17 00:00:00 2001 From: Petr Machata Date: Fri, 24 Jan 2020 15:23:06 +0200 Subject: net: sched: Make TBF Qdisc offloadable Invoke ndo_setup_tc as appropriate to signal init / replacement, destroying and dumping of TBF Qdisc. Signed-off-by: Petr Machata Acked-by: Jiri Pirko Signed-off-by: Ido Schimmel Signed-off-by: David S. Miller --- include/linux/netdevice.h | 1 + include/net/pkt_cls.h | 22 +++++++++++++++++++ net/sched/sch_tbf.c | 55 +++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 78 insertions(+) (limited to 'include') diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 5ec3537fbdb1..11bdf6cb30bd 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -850,6 +850,7 @@ enum tc_setup_type { TC_SETUP_QDISC_TAPRIO, TC_SETUP_FT, TC_SETUP_QDISC_ETS, + TC_SETUP_QDISC_TBF, }; /* These structures hold the attributes of bpf state that are being passed diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h index 47b115e2012a..ce036492986a 100644 --- a/include/net/pkt_cls.h +++ b/include/net/pkt_cls.h @@ -854,4 +854,26 @@ struct tc_ets_qopt_offload { }; }; +enum tc_tbf_command { + TC_TBF_REPLACE, + TC_TBF_DESTROY, + TC_TBF_STATS, +}; + +struct tc_tbf_qopt_offload_replace_params { + struct psched_ratecfg rate; + u32 max_size; + struct gnet_stats_queue *qstats; +}; + +struct tc_tbf_qopt_offload { + enum tc_tbf_command command; + u32 handle; + u32 parent; + union { + struct tc_tbf_qopt_offload_replace_params replace_params; + struct tc_qopt_offload_stats stats; + }; +}; + #endif diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c index 7ae317958090..78e79029dc63 100644 --- a/net/sched/sch_tbf.c +++ b/net/sched/sch_tbf.c @@ -15,6 +15,7 @@ #include #include #include +#include #include @@ -137,6 +138,52 @@ static u64 psched_ns_t2l(const struct psched_ratecfg *r, return len; } +static void tbf_offload_change(struct Qdisc *sch) +{ + struct tbf_sched_data *q = qdisc_priv(sch); + struct net_device *dev = qdisc_dev(sch); + struct tc_tbf_qopt_offload qopt; + + if (!tc_can_offload(dev) || !dev->netdev_ops->ndo_setup_tc) + return; + + qopt.command = TC_TBF_REPLACE; + qopt.handle = sch->handle; + qopt.parent = sch->parent; + qopt.replace_params.rate = q->rate; + qopt.replace_params.max_size = q->max_size; + qopt.replace_params.qstats = &sch->qstats; + + dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_TBF, &qopt); +} + +static void tbf_offload_destroy(struct Qdisc *sch) +{ + struct net_device *dev = qdisc_dev(sch); + struct tc_tbf_qopt_offload qopt; + + if (!tc_can_offload(dev) || !dev->netdev_ops->ndo_setup_tc) + return; + + qopt.command = TC_TBF_DESTROY; + qopt.handle = sch->handle; + qopt.parent = sch->parent; + dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_TBF, &qopt); +} + +static int tbf_offload_dump(struct Qdisc *sch) +{ + struct tc_tbf_qopt_offload qopt; + + qopt.command = TC_TBF_STATS; + qopt.handle = sch->handle; + qopt.parent = sch->parent; + qopt.stats.bstats = &sch->bstats; + qopt.stats.qstats = &sch->qstats; + + return qdisc_offload_dump_helper(sch, TC_SETUP_QDISC_TBF, &qopt); +} + /* GSO packet is too big, segment it so that tbf can transmit * each segment in time */ @@ -407,6 +454,8 @@ static int tbf_change(struct Qdisc 
*sch, struct nlattr *opt, sch_tree_unlock(sch); err = 0; + + tbf_offload_change(sch); done: return err; } @@ -432,6 +481,7 @@ static void tbf_destroy(struct Qdisc *sch) struct tbf_sched_data *q = qdisc_priv(sch); qdisc_watchdog_cancel(&q->watchdog); + tbf_offload_destroy(sch); qdisc_put(q->qdisc); } @@ -440,6 +490,11 @@ static int tbf_dump(struct Qdisc *sch, struct sk_buff *skb) struct tbf_sched_data *q = qdisc_priv(sch); struct nlattr *nest; struct tc_tbf_qopt opt; + int err; + + err = tbf_offload_dump(sch); + if (err) + return err; nest = nla_nest_start_noflag(skb, TCA_OPTIONS); if (nest == NULL) -- cgit v1.2.3-71-gd317 From e9b4e606c2289d6610113253922bb8c9ac7f68b0 Mon Sep 17 00:00:00 2001 From: Jiri Olsa Date: Thu, 23 Jan 2020 17:15:07 +0100 Subject: bpf: Allow to resolve bpf trampoline and dispatcher in unwind When unwinding the stack we need to identify each address to successfully continue. Adding latch tree to keep trampolines for quick lookup during the unwind. The patch uses first 48 bytes for latch tree node, leaving 4048 bytes from the rest of the page for trampoline or dispatcher generated code. It's still enough not to affect trampoline and dispatcher progs maximum counts. Signed-off-by: Jiri Olsa Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/20200123161508.915203-3-jolsa@kernel.org --- include/linux/bpf.h | 12 +++++++- kernel/bpf/dispatcher.c | 4 +-- kernel/bpf/trampoline.c | 80 ++++++++++++++++++++++++++++++++++++++++++++----- kernel/extable.c | 7 +++-- 4 files changed, 90 insertions(+), 13 deletions(-) (limited to 'include') diff --git a/include/linux/bpf.h b/include/linux/bpf.h index a9687861fd7e..8e9ad3943cd9 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -525,7 +525,6 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key); int bpf_trampoline_link_prog(struct bpf_prog *prog); int bpf_trampoline_unlink_prog(struct bpf_prog *prog); void bpf_trampoline_put(struct bpf_trampoline *tr); -void *bpf_jit_alloc_exec_page(void); #define BPF_DISPATCHER_INIT(name) { \ .mutex = __MUTEX_INITIALIZER(name.mutex), \ .func = &name##func, \ @@ -557,6 +556,13 @@ void *bpf_jit_alloc_exec_page(void); #define BPF_DISPATCHER_PTR(name) (&name) void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from, struct bpf_prog *to); +struct bpf_image { + struct latch_tree_node tnode; + unsigned char data[]; +}; +#define BPF_IMAGE_SIZE (PAGE_SIZE - sizeof(struct bpf_image)) +bool is_bpf_image_address(unsigned long address); +void *bpf_image_alloc(void); #else static inline struct bpf_trampoline *bpf_trampoline_lookup(u64 key) { @@ -578,6 +584,10 @@ static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {} static inline void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from, struct bpf_prog *to) {} +static inline bool is_bpf_image_address(unsigned long address) +{ + return false; +} #endif struct bpf_func_info_aux { diff --git a/kernel/bpf/dispatcher.c b/kernel/bpf/dispatcher.c index 204ee61a3904..b3e5b214fed8 100644 --- a/kernel/bpf/dispatcher.c +++ b/kernel/bpf/dispatcher.c @@ -113,7 +113,7 @@ static void bpf_dispatcher_update(struct bpf_dispatcher *d, int prev_num_progs) noff = 0; } else { old = d->image + d->image_off; - noff = d->image_off ^ (PAGE_SIZE / 2); + noff = d->image_off ^ (BPF_IMAGE_SIZE / 2); } new = d->num_progs ? 
d->image + noff : NULL; @@ -140,7 +140,7 @@ void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from, mutex_lock(&d->mutex); if (!d->image) { - d->image = bpf_jit_alloc_exec_page(); + d->image = bpf_image_alloc(); if (!d->image) goto out; } diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index eb64c245052b..6b264a92064b 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -4,6 +4,7 @@ #include #include #include +#include /* dummy _ops. The verifier will operate on target program's ops. */ const struct bpf_verifier_ops bpf_extension_verifier_ops = { @@ -16,11 +17,12 @@ const struct bpf_prog_ops bpf_extension_prog_ops = { #define TRAMPOLINE_TABLE_SIZE (1 << TRAMPOLINE_HASH_BITS) static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE]; +static struct latch_tree_root image_tree __cacheline_aligned; -/* serializes access to trampoline_table */ +/* serializes access to trampoline_table and image_tree */ static DEFINE_MUTEX(trampoline_mutex); -void *bpf_jit_alloc_exec_page(void) +static void *bpf_jit_alloc_exec_page(void) { void *image; @@ -36,6 +38,64 @@ void *bpf_jit_alloc_exec_page(void) return image; } +static __always_inline bool image_tree_less(struct latch_tree_node *a, + struct latch_tree_node *b) +{ + struct bpf_image *ia = container_of(a, struct bpf_image, tnode); + struct bpf_image *ib = container_of(b, struct bpf_image, tnode); + + return ia < ib; +} + +static __always_inline int image_tree_comp(void *addr, struct latch_tree_node *n) +{ + void *image = container_of(n, struct bpf_image, tnode); + + if (addr < image) + return -1; + if (addr >= image + PAGE_SIZE) + return 1; + + return 0; +} + +static const struct latch_tree_ops image_tree_ops = { + .less = image_tree_less, + .comp = image_tree_comp, +}; + +static void *__bpf_image_alloc(bool lock) +{ + struct bpf_image *image; + + image = bpf_jit_alloc_exec_page(); + if (!image) + return NULL; + + if (lock) + mutex_lock(&trampoline_mutex); + latch_tree_insert(&image->tnode, &image_tree, &image_tree_ops); + if (lock) + mutex_unlock(&trampoline_mutex); + return image->data; +} + +void *bpf_image_alloc(void) +{ + return __bpf_image_alloc(true); +} + +bool is_bpf_image_address(unsigned long addr) +{ + bool ret; + + rcu_read_lock(); + ret = latch_tree_find((void *) addr, &image_tree, &image_tree_ops) != NULL; + rcu_read_unlock(); + + return ret; +} + struct bpf_trampoline *bpf_trampoline_lookup(u64 key) { struct bpf_trampoline *tr; @@ -56,7 +116,7 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key) goto out; /* is_root was checked earlier. No need for bpf_jit_charge_modmem() */ - image = bpf_jit_alloc_exec_page(); + image = __bpf_image_alloc(false); if (!image) { kfree(tr); tr = NULL; @@ -131,14 +191,14 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr) } /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50 - * bytes on x86. Pick a number to fit into PAGE_SIZE / 2 + * bytes on x86. 
Pick a number to fit into BPF_IMAGE_SIZE / 2 */ #define BPF_MAX_TRAMP_PROGS 40 static int bpf_trampoline_update(struct bpf_trampoline *tr) { - void *old_image = tr->image + ((tr->selector + 1) & 1) * PAGE_SIZE/2; - void *new_image = tr->image + (tr->selector & 1) * PAGE_SIZE/2; + void *old_image = tr->image + ((tr->selector + 1) & 1) * BPF_IMAGE_SIZE/2; + void *new_image = tr->image + (tr->selector & 1) * BPF_IMAGE_SIZE/2; struct bpf_prog *progs_to_run[BPF_MAX_TRAMP_PROGS]; int fentry_cnt = tr->progs_cnt[BPF_TRAMP_FENTRY]; int fexit_cnt = tr->progs_cnt[BPF_TRAMP_FEXIT]; @@ -174,7 +234,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr) */ synchronize_rcu_tasks(); - err = arch_prepare_bpf_trampoline(new_image, new_image + PAGE_SIZE / 2, + err = arch_prepare_bpf_trampoline(new_image, new_image + BPF_IMAGE_SIZE / 2, &tr->func.model, flags, fentry, fentry_cnt, fexit, fexit_cnt, @@ -284,6 +344,8 @@ out: void bpf_trampoline_put(struct bpf_trampoline *tr) { + struct bpf_image *image; + if (!tr) return; mutex_lock(&trampoline_mutex); @@ -294,9 +356,11 @@ void bpf_trampoline_put(struct bpf_trampoline *tr) goto out; if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT]))) goto out; + image = container_of(tr->image, struct bpf_image, data); + latch_tree_erase(&image->tnode, &image_tree, &image_tree_ops); /* wait for tasks to get out of trampoline before freeing it */ synchronize_rcu_tasks(); - bpf_jit_free_exec(tr->image); + bpf_jit_free_exec(image); hlist_del(&tr->hlist); kfree(tr); out: diff --git a/kernel/extable.c b/kernel/extable.c index f6920a11e28a..a0024f27d3a1 100644 --- a/kernel/extable.c +++ b/kernel/extable.c @@ -131,8 +131,9 @@ int kernel_text_address(unsigned long addr) * triggers a stack trace, or a WARN() that happens during * coming back from idle, or cpu on or offlining. * - * is_module_text_address() as well as the kprobe slots - * and is_bpf_text_address() require RCU to be watching. + * is_module_text_address() as well as the kprobe slots, + * is_bpf_text_address() and is_bpf_image_address() require + * RCU to be watching. */ no_rcu = !rcu_is_watching(); @@ -148,6 +149,8 @@ int kernel_text_address(unsigned long addr) goto out; if (is_bpf_text_address(addr)) goto out; + if (is_bpf_image_address(addr)) + goto out; ret = 0; out: if (no_rcu) -- cgit v1.2.3-71-gd317 From 32efcc06d2a15fa87585614d12d6c2308cc2d3f3 Mon Sep 17 00:00:00 2001 From: Abdul Kabbani Date: Fri, 24 Jan 2020 16:34:02 -0500 Subject: tcp: export count for rehash attempts Using the IPv6 flow label to swiftly route around congested or disconnected network paths can greatly improve TCP reliability. This patch adds SNMP counters and an OPT_STATS counter to track both host-level and connection-level statistics. Network administrators can use these counters to better evaluate the impact of this new capability. Export the rehash attempt count to 1) two SNMP counters: TcpTimeoutRehash (rehash due to timeouts) and TcpDuplicateDataRehash (rehash due to receiving duplicate packets), and 2) the timestamping API SOF_TIMESTAMPING_OPT_STATS. Signed-off-by: Abdul Kabbani Signed-off-by: Neal Cardwell Signed-off-by: Yuchung Cheng Signed-off-by: Kevin(Yudong) Yang Signed-off-by: Eric Dumazet Signed-off-by: David S.
Miller --- include/linux/tcp.h | 2 ++ include/uapi/linux/snmp.h | 2 ++ include/uapi/linux/tcp.h | 1 + net/ipv4/proc.c | 2 ++ net/ipv4/tcp.c | 2 ++ net/ipv4/tcp_input.c | 4 +++- net/ipv4/tcp_timer.c | 6 ++++++ 7 files changed, 18 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/include/linux/tcp.h b/include/linux/tcp.h index 4e2124607d32..1cf73e6f85ca 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -386,6 +386,8 @@ struct tcp_sock { #define BPF_SOCK_OPS_TEST_FLAG(TP, ARG) 0 #endif + u16 timeout_rehash; /* Timeout-triggered rehash attempts */ + u32 rcv_ooopack; /* Received out-of-order packets, for tcpinfo */ /* Receiver side RTT estimation */ diff --git a/include/uapi/linux/snmp.h b/include/uapi/linux/snmp.h index 7eee233e78d2..7d91f4debc48 100644 --- a/include/uapi/linux/snmp.h +++ b/include/uapi/linux/snmp.h @@ -285,6 +285,8 @@ enum LINUX_MIB_TCPRCVQDROP, /* TCPRcvQDrop */ LINUX_MIB_TCPWQUEUETOOBIG, /* TCPWqueueTooBig */ LINUX_MIB_TCPFASTOPENPASSIVEALTKEY, /* TCPFastOpenPassiveAltKey */ + LINUX_MIB_TCPTIMEOUTREHASH, /* TCPTimeoutRehash */ + LINUX_MIB_TCPDUPLICATEDATAREHASH, /* TCPDuplicateDataRehash */ __LINUX_MIB_MAX }; diff --git a/include/uapi/linux/tcp.h b/include/uapi/linux/tcp.h index d87184e673ca..fd9eb8f6bcae 100644 --- a/include/uapi/linux/tcp.h +++ b/include/uapi/linux/tcp.h @@ -311,6 +311,7 @@ enum { TCP_NLA_DSACK_DUPS, /* DSACK blocks received */ TCP_NLA_REORD_SEEN, /* reordering events seen */ TCP_NLA_SRTT, /* smoothed RTT in usecs */ + TCP_NLA_TIMEOUT_REHASH, /* Timeout-triggered rehash attempts */ }; /* for TCP_MD5SIG socket option */ diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c index cc90243ccf76..2580303249e2 100644 --- a/net/ipv4/proc.c +++ b/net/ipv4/proc.c @@ -289,6 +289,8 @@ static const struct snmp_mib snmp4_net_list[] = { SNMP_MIB_ITEM("TCPRcvQDrop", LINUX_MIB_TCPRCVQDROP), SNMP_MIB_ITEM("TCPWqueueTooBig", LINUX_MIB_TCPWQUEUETOOBIG), SNMP_MIB_ITEM("TCPFastOpenPassiveAltKey", LINUX_MIB_TCPFASTOPENPASSIVEALTKEY), + SNMP_MIB_ITEM("TcpTimeoutRehash", LINUX_MIB_TCPTIMEOUTREHASH), + SNMP_MIB_ITEM("TcpDuplicateDataRehash", LINUX_MIB_TCPDUPLICATEDATAREHASH), SNMP_MIB_SENTINEL }; diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 2857c853e2fd..484485ae74c2 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -3337,6 +3337,7 @@ static size_t tcp_opt_stats_get_size(void) nla_total_size(sizeof(u32)) + /* TCP_NLA_DSACK_DUPS */ nla_total_size(sizeof(u32)) + /* TCP_NLA_REORD_SEEN */ nla_total_size(sizeof(u32)) + /* TCP_NLA_SRTT */ + nla_total_size(sizeof(u16)) + /* TCP_NLA_TIMEOUT_REHASH */ 0; } @@ -3391,6 +3392,7 @@ struct sk_buff *tcp_get_timestamping_opt_stats(const struct sock *sk) nla_put_u32(stats, TCP_NLA_DSACK_DUPS, tp->dsack_dups); nla_put_u32(stats, TCP_NLA_REORD_SEEN, tp->reord_seen); nla_put_u32(stats, TCP_NLA_SRTT, tp->srtt_us >> 3); + nla_put_u16(stats, TCP_NLA_TIMEOUT_REHASH, tp->timeout_rehash); return stats; } diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 4915de65d278..e8b840a4767e 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -4271,8 +4271,10 @@ static void tcp_rcv_spurious_retrans(struct sock *sk, const struct sk_buff *skb) * The receiver remembers and reflects via DSACKs. Leverage the * DSACK state and change the txhash to re-route speculatively. 
*/ - if (TCP_SKB_CB(skb)->seq == tcp_sk(sk)->duplicate_sack[0].start_seq) + if (TCP_SKB_CB(skb)->seq == tcp_sk(sk)->duplicate_sack[0].start_seq) { sk_rethink_txhash(sk); + NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPDUPLICATEDATAREHASH); + } } static void tcp_send_dupack(struct sock *sk, const struct sk_buff *skb) diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c index 1097b438befe..c3f26dcd6704 100644 --- a/net/ipv4/tcp_timer.c +++ b/net/ipv4/tcp_timer.c @@ -223,6 +223,9 @@ static int tcp_write_timeout(struct sock *sk) dst_negative_advice(sk); } else { sk_rethink_txhash(sk); + tp->timeout_rehash++; + __NET_INC_STATS(sock_net(sk), + LINUX_MIB_TCPTIMEOUTREHASH); } retry_until = icsk->icsk_syn_retries ? : net->ipv4.sysctl_tcp_syn_retries; expired = icsk->icsk_retransmits >= retry_until; @@ -234,6 +237,9 @@ static int tcp_write_timeout(struct sock *sk) dst_negative_advice(sk); } else { sk_rethink_txhash(sk); + tp->timeout_rehash++; + __NET_INC_STATS(sock_net(sk), + LINUX_MIB_TCPTIMEOUTREHASH); } retry_until = net->ipv4.sysctl_tcp_retries2; -- cgit v1.2.3-71-gd317 From 7b225d0b5c6dda5fefab578175f210c6fc7e389a Mon Sep 17 00:00:00 2001 From: Pablo Neira Ayuso Date: Wed, 22 Jan 2020 00:17:52 +0100 Subject: netfilter: nf_tables: add NFTA_SET_ELEM_KEY_END attribute Add NFTA_SET_ELEM_KEY_END attribute to convey the closing element of the interval between kernel and userspace. This patch also adds the NFT_SET_EXT_KEY_END extension to store the closing element value in this interval. v4: No changes v3: New patch [sbrivio: refactor error paths and labels; add corresponding nft_set_ext_type for new key; rebase] Signed-off-by: Pablo Neira Ayuso --- include/net/netfilter/nf_tables.h | 14 +++++- include/uapi/linux/netfilter/nf_tables.h | 2 + net/netfilter/nf_tables_api.c | 85 +++++++++++++++++++++++--------- net/netfilter/nft_dynset.c | 2 +- 4 files changed, 79 insertions(+), 24 deletions(-) (limited to 'include') diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h index fe7c50acc681..504c0aa93805 100644 --- a/include/net/netfilter/nf_tables.h +++ b/include/net/netfilter/nf_tables.h @@ -231,6 +231,7 @@ struct nft_userdata { * struct nft_set_elem - generic representation of set elements * * @key: element key + * @key_end: closing element key * @priv: element private data and extensions */ struct nft_set_elem { @@ -238,6 +239,10 @@ struct nft_set_elem { u32 buf[NFT_DATA_VALUE_MAXLEN / sizeof(u32)]; struct nft_data val; } key; + union { + u32 buf[NFT_DATA_VALUE_MAXLEN / sizeof(u32)]; + struct nft_data val; + } key_end; void *priv; }; @@ -502,6 +507,7 @@ void nf_tables_destroy_set(const struct nft_ctx *ctx, struct nft_set *set); * enum nft_set_extensions - set extension type IDs * * @NFT_SET_EXT_KEY: element key + * @NFT_SET_EXT_KEY_END: upper bound element key, for ranges * @NFT_SET_EXT_DATA: mapping data * @NFT_SET_EXT_FLAGS: element flags * @NFT_SET_EXT_TIMEOUT: element timeout @@ -513,6 +519,7 @@ void nf_tables_destroy_set(const struct nft_ctx *ctx, struct nft_set *set); */ enum nft_set_extensions { NFT_SET_EXT_KEY, + NFT_SET_EXT_KEY_END, NFT_SET_EXT_DATA, NFT_SET_EXT_FLAGS, NFT_SET_EXT_TIMEOUT, @@ -606,6 +613,11 @@ static inline struct nft_data *nft_set_ext_key(const struct nft_set_ext *ext) return nft_set_ext(ext, NFT_SET_EXT_KEY); } +static inline struct nft_data *nft_set_ext_key_end(const struct nft_set_ext *ext) +{ + return nft_set_ext(ext, NFT_SET_EXT_KEY_END); +} + static inline struct nft_data *nft_set_ext_data(const struct nft_set_ext *ext) { return 
nft_set_ext(ext, NFT_SET_EXT_DATA); @@ -655,7 +667,7 @@ static inline struct nft_object **nft_set_ext_obj(const struct nft_set_ext *ext) void *nft_set_elem_init(const struct nft_set *set, const struct nft_set_ext_tmpl *tmpl, - const u32 *key, const u32 *data, + const u32 *key, const u32 *key_end, const u32 *data, u64 timeout, u64 expiration, gfp_t gfp); void nft_set_elem_destroy(const struct nft_set *set, void *elem, bool destroy_expr); diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h index 261864736b26..c13106496bd2 100644 --- a/include/uapi/linux/netfilter/nf_tables.h +++ b/include/uapi/linux/netfilter/nf_tables.h @@ -370,6 +370,7 @@ enum nft_set_elem_flags { * @NFTA_SET_ELEM_USERDATA: user data (NLA_BINARY) * @NFTA_SET_ELEM_EXPR: expression (NLA_NESTED: nft_expr_attributes) * @NFTA_SET_ELEM_OBJREF: stateful object reference (NLA_STRING) + * @NFTA_SET_ELEM_KEY_END: closing key value (NLA_NESTED: nft_data) */ enum nft_set_elem_attributes { NFTA_SET_ELEM_UNSPEC, @@ -382,6 +383,7 @@ enum nft_set_elem_attributes { NFTA_SET_ELEM_EXPR, NFTA_SET_ELEM_PAD, NFTA_SET_ELEM_OBJREF, + NFTA_SET_ELEM_KEY_END, __NFTA_SET_ELEM_MAX }; #define NFTA_SET_ELEM_MAX (__NFTA_SET_ELEM_MAX - 1) diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 58e3b285a3d1..5f645a85538a 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -4215,6 +4215,9 @@ const struct nft_set_ext_type nft_set_ext_types[] = { .len = sizeof(struct nft_userdata), .align = __alignof__(struct nft_userdata), }, + [NFT_SET_EXT_KEY_END] = { + .align = __alignof__(u32), + }, }; EXPORT_SYMBOL_GPL(nft_set_ext_types); @@ -4233,6 +4236,7 @@ static const struct nla_policy nft_set_elem_policy[NFTA_SET_ELEM_MAX + 1] = { [NFTA_SET_ELEM_EXPR] = { .type = NLA_NESTED }, [NFTA_SET_ELEM_OBJREF] = { .type = NLA_STRING, .len = NFT_OBJ_MAXNAMELEN - 1 }, + [NFTA_SET_ELEM_KEY_END] = { .type = NLA_NESTED }, }; static const struct nla_policy nft_set_elem_list_policy[NFTA_SET_ELEM_LIST_MAX + 1] = { @@ -4282,6 +4286,11 @@ static int nf_tables_fill_setelem(struct sk_buff *skb, NFT_DATA_VALUE, set->klen) < 0) goto nla_put_failure; + if (nft_set_ext_exists(ext, NFT_SET_EXT_KEY_END) && + nft_data_dump(skb, NFTA_SET_ELEM_KEY_END, nft_set_ext_key_end(ext), + NFT_DATA_VALUE, set->klen) < 0) + goto nla_put_failure; + if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA) && nft_data_dump(skb, NFTA_SET_ELEM_DATA, nft_set_ext_data(ext), set->dtype == NFT_DATA_VERDICT ? 
NFT_DATA_VERDICT : NFT_DATA_VALUE, @@ -4569,6 +4578,13 @@ static int nft_get_set_elem(struct nft_ctx *ctx, struct nft_set *set, if (err < 0) return err; + if (nla[NFTA_SET_ELEM_KEY_END]) { + err = nft_setelem_parse_key(ctx, set, &elem.key_end.val, + nla[NFTA_SET_ELEM_KEY_END]); + if (err < 0) + return err; + } + priv = set->ops->get(ctx->net, set, &elem, flags); if (IS_ERR(priv)) return PTR_ERR(priv); @@ -4694,8 +4710,8 @@ static struct nft_trans *nft_trans_elem_alloc(struct nft_ctx *ctx, void *nft_set_elem_init(const struct nft_set *set, const struct nft_set_ext_tmpl *tmpl, - const u32 *key, const u32 *data, - u64 timeout, u64 expiration, gfp_t gfp) + const u32 *key, const u32 *key_end, + const u32 *data, u64 timeout, u64 expiration, gfp_t gfp) { struct nft_set_ext *ext; void *elem; @@ -4708,6 +4724,8 @@ void *nft_set_elem_init(const struct nft_set *set, nft_set_ext_init(ext, tmpl); memcpy(nft_set_ext_key(ext), key, set->klen); + if (nft_set_ext_exists(ext, NFT_SET_EXT_KEY_END)) + memcpy(nft_set_ext_key_end(ext), key_end, set->klen); if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA)) memcpy(nft_set_ext_data(ext), data, set->dlen); if (nft_set_ext_exists(ext, NFT_SET_EXT_EXPIRATION)) { @@ -4842,9 +4860,19 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set, err = nft_setelem_parse_key(ctx, set, &elem.key.val, nla[NFTA_SET_ELEM_KEY]); if (err < 0) - goto err1; + return err; nft_set_ext_add_length(&tmpl, NFT_SET_EXT_KEY, set->klen); + + if (nla[NFTA_SET_ELEM_KEY_END]) { + err = nft_setelem_parse_key(ctx, set, &elem.key_end.val, + nla[NFTA_SET_ELEM_KEY_END]); + if (err < 0) + goto err_parse_key; + + nft_set_ext_add_length(&tmpl, NFT_SET_EXT_KEY_END, set->klen); + } + if (timeout > 0) { nft_set_ext_add(&tmpl, NFT_SET_EXT_EXPIRATION); if (timeout != set->timeout) @@ -4854,14 +4882,14 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set, if (nla[NFTA_SET_ELEM_OBJREF] != NULL) { if (!(set->flags & NFT_SET_OBJECT)) { err = -EINVAL; - goto err2; + goto err_parse_key_end; } obj = nft_obj_lookup(ctx->net, ctx->table, nla[NFTA_SET_ELEM_OBJREF], set->objtype, genmask); if (IS_ERR(obj)) { err = PTR_ERR(obj); - goto err2; + goto err_parse_key_end; } nft_set_ext_add(&tmpl, NFT_SET_EXT_OBJREF); } @@ -4870,11 +4898,11 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set, err = nft_data_init(ctx, &data, sizeof(data), &desc, nla[NFTA_SET_ELEM_DATA]); if (err < 0) - goto err2; + goto err_parse_key_end; err = -EINVAL; if (set->dtype != NFT_DATA_VERDICT && desc.len != set->dlen) - goto err3; + goto err_parse_data; dreg = nft_type_to_reg(set->dtype); list_for_each_entry(binding, &set->bindings, list) { @@ -4892,7 +4920,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set, &data, desc.type, desc.len); if (err < 0) - goto err3; + goto err_parse_data; if (desc.type == NFT_DATA_VERDICT && (data.verdict.code == NFT_GOTO || @@ -4917,10 +4945,11 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set, } err = -ENOMEM; - elem.priv = nft_set_elem_init(set, &tmpl, elem.key.val.data, data.data, + elem.priv = nft_set_elem_init(set, &tmpl, elem.key.val.data, + elem.key_end.val.data, data.data, timeout, expiration, GFP_KERNEL); if (elem.priv == NULL) - goto err3; + goto err_parse_data; ext = nft_set_elem_ext(set, elem.priv); if (flags) @@ -4937,7 +4966,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set, trans = nft_trans_elem_alloc(ctx, NFT_MSG_NEWSETELEM, set); if (trans == NULL) - goto err4; + goto err_trans; 
ext->genmask = nft_genmask_cur(ctx->net) | NFT_SET_ELEM_BUSY_MASK; err = set->ops->insert(ctx->net, set, &elem, &ext2); @@ -4948,7 +4977,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set, nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF) ^ nft_set_ext_exists(ext2, NFT_SET_EXT_OBJREF)) { err = -EBUSY; - goto err5; + goto err_element_clash; } if ((nft_set_ext_exists(ext, NFT_SET_EXT_DATA) && nft_set_ext_exists(ext2, NFT_SET_EXT_DATA) && @@ -4961,33 +4990,35 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set, else if (!(nlmsg_flags & NLM_F_EXCL)) err = 0; } - goto err5; + goto err_element_clash; } if (set->size && !atomic_add_unless(&set->nelems, 1, set->size + set->ndeact)) { err = -ENFILE; - goto err6; + goto err_set_full; } nft_trans_elem(trans) = elem; list_add_tail(&trans->list, &ctx->net->nft.commit_list); return 0; -err6: +err_set_full: set->ops->remove(ctx->net, set, &elem); -err5: +err_element_clash: kfree(trans); -err4: +err_trans: if (obj) obj->use--; kfree(elem.priv); -err3: +err_parse_data: if (nla[NFTA_SET_ELEM_DATA] != NULL) nft_data_release(&data, desc.type); -err2: +err_parse_key_end: + nft_data_release(&elem.key_end.val, NFT_DATA_VALUE); +err_parse_key: nft_data_release(&elem.key.val, NFT_DATA_VALUE); -err1: + return err; } @@ -5112,9 +5143,19 @@ static int nft_del_setelem(struct nft_ctx *ctx, struct nft_set *set, nft_set_ext_add_length(&tmpl, NFT_SET_EXT_KEY, set->klen); + if (nla[NFTA_SET_ELEM_KEY_END]) { + err = nft_setelem_parse_key(ctx, set, &elem.key_end.val, + nla[NFTA_SET_ELEM_KEY_END]); + if (err < 0) + return err; + + nft_set_ext_add_length(&tmpl, NFT_SET_EXT_KEY_END, set->klen); + } + err = -ENOMEM; - elem.priv = nft_set_elem_init(set, &tmpl, elem.key.val.data, NULL, 0, - 0, GFP_KERNEL); + elem.priv = nft_set_elem_init(set, &tmpl, elem.key.val.data, + elem.key_end.val.data, NULL, 0, 0, + GFP_KERNEL); if (elem.priv == NULL) goto fail_elem; diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c index 8887295414dc..683785225a3e 100644 --- a/net/netfilter/nft_dynset.c +++ b/net/netfilter/nft_dynset.c @@ -54,7 +54,7 @@ static void *nft_dynset_new(struct nft_set *set, const struct nft_expr *expr, timeout = priv->timeout ? : set->timeout; elem = nft_set_elem_init(set, &priv->tmpl, - ®s->data[priv->sreg_key], + ®s->data[priv->sreg_key], NULL, ®s->data[priv->sreg_data], timeout, 0, GFP_ATOMIC); if (elem == NULL) -- cgit v1.2.3-71-gd317 From f3a2181e16f1dcbf5446ed43f6b5d9f56c459f85 Mon Sep 17 00:00:00 2001 From: Stefano Brivio Date: Wed, 22 Jan 2020 00:17:53 +0100 Subject: netfilter: nf_tables: Support for sets with multiple ranged fields Introduce a new nested netlink attribute, NFTA_SET_DESC_CONCAT, used to specify the length of each field in a set concatenation. This allows set implementations to support concatenation of multiple ranged items, as they can divide the input key into matching data for every single field. Such set implementations would be selected as they specify support for NFT_SET_INTERVAL and allow desc->field_count to be greater than one. Explicitly disallow this for nft_set_rbtree. In order to specify the interval for a set entry, userspace would include in NFTA_SET_DESC_CONCAT attributes field lengths, and pass range endpoints as two separate keys, represented by attributes NFTA_SET_ELEM_KEY and NFTA_SET_ELEM_KEY_END. While at it, export the number of 32-bit registers available for packet matching, as nftables will need this to know the maximum number of field lengths that can be specified. 
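To sketch the userspace side, a tool based on libmnl could describe a 4-byte address . 2-byte port concatenation roughly as follows (illustrative only: the attribute names come from this patch, while the put_desc_concat() helper, the NFT_MSG_NEWSET message setup, and the byte lengths are assumptions matching the example below):

    #include <stdint.h>
    #include <arpa/inet.h>		/* htonl() */
    #include <libmnl/libmnl.h>
    #include <linux/netfilter/nf_tables.h>

    /* Append NFTA_SET_DESC, carrying a two-field concatenation
     * description (IPv4 address, then port), to an NFT_MSG_NEWSET
     * request whose header 'nlh' was prepared elsewhere. Field
     * lengths are sent as big-endian u32 values, in bytes.
     */
    static void put_desc_concat(struct nlmsghdr *nlh)
    {
            static const uint32_t field_len[] = { 4, 2 };	/* bytes */
            struct nlattr *desc, *concat, *field;
            size_t i;

            desc = mnl_attr_nest_start(nlh, NFTA_SET_DESC);
            concat = mnl_attr_nest_start(nlh, NFTA_SET_DESC_CONCAT);

            for (i = 0; i < sizeof(field_len) / sizeof(field_len[0]); i++) {
                    field = mnl_attr_nest_start(nlh, NFTA_LIST_ELEM);
                    mnl_attr_put_u32(nlh, NFTA_SET_FIELD_LEN,
                                     htonl(field_len[i]));
                    mnl_attr_nest_end(nlh, field);
            }

            mnl_attr_nest_end(nlh, concat);
            mnl_attr_nest_end(nlh, desc);
    }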
For example, "packets with an IPv4 address between 192.0.2.0 and 192.0.2.42, with destination port between 22 and 25", can be expressed as two concatenated elements: NFTA_SET_ELEM_KEY: 192.0.2.0 . 22 NFTA_SET_ELEM_KEY_END: 192.0.2.42 . 25 and NFTA_SET_DESC_CONCAT attribute would contain: NFTA_LIST_ELEM NFTA_SET_FIELD_LEN: 4 NFTA_LIST_ELEM NFTA_SET_FIELD_LEN: 2 v4: No changes v3: Complete rework, NFTA_SET_DESC_CONCAT instead of NFTA_SET_SUBKEY v2: No changes Signed-off-by: Stefano Brivio Signed-off-by: Pablo Neira Ayuso --- include/net/netfilter/nf_tables.h | 8 +++ include/uapi/linux/netfilter/nf_tables.h | 15 ++++++ net/netfilter/nf_tables_api.c | 90 +++++++++++++++++++++++++++++++- net/netfilter/nft_set_rbtree.c | 3 ++ 4 files changed, 115 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h index 504c0aa93805..4170c033d461 100644 --- a/include/net/netfilter/nf_tables.h +++ b/include/net/netfilter/nf_tables.h @@ -264,11 +264,15 @@ struct nft_set_iter { * @klen: key length * @dlen: data length * @size: number of set elements + * @field_len: length of each field in concatenation, bytes + * @field_count: number of concatenated fields in element */ struct nft_set_desc { unsigned int klen; unsigned int dlen; unsigned int size; + u8 field_len[NFT_REG32_COUNT]; + u8 field_count; }; /** @@ -409,6 +413,8 @@ void nft_unregister_set(struct nft_set_type *type); * @dtype: data type (verdict or numeric type defined by userspace) * @objtype: object type (see NFT_OBJECT_* definitions) * @size: maximum set size + * @field_len: length of each field in concatenation, bytes + * @field_count: number of concatenated fields in element * @use: number of rules references to this set * @nelems: number of elements * @ndeact: number of deactivated elements queued for removal @@ -435,6 +441,8 @@ struct nft_set { u32 dtype; u32 objtype; u32 size; + u8 field_len[NFT_REG32_COUNT]; + u8 field_count; u32 use; atomic_t nelems; u32 ndeact; diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h index c13106496bd2..065218a20bb7 100644 --- a/include/uapi/linux/netfilter/nf_tables.h +++ b/include/uapi/linux/netfilter/nf_tables.h @@ -48,6 +48,7 @@ enum nft_registers { #define NFT_REG_SIZE 16 #define NFT_REG32_SIZE 4 +#define NFT_REG32_COUNT (NFT_REG32_15 - NFT_REG32_00 + 1) /** * enum nft_verdicts - nf_tables internal verdicts @@ -301,14 +302,28 @@ enum nft_set_policies { * enum nft_set_desc_attributes - set element description * * @NFTA_SET_DESC_SIZE: number of elements in set (NLA_U32) + * @NFTA_SET_DESC_CONCAT: description of field concatenation (NLA_NESTED) */ enum nft_set_desc_attributes { NFTA_SET_DESC_UNSPEC, NFTA_SET_DESC_SIZE, + NFTA_SET_DESC_CONCAT, __NFTA_SET_DESC_MAX }; #define NFTA_SET_DESC_MAX (__NFTA_SET_DESC_MAX - 1) +/** + * enum nft_set_field_attributes - attributes of concatenated fields + * + * @NFTA_SET_FIELD_LEN: length of single field, in bits (NLA_U32) + */ +enum nft_set_field_attributes { + NFTA_SET_FIELD_UNSPEC, + NFTA_SET_FIELD_LEN, + __NFTA_SET_FIELD_MAX +}; +#define NFTA_SET_FIELD_MAX (__NFTA_SET_FIELD_MAX - 1) + /** * enum nft_set_attributes - nf_tables set netlink attributes * diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 5f645a85538a..d1318bdf49ca 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -3391,6 +3391,7 @@ static const struct nla_policy nft_set_policy[NFTA_SET_MAX + 1] = { static const struct 
nla_policy nft_set_desc_policy[NFTA_SET_DESC_MAX + 1] = { [NFTA_SET_DESC_SIZE] = { .type = NLA_U32 }, + [NFTA_SET_DESC_CONCAT] = { .type = NLA_NESTED }, }; static int nft_ctx_init_from_setattr(struct nft_ctx *ctx, struct net *net, @@ -3557,6 +3558,33 @@ static __be64 nf_jiffies64_to_msecs(u64 input) return cpu_to_be64(jiffies64_to_msecs(input)); } +static int nf_tables_fill_set_concat(struct sk_buff *skb, + const struct nft_set *set) +{ + struct nlattr *concat, *field; + int i; + + concat = nla_nest_start_noflag(skb, NFTA_SET_DESC_CONCAT); + if (!concat) + return -ENOMEM; + + for (i = 0; i < set->field_count; i++) { + field = nla_nest_start_noflag(skb, NFTA_LIST_ELEM); + if (!field) + return -ENOMEM; + + if (nla_put_be32(skb, NFTA_SET_FIELD_LEN, + htonl(set->field_len[i]))) + return -ENOMEM; + + nla_nest_end(skb, field); + } + + nla_nest_end(skb, concat); + + return 0; +} + static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx, const struct nft_set *set, u16 event, u16 flags) { @@ -3620,11 +3648,17 @@ static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx, goto nla_put_failure; desc = nla_nest_start_noflag(skb, NFTA_SET_DESC); + if (desc == NULL) goto nla_put_failure; if (set->size && nla_put_be32(skb, NFTA_SET_DESC_SIZE, htonl(set->size))) goto nla_put_failure; + + if (set->field_count > 1 && + nf_tables_fill_set_concat(skb, set)) + goto nla_put_failure; + nla_nest_end(skb, desc); nlmsg_end(skb, nlh); @@ -3797,6 +3831,53 @@ err: return err; } +static const struct nla_policy nft_concat_policy[NFTA_SET_FIELD_MAX + 1] = { + [NFTA_SET_FIELD_LEN] = { .type = NLA_U32 }, +}; + +static int nft_set_desc_concat_parse(const struct nlattr *attr, + struct nft_set_desc *desc) +{ + struct nlattr *tb[NFTA_SET_FIELD_MAX + 1]; + u32 len; + int err; + + err = nla_parse_nested_deprecated(tb, NFTA_SET_FIELD_MAX, attr, + nft_concat_policy, NULL); + if (err < 0) + return err; + + if (!tb[NFTA_SET_FIELD_LEN]) + return -EINVAL; + + len = ntohl(nla_get_be32(tb[NFTA_SET_FIELD_LEN])); + + if (len * BITS_PER_BYTE / 32 > NFT_REG32_COUNT) + return -E2BIG; + + desc->field_len[desc->field_count++] = len; + + return 0; +} + +static int nft_set_desc_concat(struct nft_set_desc *desc, + const struct nlattr *nla) +{ + struct nlattr *attr; + int rem, err; + + nla_for_each_nested(attr, nla, rem) { + if (nla_type(attr) != NFTA_LIST_ELEM) + return -EINVAL; + + err = nft_set_desc_concat_parse(attr, desc); + if (err < 0) + return err; + } + + return 0; +} + static int nf_tables_set_desc_parse(struct nft_set_desc *desc, const struct nlattr *nla) { @@ -3810,8 +3891,10 @@ static int nf_tables_set_desc_parse(struct nft_set_desc *desc, if (da[NFTA_SET_DESC_SIZE] != NULL) desc->size = ntohl(nla_get_be32(da[NFTA_SET_DESC_SIZE])); + if (da[NFTA_SET_DESC_CONCAT]) + err = nft_set_desc_concat(desc, da[NFTA_SET_DESC_CONCAT]); - return 0; + return err; } static int nf_tables_newset(struct net *net, struct sock *nlsk, @@ -3834,6 +3917,7 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk, unsigned char *udata; u16 udlen; int err; + int i; if (nla[NFTA_SET_TABLE] == NULL || nla[NFTA_SET_NAME] == NULL || @@ -4012,6 +4096,10 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk, set->gc_int = gc_int; set->handle = nf_tables_alloc_handle(table); + set->field_count = desc.field_count; + for (i = 0; i < desc.field_count; i++) + set->field_len[i] = desc.field_len[i]; + err = ops->init(set, &desc, nla); if (err < 0) goto err3; diff --git a/net/netfilter/nft_set_rbtree.c 
b/net/netfilter/nft_set_rbtree.c index a9f804f7a04a..5000b938ab1e 100644 --- a/net/netfilter/nft_set_rbtree.c +++ b/net/netfilter/nft_set_rbtree.c @@ -466,6 +466,9 @@ static void nft_rbtree_destroy(const struct nft_set *set) static bool nft_rbtree_estimate(const struct nft_set_desc *desc, u32 features, struct nft_set_estimate *est) { + if (desc->field_count > 1) + return false; + if (desc->size) est->size = sizeof(struct nft_rbtree) + desc->size * sizeof(struct nft_rbtree_elem); -- cgit v1.2.3-71-gd317 From 2092767168f0681aa03727448b801600a364c013 Mon Sep 17 00:00:00 2001 From: Stefano Brivio Date: Wed, 22 Jan 2020 00:17:54 +0100 Subject: bitmap: Introduce bitmap_cut(): cut bits and shift remaining The new bitmap function bitmap_cut() copies bits from source to destination by removing the region specified by parameters first and cut, and remapping the bits above the cut region by right shifting them. Signed-off-by: Stefano Brivio Signed-off-by: Pablo Neira Ayuso --- include/linux/bitmap.h | 4 +++ lib/bitmap.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 70 insertions(+) (limited to 'include') diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h index ff335b22f23c..f0f3a9fffa6a 100644 --- a/include/linux/bitmap.h +++ b/include/linux/bitmap.h @@ -53,6 +53,7 @@ * bitmap_find_next_zero_area_off(buf, len, pos, n, mask) as above * bitmap_shift_right(dst, src, n, nbits) *dst = *src >> n * bitmap_shift_left(dst, src, n, nbits) *dst = *src << n + * bitmap_cut(dst, src, first, n, nbits) Cut n bits from first, copy rest * bitmap_replace(dst, old, new, mask, nbits) *dst = (*old & ~(*mask)) | (*new & *mask) * bitmap_remap(dst, src, old, new, nbits) *dst = map(old, new)(src) * bitmap_bitremap(oldbit, old, new, nbits) newbit = map(old, new)(oldbit) @@ -133,6 +134,9 @@ extern void __bitmap_shift_right(unsigned long *dst, const unsigned long *src, unsigned int shift, unsigned int nbits); extern void __bitmap_shift_left(unsigned long *dst, const unsigned long *src, unsigned int shift, unsigned int nbits); +extern void bitmap_cut(unsigned long *dst, const unsigned long *src, + unsigned int first, unsigned int cut, + unsigned int nbits); extern int __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, const unsigned long *bitmap2, unsigned int nbits); extern void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1, diff --git a/lib/bitmap.c b/lib/bitmap.c index 4250519d7d1c..6e175fbd69a9 100644 --- a/lib/bitmap.c +++ b/lib/bitmap.c @@ -168,6 +168,72 @@ void __bitmap_shift_left(unsigned long *dst, const unsigned long *src, } EXPORT_SYMBOL(__bitmap_shift_left); +/** + * bitmap_cut() - remove bit region from bitmap and right shift remaining bits + * @dst: destination bitmap, might overlap with src + * @src: source bitmap + * @first: start bit of region to be removed + * @cut: number of bits to remove + * @nbits: bitmap size, in bits + * + * Set the n-th bit of @dst iff the n-th bit of @src is set and + * n is less than @first, or the m-th bit of @src is set for any + * m such that @first <= n < nbits, and m = n + @cut. 
+ * + * In pictures, example for a big-endian 32-bit architecture: + * + * @src: + * 31 63 + * | | + * 10000000 11000001 11110010 00010101 10000000 11000001 01110010 00010101 + * | | | | + * 16 14 0 32 + * + * if @cut is 3, and @first is 14, bits 14-16 in @src are cut and @dst is: + * + * 31 63 + * | | + * 10110000 00011000 00110010 00010101 00010000 00011000 00101110 01000010 + * | | | + * 14 (bit 17 0 32 + * from @src) + * + * Note that @dst and @src might overlap partially or entirely. + * + * This is implemented in the obvious way, with a shift and carry + * step for each moved bit. Optimisation is left as an exercise + * for the compiler. + */ +void bitmap_cut(unsigned long *dst, const unsigned long *src, + unsigned int first, unsigned int cut, unsigned int nbits) +{ + unsigned int len = BITS_TO_LONGS(nbits); + unsigned long keep = 0, carry; + int i; + + memmove(dst, src, len * sizeof(*dst)); + + if (first % BITS_PER_LONG) { + keep = src[first / BITS_PER_LONG] & + (~0UL >> (BITS_PER_LONG - first % BITS_PER_LONG)); + } + + while (cut--) { + for (i = first / BITS_PER_LONG; i < len; i++) { + if (i < len - 1) + carry = dst[i + 1] & 1UL; + else + carry = 0; + + dst[i] = (dst[i] >> 1) | (carry << (BITS_PER_LONG - 1)); + } + } + + dst[first / BITS_PER_LONG] &= ~0UL << (first % BITS_PER_LONG); + dst[first / BITS_PER_LONG] |= keep; +} +EXPORT_SYMBOL(bitmap_cut); + int __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, const unsigned long *bitmap2, unsigned int bits) { -- cgit v1.2.3-71-gd317 From 3c4287f62044a90e73a561aa05fc46e62da173da Mon Sep 17 00:00:00 2001 From: Stefano Brivio Date: Wed, 22 Jan 2020 00:17:55 +0100 Subject: nf_tables: Add set type for arbitrary concatenation of ranges This new set type allows for intervals in concatenated fields, which are expressed in the usual way, that is, simple byte concatenation with padding to 32 bits for single fields, and given as ranges by specifying start and end elements containing, each, the full concatenation of start and end values for the single fields. Ranges are expanded to composing netmasks, for each field: these are inserted as rules in per-field lookup tables. Bits to be classified are divided in 4-bit groups, and for each group, the lookup table contains 4^2 buckets, representing all the possible values of a bit group. This approach was inspired by the Grouper algorithm: http://www.cse.usf.edu/~ligatti/projects/grouper/ Matching is performed by a sequence of AND operations between bucket values, with buckets selected according to the value of packet bits, for each group. The result of this sequence tells us which rules matched for a given field. In order to concatenate several ranged fields, per-field rules are mapped using mapping arrays, one per field, that specify which rules should be considered while matching the next field. The mapping array for the last field contains a reference to the element originally inserted. The notes in nft_set_pipapo.c cover the algorithm in deeper detail. A pure hash-based approach is of no use here, as ranges need to be classified. An implementation based on "proxying" the existing red-black tree set type, creating a tree for each field, was considered, but deemed impractical due to the fact that elements would need to be shared between trees, at least as long as we want to keep UAPI changes to a minimum. A stand-alone implementation of this algorithm is available at: https://pipapo.lameexcu.se together with notes about possible future optimisations (in pipapo.c). 
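To make the lookup-table-and-AND idea concrete, here is a minimal, self-contained userspace sketch, not the kernel implementation: a single 16-bit field, 4-bit groups, 16 buckets per group, non-ranged entries only, and one 64-bit result bitmap capping it at 64 rules:

    #include <stdint.h>
    #include <stdio.h>

    #define GROUPS  4	/* 16-bit field, 4-bit groups */
    #define BUCKETS 16	/* 2^4 possible values per group */

    /* Lookup table: per group, per bucket, one bit per rule */
    static uint64_t lt[GROUPS][BUCKETS];

    /* Value of 4-bit group g, most significant group first */
    static int group_val(uint16_t key, int g)
    {
            return (key >> (12 - 4 * g)) & 0xf;
    }

    /* Non-ranged insert: set the rule bit in the single matching
     * bucket of every group
     */
    static void insert_rule(int rule, uint16_t key)
    {
            for (int g = 0; g < GROUPS; g++)
                    lt[g][group_val(key, g)] |= 1ULL << rule;
    }

    /* Match: AND together the buckets selected by the packet's bit
     * groups; bit n of the result means rule n matched
     */
    static uint64_t lookup(uint16_t key)
    {
            uint64_t res = ~0ULL;

            for (int g = 0; g < GROUPS; g++)
                    res &= lt[g][group_val(key, g)];
            return res;
    }

    int main(void)
    {
            insert_rule(0, 1024);
            insert_rule(1, 2048);

            /* Prints 0x1 0x2 0: rule 0, rule 1, no match */
            printf("%#llx %#llx %#llx\n",
                   (unsigned long long)lookup(1024),
                   (unsigned long long)lookup(2048),
                   (unsigned long long)lookup(53));
            return 0;
    }

In the actual implementation, ranges additionally expand to several rules per entry, and concatenation chains the per-field results through the mapping tables, as described above.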
This algorithm was designed with data locality in mind, and can be highly optimised for SIMD instruction sets, as the bulk of the matching work is done with repetitive, simple bitwise operations. At this point, without further optimisations, nft_concat_range.sh reports, for one AMD Epyc 7351 thread (2.9GHz, 512 KiB L1D$, 8 MiB L2$): TEST: performance net,port [ OK ] baseline (drop from netdev hook): 10190076pps baseline hash (non-ranged entries): 6179564pps baseline rbtree (match on first field only): 2950341pps set with 1000 full, ranged entries: 2304165pps port,net [ OK ] baseline (drop from netdev hook): 10143615pps baseline hash (non-ranged entries): 6135776pps baseline rbtree (match on first field only): 4311934pps set with 100 full, ranged entries: 4131471pps net6,port [ OK ] baseline (drop from netdev hook): 9730404pps baseline hash (non-ranged entries): 4809557pps baseline rbtree (match on first field only): 1501699pps set with 1000 full, ranged entries: 1092557pps port,proto [ OK ] baseline (drop from netdev hook): 10812426pps baseline hash (non-ranged entries): 6929353pps baseline rbtree (match on first field only): 3027105pps set with 30000 full, ranged entries: 284147pps net6,port,mac [ OK ] baseline (drop from netdev hook): 9660114pps baseline hash (non-ranged entries): 3778877pps baseline rbtree (match on first field only): 3179379pps set with 10 full, ranged entries: 2082880pps net6,port,mac,proto [ OK ] baseline (drop from netdev hook): 9718324pps baseline hash (non-ranged entries): 3799021pps baseline rbtree (match on first field only): 1506689pps set with 1000 full, ranged entries: 783810pps net,mac [ OK ] baseline (drop from netdev hook): 10190029pps baseline hash (non-ranged entries): 5172218pps baseline rbtree (match on first field only): 2946863pps set with 1000 full, ranged entries: 1279122pps v4: - fix build for 32-bit architectures: 64-bit division needs div_u64() (kbuild test robot ) v3: - rework interface for field length specification, NFT_SET_SUBKEY disappears and information is stored in description - remove scratch area to store closing element of ranges, as elements now come with an actual attribute to specify the upper range limit (Pablo Neira Ayuso) - also remove pointer to 'start' element from mapping table, closing key is now accessible via extension data - use bytes right away instead of bits for field lengths, this way we can also double the inner loop of the lookup function to take care of upper and lower bits in a single iteration (minor performance improvement) - make it clearer that set operations are actually atomic API-wise, but we can't e.g. 
implement flush() as one-shot action - fix type for 'dup' in nft_pipapo_insert(), check for duplicates only in the next generation, and in general take care of differentiating generation mask cases depending on the operation (Pablo Neira Ayuso) - report C implementation matching rate in commit message, so that AVX2 implementation can be compared (Pablo Neira Ayuso) v2: - protect access to scratch maps in nft_pipapo_lookup() with local_bh_disable/enable() (Florian Westphal) - drop rcu_read_lock/unlock() from nft_pipapo_lookup(), it's already implied (Florian Westphal) - explain why partial allocation failures don't need handling in pipapo_realloc_scratch(), rename 'm' to clone and update related kerneldoc to make it clear we're not operating on the live copy (Florian Westphal) - add explicit check for priv->start_elem in nft_pipapo_insert() to avoid ending up in nft_pipapo_walk() with a NULL start element, and also zero it out in every operation that might make it invalid, so that insertion doesn't proceed with an invalid element (Florian Westphal) Signed-off-by: Stefano Brivio Signed-off-by: Pablo Neira Ayuso --- include/net/netfilter/nf_tables_core.h | 1 + net/netfilter/Makefile | 3 +- net/netfilter/nf_tables_set_core.c | 2 + net/netfilter/nft_set_pipapo.c | 2102 ++++++++++++++++++++++++++++++++ 4 files changed, 2107 insertions(+), 1 deletion(-) create mode 100644 net/netfilter/nft_set_pipapo.c (limited to 'include') diff --git a/include/net/netfilter/nf_tables_core.h b/include/net/netfilter/nf_tables_core.h index 2656155b4069..29e7e1021267 100644 --- a/include/net/netfilter/nf_tables_core.h +++ b/include/net/netfilter/nf_tables_core.h @@ -74,6 +74,7 @@ extern struct nft_set_type nft_set_hash_type; extern struct nft_set_type nft_set_hash_fast_type; extern struct nft_set_type nft_set_rbtree_type; extern struct nft_set_type nft_set_bitmap_type; +extern struct nft_set_type nft_set_pipapo_type; struct nft_expr; struct nft_regs; diff --git a/net/netfilter/Makefile b/net/netfilter/Makefile index 5e9b2eb24349..3f572e5a975e 100644 --- a/net/netfilter/Makefile +++ b/net/netfilter/Makefile @@ -81,7 +81,8 @@ nf_tables-objs := nf_tables_core.o nf_tables_api.o nft_chain_filter.o \ nft_chain_route.o nf_tables_offload.o nf_tables_set-objs := nf_tables_set_core.o \ - nft_set_hash.o nft_set_bitmap.o nft_set_rbtree.o + nft_set_hash.o nft_set_bitmap.o nft_set_rbtree.o \ + nft_set_pipapo.o obj-$(CONFIG_NF_TABLES) += nf_tables.o obj-$(CONFIG_NF_TABLES_SET) += nf_tables_set.o diff --git a/net/netfilter/nf_tables_set_core.c b/net/netfilter/nf_tables_set_core.c index a9fce8d10051..586b621007eb 100644 --- a/net/netfilter/nf_tables_set_core.c +++ b/net/netfilter/nf_tables_set_core.c @@ -9,12 +9,14 @@ static int __init nf_tables_set_module_init(void) nft_register_set(&nft_set_rhash_type); nft_register_set(&nft_set_bitmap_type); nft_register_set(&nft_set_rbtree_type); + nft_register_set(&nft_set_pipapo_type); return 0; } static void __exit nf_tables_set_module_exit(void) { + nft_unregister_set(&nft_set_pipapo_type); nft_unregister_set(&nft_set_rbtree_type); nft_unregister_set(&nft_set_bitmap_type); nft_unregister_set(&nft_set_rhash_type); diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c new file mode 100644 index 000000000000..f0cb1e13af50 --- /dev/null +++ b/net/netfilter/nft_set_pipapo.c @@ -0,0 +1,2102 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* PIPAPO: PIle PAcket POlicies: set for arbitrary concatenations of ranges + * + * Copyright (c) 2019-2020 Red Hat GmbH + * + * Author:
Stefano Brivio + */ + +/** + * DOC: Theory of Operation + * + * + * Problem + * ------- + * + * Match packet bytes against entries composed of ranged or non-ranged packet + * field specifiers, mapping them to arbitrary references. For example: + * + * :: + * + * --- fields ---> + * | [net],[port],[net]... => [reference] + * entries [net],[port],[net]... => [reference] + * | [net],[port],[net]... => [reference] + * V ... + * + * where [net] fields can be IP ranges or netmasks, and [port] fields are port + * ranges. Arbitrary packet fields can be matched. + * + * + * Algorithm Overview + * ------------------ + * + * This algorithm is loosely inspired by [Ligatti 2010], and fundamentally + * relies on the consideration that every contiguous range in a space of b bits + * can be converted into b * 2 netmasks, from Theorem 3 in [Rottenstreich 2010], + * as also illustrated in Section 9 of [Kogan 2014]. + * + * Classification against a number of entries, that require matching given bits + * of a packet field, is performed by grouping those bits in sets of arbitrary + * size, and classifying packet bits one group at a time. + * + * Example: + * to match the source port (16 bits) of a packet, we can divide those 16 bits + * in 4 groups of 4 bits each. Given the entry: + * 0000 0001 0101 1001 + * and a packet with source port: + * 0000 0001 1010 1001 + * first and second groups match, but the third doesn't. We conclude that the + * packet doesn't match the given entry. + * + * Translate the set to a sequence of lookup tables, one per field. Each table + * has two dimensions: bit groups to be matched for a single packet field, and + * all the possible values of said groups (buckets). Input entries are + * represented as one or more rules, depending on the number of composing + * netmasks for the given field specifier, and a group match is indicated as a + * set bit, with number corresponding to the rule index, in all the buckets + * whose value matches the entry for a given group. + * + * Rules are mapped between fields through an array of x, n pairs, with each + * item mapping a matched rule to one or more rules. The position of the pair in + * the array indicates the matched rule to be mapped to the next field, x + * indicates the first rule index in the next field, and n the amount of + * next-field rules the current rule maps to. + * + * The mapping array for the last field maps to the desired references. + * + * To match, we perform table lookups using the values of grouped packet bits, + * and use a sequence of bitwise operations to progressively evaluate rule + * matching. + * + * A stand-alone, reference implementation, also including notes about possible + * future optimisations, is available at: + * https://pipapo.lameexcu.se/ + * + * Insertion + * --------- + * + * - For each packet field: + * + * - divide the b packet bits we want to classify into groups of size t, + * obtaining ceil(b / t) groups + * + * Example: match on destination IP address, with t = 4: 32 bits, 8 groups + * of 4 bits each + * + * - allocate a lookup table with one column ("bucket") for each possible + * value of a group, and with one row for each group + * + * Example: 8 groups, 2^4 buckets: + * + * :: + * + * bucket + * group 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 + * 0 + * 1 + * 2 + * 3 + * 4 + * 5 + * 6 + * 7 + * + * - map the bits we want to classify for the current field, for a given + * entry, to a single rule for non-ranged and netmask set items, and to one + * or multiple rules for ranges. 
Ranges are expanded to composing netmasks + * by pipapo_expand(). + * + * Example: 2 entries, 10.0.0.5:1024 and 192.168.1.0-192.168.2.1:2048 + * - rule #0: 10.0.0.5 + * - rule #1: 192.168.1.0/24 + * - rule #2: 192.168.2.0/31 + * + * - insert references to the rules in the lookup table, selecting buckets + * according to bit values of a rule in the given group. This is done by + * pipapo_insert(). + * + * Example: given: + * - rule #0: 10.0.0.5 mapping to buckets + * < 0 10 0 0 0 0 0 5 > + * - rule #1: 192.168.1.0/24 mapping to buckets + * < 12 0 10 8 0 1 < 0..15 > < 0..15 > > + * - rule #2: 192.168.2.0/31 mapping to buckets + * < 12 0 10 8 0 2 0 < 0..1 > > + * + * these bits are set in the lookup table: + * + * :: + * + * bucket + * group 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 + * 0 0 1,2 + * 1 1,2 0 + * 2 0 1,2 + * 3 0 1,2 + * 4 0,1,2 + * 5 0 1 2 + * 6 0,1,2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 + * 7 1,2 1,2 1 1 1 0,1 1 1 1 1 1 1 1 1 1 1 + * + * - if this is not the last field in the set, fill a mapping array that maps + * rules from the lookup table to rules belonging to the same entry in + * the next lookup table, done by pipapo_map(). + * + * Note that as rules map to contiguous ranges of rules, given how netmask + * expansion and insertion is performed, &union nft_pipapo_map_bucket stores + * this information as pairs of first rule index, rule count. + * + * Example: 2 entries, 10.0.0.5:1024 and 192.168.1.0-192.168.2.1:2048, + * given lookup table #0 for field 0 (see example above): + * + * :: + * + * bucket + * group 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 + * 0 0 1,2 + * 1 1,2 0 + * 2 0 1,2 + * 3 0 1,2 + * 4 0,1,2 + * 5 0 1 2 + * 6 0,1,2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 + * 7 1,2 1,2 1 1 1 0,1 1 1 1 1 1 1 1 1 1 1 + * + * and lookup table #1 for field 1 with: + * - rule #0: 1024 mapping to buckets + * < 0 0 4 0 > + * - rule #1: 2048 mapping to buckets + * < 0 0 5 0 > + * + * :: + * + * bucket + * group 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 + * 0 0,1 + * 1 0,1 + * 2 0 1 + * 3 0,1 + * + * we need to map rules for 10.0.0.5 in lookup table #0 (rule #0) to 1024 + * in lookup table #1 (rule #0) and rules for 192.168.1.0-192.168.2.1 + * (rules #1, #2) to 2048 in lookup table #2 (rule #1): + * + * :: + * + * rule indices in current field: 0 1 2 + * map to rules in next field: 0 1 1 + * + * - if this is the last field in the set, fill a mapping array that maps + * rules from the last lookup table to element pointers, also done by + * pipapo_map(). + * + * Note that, in this implementation, we have two elements (start, end) for + * each entry. The pointer to the end element is stored in this array, and + * the pointer to the start element is linked from it. + * + * Example: entry 10.0.0.5:1024 has a corresponding &struct nft_pipapo_elem + * pointer, 0x66, and element for 192.168.1.0-192.168.2.1:2048 is at 0x42. + * From the rules of lookup table #1 as mapped above: + * + * :: + * + * rule indices in last field: 0 1 + * map to elements: 0x42 0x66 + * + * + * Matching + * -------- + * + * We use a result bitmap, with the size of a single lookup table bucket, to + * represent the matching state that applies at every algorithm step. This is + * done by pipapo_lookup(). 
+ * + * - For each packet field: + * + * - start with an all-ones result bitmap (res_map in pipapo_lookup()) + * + * - perform a lookup into the table corresponding to the current field, + * for each group, and at every group, AND the current result bitmap with + * the value from the lookup table bucket + * + * :: + * + * Example: 192.168.1.5 < 12 0 10 8 0 1 0 5 >, with lookup table from + * insertion examples. + * Lookup table buckets are at least 3 bits wide, we'll assume 8 bits for + * convenience in this example. Initial result bitmap is 0xff, the steps + * below show the value of the result bitmap after each group is processed: + * + * bucket + * group 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 + * 0 0 1,2 + * result bitmap is now: 0xff & 0x6 [bucket 12] = 0x6 + * + * 1 1,2 0 + * result bitmap is now: 0x6 & 0x6 [bucket 0] = 0x6 + * + * 2 0 1,2 + * result bitmap is now: 0x6 & 0x6 [bucket 10] = 0x6 + * + * 3 0 1,2 + * result bitmap is now: 0x6 & 0x6 [bucket 8] = 0x6 + * + * 4 0,1,2 + * result bitmap is now: 0x6 & 0x7 [bucket 0] = 0x6 + * + * 5 0 1 2 + * result bitmap is now: 0x6 & 0x2 [bucket 1] = 0x2 + * + * 6 0,1,2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 + * result bitmap is now: 0x2 & 0x7 [bucket 0] = 0x2 + * + * 7 1,2 1,2 1 1 1 0,1 1 1 1 1 1 1 1 1 1 1 + * final result bitmap for this field is: 0x2 & 0x3 [bucket 5] = 0x2 + * + * - at the next field, start with a new, all-zeroes result bitmap. For each + * bit set in the previous result bitmap, fill the new result bitmap + * (fill_map in pipapo_lookup()) with the rule indices from the + * corresponding buckets of the mapping field for this field, done by + * pipapo_refill() + * + * Example: with mapping table from insertion examples, with the current + * result bitmap from the previous example, 0x02: + * + * :: + * + * rule indices in current field: 0 1 2 + * map to rules in next field: 0 1 1 + * + * the new result bitmap will be 0x02: rule 1 was set, and rule 1 will be + * set. + * + * We can now extend this example to cover the second iteration of the step + * above (lookup and AND bitmap): assuming the port field is + * 2048 < 0 0 5 0 >, with starting result bitmap 0x2, and lookup table + * for "port" field from pre-computation example: + * + * :: + * + * bucket + * group 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 + * 0 0,1 + * 1 0,1 + * 2 0 1 + * 3 0,1 + * + * operations are: 0x2 & 0x3 [bucket 0] & 0x3 [bucket 0] & 0x2 [bucket 5] + * & 0x3 [bucket 0], resulting bitmap is 0x2. + * + * - if this is the last field in the set, look up the value from the mapping + * array corresponding to the final result bitmap + * + * Example: 0x2 resulting bitmap from 192.168.1.5:2048, mapping array for + * last field from insertion example: + * + * :: + * + * rule indices in last field: 0 1 + * map to elements: 0x42 0x66 + * + * the matching element is at 0x42. + * + * + * References + * ---------- + * + * [Ligatti 2010] + * A Packet-classification Algorithm for Arbitrary Bitmask Rules, with + * Automatic Time-space Tradeoffs + * Jay Ligatti, Josh Kuhn, and Chris Gage. + * Proceedings of the IEEE International Conference on Computer + * Communication Networks (ICCCN), August 2010. + * http://www.cse.usf.edu/~ligatti/papers/grouper-conf.pdf + * + * [Rottenstreich 2010] + * Worst-Case TCAM Rule Expansion + * Ori Rottenstreich and Isaac Keslassy. + * 2010 Proceedings IEEE INFOCOM, San Diego, CA, 2010. 
+ * http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.212.4592&rep=rep1&type=pdf + * + * [Kogan 2014] + * SAX-PAC (Scalable And eXpressive PAcket Classification) + * Kirill Kogan, Sergey Nikolenko, Ori Rottenstreich, William Culhane, + * and Patrick Eugster. + * Proceedings of the 2014 ACM conference on SIGCOMM, August 2014. + * http://www.sigcomm.org/sites/default/files/ccr/papers/2014/August/2619239-2626294.pdf + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include /* For the maximum length of a field */ +#include +#include + +/* Count of concatenated fields depends on count of 32-bit nftables registers */ +#define NFT_PIPAPO_MAX_FIELDS NFT_REG32_COUNT + +/* Largest supported field size */ +#define NFT_PIPAPO_MAX_BYTES (sizeof(struct in6_addr)) +#define NFT_PIPAPO_MAX_BITS (NFT_PIPAPO_MAX_BYTES * BITS_PER_BYTE) + +/* Number of bits to be grouped together in lookup table buckets, arbitrary */ +#define NFT_PIPAPO_GROUP_BITS 4 +#define NFT_PIPAPO_GROUPS_PER_BYTE (BITS_PER_BYTE / NFT_PIPAPO_GROUP_BITS) + +/* Fields are padded to 32 bits in input registers */ +#define NFT_PIPAPO_GROUPS_PADDED_SIZE(x) \ + (round_up((x) / NFT_PIPAPO_GROUPS_PER_BYTE, sizeof(u32))) +#define NFT_PIPAPO_GROUPS_PADDING(x) \ + (NFT_PIPAPO_GROUPS_PADDED_SIZE((x)) - (x) / NFT_PIPAPO_GROUPS_PER_BYTE) + +/* Number of buckets, given by 2 ^ n, with n grouped bits */ +#define NFT_PIPAPO_BUCKETS (1 << NFT_PIPAPO_GROUP_BITS) + +/* Each n-bit range maps to up to n * 2 rules */ +#define NFT_PIPAPO_MAP_NBITS (const_ilog2(NFT_PIPAPO_MAX_BITS * 2)) + +/* Use the rest of mapping table buckets for rule indices, but it makes no sense + * to exceed 32 bits + */ +#if BITS_PER_LONG == 64 +#define NFT_PIPAPO_MAP_TOBITS 32 +#else +#define NFT_PIPAPO_MAP_TOBITS (BITS_PER_LONG - NFT_PIPAPO_MAP_NBITS) +#endif + +/* ...which gives us the highest allowed index for a rule */ +#define NFT_PIPAPO_RULE0_MAX ((1UL << (NFT_PIPAPO_MAP_TOBITS - 1)) \ + - (1UL << NFT_PIPAPO_MAP_NBITS)) + +#define nft_pipapo_for_each_field(field, index, match) \ + for ((field) = (match)->f, (index) = 0; \ + (index) < (match)->field_count; \ + (index)++, (field)++) + +/** + * union nft_pipapo_map_bucket - Bucket of mapping table + * @to: First rule number (in next field) this rule maps to + * @n: Number of rules (in next field) this rule maps to + * @e: If there's no next field, pointer to element this rule maps to + */ +union nft_pipapo_map_bucket { + struct { +#if BITS_PER_LONG == 64 + static_assert(NFT_PIPAPO_MAP_TOBITS <= 32); + u32 to; + + static_assert(NFT_PIPAPO_MAP_NBITS <= 32); + u32 n; +#else + unsigned long to:NFT_PIPAPO_MAP_TOBITS; + unsigned long n:NFT_PIPAPO_MAP_NBITS; +#endif + }; + struct nft_pipapo_elem *e; +}; + +/** + * struct nft_pipapo_field - Lookup, mapping tables and related data for a field + * @groups: Amount of 4-bit groups + * @rules: Number of inserted rules + * @bsize: Size of each bucket in lookup table, in longs + * @lt: Lookup table: 'groups' rows of NFT_PIPAPO_BUCKETS buckets + * @mt: Mapping table: one bucket per rule + */ +struct nft_pipapo_field { + int groups; + unsigned long rules; + size_t bsize; + unsigned long *lt; + union nft_pipapo_map_bucket *mt; +}; + +/** + * struct nft_pipapo_match - Data used for lookup and matching + * @field_count Amount of fields in set + * @scratch: Preallocated per-CPU maps for partial matching results + * @bsize_max: Maximum lookup table bucket size of all fields, in longs + * @rcu Matching data is swapped on commits + * @f: Fields, with 
lookup and mapping tables + */ +struct nft_pipapo_match { + int field_count; + unsigned long * __percpu *scratch; + size_t bsize_max; + struct rcu_head rcu; + struct nft_pipapo_field f[0]; +}; + +/* Current working bitmap index, toggled between field matches */ +static DEFINE_PER_CPU(bool, nft_pipapo_scratch_index); + +/** + * struct nft_pipapo - Representation of a set + * @match: Currently in-use matching data + * @clone: Copy where pending insertions and deletions are kept + * @groups: Total amount of 4-bit groups for fields in this set + * @width: Total bytes to be matched for one packet, including padding + * @dirty: Working copy has pending insertions or deletions + * @last_gc: Timestamp of last garbage collection run, jiffies + */ +struct nft_pipapo { + struct nft_pipapo_match __rcu *match; + struct nft_pipapo_match *clone; + int groups; + int width; + bool dirty; + unsigned long last_gc; +}; + +struct nft_pipapo_elem; + +/** + * struct nft_pipapo_elem - API-facing representation of single set element + * @ext: nftables API extensions + */ +struct nft_pipapo_elem { + struct nft_set_ext ext; +}; + +/** + * pipapo_refill() - For each set bit, set bits from selected mapping table item + * @map: Bitmap to be scanned for set bits + * @len: Length of bitmap in longs + * @rules: Number of rules in field + * @dst: Destination bitmap + * @mt: Mapping table containing bit set specifiers + * @match_only: Find a single bit and return, don't fill + * + * Iteration over set bits with __builtin_ctzl(): Daniel Lemire, public domain. + * + * For each bit set in map, select the bucket from mapping table with index + * corresponding to the position of the bit set. Use start bit and amount of + * bits specified in bucket to fill region in dst. + * + * Return: -1 on no match, bit position on 'match_only', 0 otherwise. + */ +static int pipapo_refill(unsigned long *map, int len, int rules, + unsigned long *dst, union nft_pipapo_map_bucket *mt, + bool match_only) +{ + unsigned long bitset; + int k, ret = -1; + + for (k = 0; k < len; k++) { + bitset = map[k]; + while (bitset) { + unsigned long t = bitset & -bitset; + int r = __builtin_ctzl(bitset); + int i = k * BITS_PER_LONG + r; + + if (unlikely(i >= rules)) { + map[k] = 0; + return -1; + } + + if (unlikely(match_only)) { + bitmap_clear(map, i, 1); + return i; + } + + ret = 0; + + bitmap_set(dst, mt[i].to, mt[i].n); + + bitset ^= t; + } + map[k] = 0; + } + + return ret; +} + +/** + * nft_pipapo_lookup() - Lookup function + * @net: Network namespace + * @set: nftables API set representation + * @elem: nftables API element representation containing key data + * @ext: nftables API extension pointer, filled with matching reference + * + * For more details, see DOC: Theory of Operation. + * + * Return: true on match, false otherwise. + */ +static bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set, + const u32 *key, const struct nft_set_ext **ext) +{ + struct nft_pipapo *priv = nft_set_priv(set); + unsigned long *res_map, *fill_map; + u8 genmask = nft_genmask_cur(net); + const u8 *rp = (const u8 *)key; + struct nft_pipapo_match *m; + struct nft_pipapo_field *f; + bool map_index; + int i; + + local_bh_disable(); + + map_index = raw_cpu_read(nft_pipapo_scratch_index); + + m = rcu_dereference(priv->match); + + if (unlikely(!m || !*raw_cpu_ptr(m->scratch))) + goto out; + + res_map = *raw_cpu_ptr(m->scratch) + (map_index ? m->bsize_max : 0); + fill_map = *raw_cpu_ptr(m->scratch) + (map_index ? 
0 : m->bsize_max); + + memset(res_map, 0xff, m->bsize_max * sizeof(*res_map)); + + nft_pipapo_for_each_field(f, i, m) { + bool last = i == m->field_count - 1; + unsigned long *lt = f->lt; + int b, group; + + /* For each 4-bit group: select lookup table bucket depending on + * packet bytes value, then AND bucket value + */ + for (group = 0; group < f->groups; group += 2) { + u8 v; + + v = *rp >> 4; + __bitmap_and(res_map, res_map, lt + v * f->bsize, + f->bsize * BITS_PER_LONG); + lt += f->bsize * NFT_PIPAPO_BUCKETS; + + v = *rp & 0x0f; + rp++; + __bitmap_and(res_map, res_map, lt + v * f->bsize, + f->bsize * BITS_PER_LONG); + lt += f->bsize * NFT_PIPAPO_BUCKETS; + } + + /* Now populate the bitmap for the next field, unless this is + * the last field, in which case return the matched 'ext' + * pointer if any. + * + * Now res_map contains the matching bitmap, and fill_map is the + * bitmap for the next field. + */ +next_match: + b = pipapo_refill(res_map, f->bsize, f->rules, fill_map, f->mt, + last); + if (b < 0) { + raw_cpu_write(nft_pipapo_scratch_index, map_index); + local_bh_enable(); + + return false; + } + + if (last) { + *ext = &f->mt[b].e->ext; + if (unlikely(nft_set_elem_expired(*ext) || + !nft_set_elem_active(*ext, genmask))) + goto next_match; + + /* Last field: we're just returning the key without + * filling the initial bitmap for the next field, so the + * current inactive bitmap is clean and can be reused as + * *next* bitmap (not initial) for the next packet. + */ + raw_cpu_write(nft_pipapo_scratch_index, map_index); + local_bh_enable(); + + return true; + } + + /* Swap bitmap indices: res_map is the initial bitmap for the + * next field, and fill_map is guaranteed to be all-zeroes at + * this point. + */ + map_index = !map_index; + swap(res_map, fill_map); + + rp += NFT_PIPAPO_GROUPS_PADDING(f->groups); + } + +out: + local_bh_enable(); + return false; +} + +/** + * pipapo_get() - Get matching element reference given key data + * @net: Network namespace + * @set: nftables API set representation + * @data: Key data to be matched against existing elements + * @genmask: If set, check that element is active in given genmask + * + * This is essentially the same as the lookup function, except that it matches + * key data against the uncommitted copy and doesn't use preallocated maps for + * bitmap results. + * + * Return: pointer to &struct nft_pipapo_elem on match, error pointer otherwise. 
+ */ +static struct nft_pipapo_elem *pipapo_get(const struct net *net, + const struct nft_set *set, + const u8 *data, u8 genmask) +{ + struct nft_pipapo_elem *ret = ERR_PTR(-ENOENT); + struct nft_pipapo *priv = nft_set_priv(set); + struct nft_pipapo_match *m = priv->clone; + unsigned long *res_map, *fill_map = NULL; + struct nft_pipapo_field *f; + int i; + + res_map = kmalloc_array(m->bsize_max, sizeof(*res_map), GFP_ATOMIC); + if (!res_map) { + ret = ERR_PTR(-ENOMEM); + goto out; + } + + fill_map = kcalloc(m->bsize_max, sizeof(*res_map), GFP_ATOMIC); + if (!fill_map) { + ret = ERR_PTR(-ENOMEM); + goto out; + } + + memset(res_map, 0xff, m->bsize_max * sizeof(*res_map)); + + nft_pipapo_for_each_field(f, i, m) { + bool last = i == m->field_count - 1; + unsigned long *lt = f->lt; + int b, group; + + /* For each 4-bit group: select lookup table bucket depending on + * packet bytes value, then AND bucket value + */ + for (group = 0; group < f->groups; group++) { + u8 v; + + if (group % 2) { + v = *data & 0x0f; + data++; + } else { + v = *data >> 4; + } + __bitmap_and(res_map, res_map, lt + v * f->bsize, + f->bsize * BITS_PER_LONG); + + lt += f->bsize * NFT_PIPAPO_BUCKETS; + } + + /* Now populate the bitmap for the next field, unless this is + * the last field, in which case return the matched 'ext' + * pointer if any. + * + * Now res_map contains the matching bitmap, and fill_map is the + * bitmap for the next field. + */ +next_match: + b = pipapo_refill(res_map, f->bsize, f->rules, fill_map, f->mt, + last); + if (b < 0) + goto out; + + if (last) { + if (nft_set_elem_expired(&f->mt[b].e->ext) || + (genmask && + !nft_set_elem_active(&f->mt[b].e->ext, genmask))) + goto next_match; + + ret = f->mt[b].e; + goto out; + } + + data += NFT_PIPAPO_GROUPS_PADDING(f->groups); + + /* Swap bitmap indices: fill_map will be the initial bitmap for + * the next field (i.e. the new res_map), and res_map is + * guaranteed to be all-zeroes at this point, ready to be filled + * according to the next mapping table. + */ + swap(res_map, fill_map); + } + +out: + kfree(fill_map); + kfree(res_map); + return ret; +} + +/** + * nft_pipapo_get() - Get matching element reference given key data + * @net: Network namespace + * @set: nftables API set representation + * @elem: nftables API element representation containing key data + * @flags: Unused + */ +void *nft_pipapo_get(const struct net *net, const struct nft_set *set, + const struct nft_set_elem *elem, unsigned int flags) +{ + return pipapo_get(net, set, (const u8 *)elem->key.val.data, + nft_genmask_cur(net)); +} + +/** + * pipapo_resize() - Resize lookup or mapping table, or both + * @f: Field containing lookup and mapping tables + * @old_rules: Previous amount of rules in field + * @rules: New amount of rules + * + * Increase, decrease or maintain tables size depending on new amount of rules, + * and copy data over. In case the new size is smaller, throw away data for + * highest-numbered rules. + * + * Return: 0 on success, -ENOMEM on allocation failure. 
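+ *
+ * As a concrete illustration: on a 64-bit machine, each bucket holds one
+ * unsigned long per 64 rules, so growing a field from 64 to 65 rules
+ * doubles the bucket size, while the mapping table simply gains one
+ * entry per added rule.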
+ */ +static int pipapo_resize(struct nft_pipapo_field *f, int old_rules, int rules) +{ + long *new_lt = NULL, *new_p, *old_lt = f->lt, *old_p; + union nft_pipapo_map_bucket *new_mt, *old_mt = f->mt; + size_t new_bucket_size, copy; + int group, bucket; + + new_bucket_size = DIV_ROUND_UP(rules, BITS_PER_LONG); + + if (new_bucket_size == f->bsize) + goto mt; + + if (new_bucket_size > f->bsize) + copy = f->bsize; + else + copy = new_bucket_size; + + new_lt = kvzalloc(f->groups * NFT_PIPAPO_BUCKETS * new_bucket_size * + sizeof(*new_lt), GFP_KERNEL); + if (!new_lt) + return -ENOMEM; + + new_p = new_lt; + old_p = old_lt; + for (group = 0; group < f->groups; group++) { + for (bucket = 0; bucket < NFT_PIPAPO_BUCKETS; bucket++) { + memcpy(new_p, old_p, copy * sizeof(*new_p)); + new_p += copy; + old_p += copy; + + if (new_bucket_size > f->bsize) + new_p += new_bucket_size - f->bsize; + else + old_p += f->bsize - new_bucket_size; + } + } + +mt: + new_mt = kvmalloc(rules * sizeof(*new_mt), GFP_KERNEL); + if (!new_mt) { + kvfree(new_lt); + return -ENOMEM; + } + + memcpy(new_mt, f->mt, min(old_rules, rules) * sizeof(*new_mt)); + if (rules > old_rules) { + memset(new_mt + old_rules, 0, + (rules - old_rules) * sizeof(*new_mt)); + } + + if (new_lt) { + f->bsize = new_bucket_size; + f->lt = new_lt; + kvfree(old_lt); + } + + f->mt = new_mt; + kvfree(old_mt); + + return 0; +} + +/** + * pipapo_bucket_set() - Set rule bit in bucket given group and group value + * @f: Field containing lookup table + * @rule: Rule index + * @group: Group index + * @v: Value of bit group + */ +static void pipapo_bucket_set(struct nft_pipapo_field *f, int rule, int group, + int v) +{ + unsigned long *pos; + + pos = f->lt + f->bsize * NFT_PIPAPO_BUCKETS * group; + pos += f->bsize * v; + + __set_bit(rule, pos); +} + +/** + * pipapo_insert() - Insert new rule in field given input key and mask length + * @f: Field containing lookup table + * @k: Input key for classification, without nftables padding + * @mask_bits: Length of mask; matches field length for non-ranged entry + * + * Insert a new rule reference in lookup buckets corresponding to k and + * mask_bits. + * + * Return: 1 on success (one rule inserted), negative error code on failure. + */ +static int pipapo_insert(struct nft_pipapo_field *f, const uint8_t *k, + int mask_bits) +{ + int rule = f->rules++, group, ret; + + ret = pipapo_resize(f, f->rules - 1, f->rules); + if (ret) + return ret; + + for (group = 0; group < f->groups; group++) { + int i, v; + u8 mask; + + if (group % 2) + v = k[group / 2] & 0x0f; + else + v = k[group / 2] >> 4; + + if (mask_bits >= (group + 1) * 4) { + /* Not masked */ + pipapo_bucket_set(f, rule, group, v); + } else if (mask_bits <= group * 4) { + /* Completely masked */ + for (i = 0; i < NFT_PIPAPO_BUCKETS; i++) + pipapo_bucket_set(f, rule, group, i); + } else { + /* The mask limit falls on this group */ + mask = 0x0f >> (mask_bits - group * 4); + for (i = 0; i < NFT_PIPAPO_BUCKETS; i++) { + if ((i & ~mask) == (v & ~mask)) + pipapo_bucket_set(f, rule, group, i); + } + } + } + + return 1; +} + +/** + * pipapo_step_diff() - Check if setting @step bit in netmask would change it + * @base: Mask we are expanding + * @step: Step bit for given expansion step + * @len: Total length of mask space (set and unset bits), bytes + * + * Convenience function for mask expansion. + * + * Return: true if step bit changes mask (i.e. isn't set), false otherwise. 
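+ *
+ * For instance, with a one-byte mask space and @base 00010100 (20),
+ * steps 0 and 1 would change the mask, as bits 0 and 1 are clear, while
+ * step 2 would not, as bit 2 is already set.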
+ */ +static bool pipapo_step_diff(u8 *base, int step, int len) +{ + /* Network order, byte-addressed */ +#ifdef __BIG_ENDIAN__ + return !(BIT(step % BITS_PER_BYTE) & base[step / BITS_PER_BYTE]); +#else + return !(BIT(step % BITS_PER_BYTE) & + base[len - 1 - step / BITS_PER_BYTE]); +#endif +} + +/** + * pipapo_step_after_end() - Check if mask exceeds range end with given step + * @base: Mask we are expanding + * @end: End of range + * @step: Step bit for given expansion step, highest bit to be set + * @len: Total length of mask space (set and unset bits), bytes + * + * Convenience function for mask expansion. + * + * Return: true if mask exceeds range setting step bits, false otherwise. + */ +static bool pipapo_step_after_end(const u8 *base, const u8 *end, int step, + int len) +{ + u8 tmp[NFT_PIPAPO_MAX_BYTES]; + int i; + + memcpy(tmp, base, len); + + /* Network order, byte-addressed */ + for (i = 0; i <= step; i++) +#ifdef __BIG_ENDIAN__ + tmp[i / BITS_PER_BYTE] |= BIT(i % BITS_PER_BYTE); +#else + tmp[len - 1 - i / BITS_PER_BYTE] |= BIT(i % BITS_PER_BYTE); +#endif + + return memcmp(tmp, end, len) > 0; +} + +/** + * pipapo_base_sum() - Sum step bit to given len-sized netmask base with carry + * @base: Netmask base + * @step: Step bit to sum + * @len: Netmask length, bytes + */ +static void pipapo_base_sum(u8 *base, int step, int len) +{ + bool carry = false; + int i; + + /* Network order, byte-addressed */ +#ifdef __BIG_ENDIAN__ + for (i = step / BITS_PER_BYTE; i < len; i++) { +#else + for (i = len - 1 - step / BITS_PER_BYTE; i >= 0; i--) { +#endif + if (carry) + base[i]++; + else + base[i] += 1 << (step % BITS_PER_BYTE); + + if (base[i]) + break; + + carry = true; + } +} + +/** + * pipapo_expand() - Expand to composing netmasks, insert into lookup table + * @f: Field containing lookup table + * @start: Start of range + * @end: End of range + * @len: Length of value in bits + * + * Expand range to composing netmasks and insert corresponding rule references + * in lookup buckets. + * + * Return: number of inserted rules on success, negative error code on failure. 
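+ *
+ * A worked example: over an 8-bit field, the range 2-21 is expanded into
+ * the five composing netmasks below (one rule each, written here as
+ * base/mask_bits), well within the b * 2 worst-case bound from the
+ * overview:
+ *
+ * ::
+ *
+ *   00000010/7 (2-3)     00000100/6 (4-7)    00001000/5 (8-15)
+ *   00010000/6 (16-19)   00010100/7 (20-21)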
+ */ +static int pipapo_expand(struct nft_pipapo_field *f, + const u8 *start, const u8 *end, int len) +{ + int step, masks = 0, bytes = DIV_ROUND_UP(len, BITS_PER_BYTE); + u8 base[NFT_PIPAPO_MAX_BYTES]; + + memcpy(base, start, bytes); + while (memcmp(base, end, bytes) <= 0) { + int err; + + step = 0; + while (pipapo_step_diff(base, step, bytes)) { + if (pipapo_step_after_end(base, end, step, bytes)) + break; + + step++; + if (step >= len) { + if (!masks) { + pipapo_insert(f, base, 0); + masks = 1; + } + goto out; + } + } + + err = pipapo_insert(f, base, len - step); + + if (err < 0) + return err; + + masks++; + pipapo_base_sum(base, step, bytes); + } +out: + return masks; +} + +/** + * pipapo_map() - Insert rules in mapping tables, mapping them between fields + * @m: Matching data, including mapping table + * @map: Table of rule maps: array of first rule and amount of rules + * in next field a given rule maps to, for each field + * @ext: For last field, nft_set_ext pointer matching rules map to + */ +static void pipapo_map(struct nft_pipapo_match *m, + union nft_pipapo_map_bucket map[NFT_PIPAPO_MAX_FIELDS], + struct nft_pipapo_elem *e) +{ + struct nft_pipapo_field *f; + int i, j; + + for (i = 0, f = m->f; i < m->field_count - 1; i++, f++) { + for (j = 0; j < map[i].n; j++) { + f->mt[map[i].to + j].to = map[i + 1].to; + f->mt[map[i].to + j].n = map[i + 1].n; + } + } + + /* Last field: map to ext instead of mapping to next field */ + for (j = 0; j < map[i].n; j++) + f->mt[map[i].to + j].e = e; +} + +/** + * pipapo_realloc_scratch() - Reallocate scratch maps for partial match results + * @clone: Copy of matching data with pending insertions and deletions + * @bsize_max Maximum bucket size, scratch maps cover two buckets + * + * Return: 0 on success, -ENOMEM on failure. + */ +static int pipapo_realloc_scratch(struct nft_pipapo_match *clone, + unsigned long bsize_max) +{ + int i; + + for_each_possible_cpu(i) { + unsigned long *scratch; + + scratch = kzalloc_node(bsize_max * sizeof(*scratch) * 2, + GFP_KERNEL, cpu_to_node(i)); + if (!scratch) { + /* On failure, there's no need to undo previous + * allocations: this means that some scratch maps have + * a bigger allocated size now (this is only called on + * insertion), but the extra space won't be used by any + * CPU as new elements are not inserted and m->bsize_max + * is not updated. + */ + return -ENOMEM; + } + + kfree(*per_cpu_ptr(clone->scratch, i)); + + *per_cpu_ptr(clone->scratch, i) = scratch; + } + + return 0; +} + +/** + * nft_pipapo_insert() - Validate and insert ranged elements + * @net: Network namespace + * @set: nftables API set representation + * @elem: nftables API element representation containing key data + * @ext2: Filled with pointer to &struct nft_set_ext in inserted element + * + * Return: 0 on success, error pointer on failure. 
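+ *
+ * For illustration: when inserting 10.0.0.1-10.0.0.128 . 1024-2048, the
+ * start key in @elem carries 10.0.0.1 . 1024 and the NFT_SET_EXT_KEY_END
+ * data carries 10.0.0.128 . 2048; validation then checks, field by
+ * field, that the start key doesn't compare greater than the end key.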
+ */ +static int nft_pipapo_insert(const struct net *net, const struct nft_set *set, + const struct nft_set_elem *elem, + struct nft_set_ext **ext2) +{ + const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv); + union nft_pipapo_map_bucket rulemap[NFT_PIPAPO_MAX_FIELDS]; + const u8 *start = (const u8 *)elem->key.val.data, *end; + struct nft_pipapo_elem *e = elem->priv, *dup; + struct nft_pipapo *priv = nft_set_priv(set); + struct nft_pipapo_match *m = priv->clone; + u8 genmask = nft_genmask_next(net); + struct nft_pipapo_field *f; + int i, bsize_max, err = 0; + + dup = pipapo_get(net, set, start, genmask); + if (PTR_ERR(dup) == -ENOENT) { + if (nft_set_ext_exists(ext, NFT_SET_EXT_KEY_END)) { + end = (const u8 *)nft_set_ext_key_end(ext)->data; + dup = pipapo_get(net, set, end, nft_genmask_next(net)); + } else { + end = start; + } + } + + if (PTR_ERR(dup) != -ENOENT) { + if (IS_ERR(dup)) + return PTR_ERR(dup); + *ext2 = &dup->ext; + return -EEXIST; + } + + /* Validate */ + nft_pipapo_for_each_field(f, i, m) { + const u8 *start_p = start, *end_p = end; + + if (f->rules >= (unsigned long)NFT_PIPAPO_RULE0_MAX) + return -ENOSPC; + + if (memcmp(start_p, end_p, + f->groups / NFT_PIPAPO_GROUPS_PER_BYTE) > 0) + return -EINVAL; + + start_p += NFT_PIPAPO_GROUPS_PADDED_SIZE(f->groups); + end_p += NFT_PIPAPO_GROUPS_PADDED_SIZE(f->groups); + } + + /* Insert */ + priv->dirty = true; + + bsize_max = m->bsize_max; + + nft_pipapo_for_each_field(f, i, m) { + int ret; + + rulemap[i].to = f->rules; + + ret = memcmp(start, end, + f->groups / NFT_PIPAPO_GROUPS_PER_BYTE); + if (!ret) { + ret = pipapo_insert(f, start, + f->groups * NFT_PIPAPO_GROUP_BITS); + } else { + ret = pipapo_expand(f, start, end, + f->groups * NFT_PIPAPO_GROUP_BITS); + } + + if (f->bsize > bsize_max) + bsize_max = f->bsize; + + rulemap[i].n = ret; + + start += NFT_PIPAPO_GROUPS_PADDED_SIZE(f->groups); + end += NFT_PIPAPO_GROUPS_PADDED_SIZE(f->groups); + } + + if (!*this_cpu_ptr(m->scratch) || bsize_max > m->bsize_max) { + err = pipapo_realloc_scratch(m, bsize_max); + if (err) + return err; + + this_cpu_write(nft_pipapo_scratch_index, false); + + m->bsize_max = bsize_max; + } + + *ext2 = &e->ext; + + pipapo_map(m, rulemap, e); + + return 0; +} + +/** + * pipapo_clone() - Clone matching data to create new working copy + * @old: Existing matching data + * + * Return: copy of matching data passed as 'old', error pointer on failure + */ +static struct nft_pipapo_match *pipapo_clone(struct nft_pipapo_match *old) +{ + struct nft_pipapo_field *dst, *src; + struct nft_pipapo_match *new; + int i; + + new = kmalloc(sizeof(*new) + sizeof(*dst) * old->field_count, + GFP_KERNEL); + if (!new) + return ERR_PTR(-ENOMEM); + + new->field_count = old->field_count; + new->bsize_max = old->bsize_max; + + new->scratch = alloc_percpu(*new->scratch); + if (!new->scratch) + goto out_scratch; + + rcu_head_init(&new->rcu); + + src = old->f; + dst = new->f; + + for (i = 0; i < old->field_count; i++) { + memcpy(dst, src, offsetof(struct nft_pipapo_field, lt)); + + dst->lt = kvzalloc(src->groups * NFT_PIPAPO_BUCKETS * + src->bsize * sizeof(*dst->lt), + GFP_KERNEL); + if (!dst->lt) + goto out_lt; + + memcpy(dst->lt, src->lt, + src->bsize * sizeof(*dst->lt) * + src->groups * NFT_PIPAPO_BUCKETS); + + dst->mt = kvmalloc(src->rules * sizeof(*src->mt), GFP_KERNEL); + if (!dst->mt) + goto out_mt; + + memcpy(dst->mt, src->mt, src->rules * sizeof(*src->mt)); + src++; + dst++; + } + + return new; + +out_mt: + kvfree(dst->lt); +out_lt: + for (dst--; i > 0; i--) { + 
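/* Walk back over the fields cloned so far, freeing their + * lookup and mapping tables before bailing out. + */ +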
kvfree(dst->mt); + kvfree(dst->lt); + dst--; + } + free_percpu(new->scratch); +out_scratch: + kfree(new); + + return ERR_PTR(-ENOMEM); +} + +/** + * pipapo_rules_same_key() - Get number of rules originated from the same entry + * @f: Field containing mapping table + * @first: Index of first rule in set of rules mapping to same entry + * + * Using the fact that all rules in a field that originated from the same entry + * will map to the same set of rules in the next field, or to the same element + * reference, return the cardinality of the set of rules that originated from + * the same entry as the rule with index @first, @first rule included. + * + * In pictures: + * rules + * field #0 0 1 2 3 4 + * map to: 0 1 2-4 2-4 5-9 + * . . ....... . ... + * | | | | \ \ + * | | | | \ \ + * | | | | \ \ + * ' ' ' ' ' \ + * in field #1 0 1 2 3 4 5 ... + * + * if this is called for rule 2 on field #0, it will return 2, as rules 2 + * and 3 in field 0 map to the same set of rules (2, 3, 4) in the next field. + * + * For the last field in a set, we can rely on associated entries to map to the + * same element references. + * + * Return: Number of rules that originated from the same entry as @first. + */ +static int pipapo_rules_same_key(struct nft_pipapo_field *f, int first) +{ + struct nft_pipapo_elem *e = NULL; /* Keep gcc happy */ + int r; + + for (r = first; r < f->rules; r++) { + if (r != first && e != f->mt[r].e) + return r - first; + + e = f->mt[r].e; + } + + if (r != first) + return r - first; + + return 0; +} + +/** + * pipapo_unmap() - Remove rules from mapping tables, renumber remaining ones + * @mt: Mapping array + * @rules: Original amount of rules in mapping table + * @start: First rule index to be removed + * @n: Amount of rules to be removed + * @to_offset: First rule index, in next field, this group of rules maps to + * @is_last: If this is the last field, delete reference from mapping array + * + * This is used to unmap rules from the mapping table for a single field, + * maintaining consistency and compactness for the existing ones. + * + * In pictures: let's assume that we want to delete rules 2 and 3 from the + * following mapping array: + * + * rules + * 0 1 2 3 4 + * map to: 4-10 4-10 11-15 11-15 16-18 + * + * the result will be: + * + * rules + * 0 1 2 + * map to: 4-10 4-10 11-13 + * + * for fields before the last one. In case this is the mapping table for the + * last field in a set, and rules map to pointers to &struct nft_pipapo_elem: + * + * rules + * 0 1 2 3 4 + * element pointers: 0x42 0x42 0x33 0x33 0x44 + * + * the result will be: + * + * rules + * 0 1 2 + * element pointers: 0x42 0x42 0x44 + */ +static void pipapo_unmap(union nft_pipapo_map_bucket *mt, int rules, + int start, int n, int to_offset, bool is_last) +{ + int i; + + memmove(mt + start, mt + start + n, (rules - start - n) * sizeof(*mt)); + memset(mt + rules - n, 0, n * sizeof(*mt)); + + if (is_last) + return; + + for (i = start; i < rules - n; i++) + mt[i].to -= to_offset; +} + +/** + * pipapo_drop() - Delete entry from lookup and mapping tables, given rule map + * @m: Matching data + * @rulemap: Table of rule maps, arrays of first rule and amount of rules + * in next field a given entry maps to, for each field + * + * For each rule in lookup table buckets mapping to this set of rules, drop + * all bits set in lookup table mapping.
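(bitmap_cut() does the actual clearing on each bucket, shifting + * the remaining rule bits down.) + *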
In pictures, assuming we want to drop + * rules 0 and 1 from this lookup table: + * + * bucket + * group 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 + * 0 0 1,2 + * 1 1,2 0 + * 2 0 1,2 + * 3 0 1,2 + * 4 0,1,2 + * 5 0 1 2 + * 6 0,1,2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 + * 7 1,2 1,2 1 1 1 0,1 1 1 1 1 1 1 1 1 1 1 + * + * rule 2 becomes rule 0, and the result will be: + * + * bucket + * group 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 + * 0 0 + * 1 0 + * 2 0 + * 3 0 + * 4 0 + * 5 0 + * 6 0 + * 7 0 0 + * + * once this is done, call unmap() to drop all the corresponding rule references + * from mapping tables. + */ +static void pipapo_drop(struct nft_pipapo_match *m, + union nft_pipapo_map_bucket rulemap[]) +{ + struct nft_pipapo_field *f; + int i; + + nft_pipapo_for_each_field(f, i, m) { + int g; + + for (g = 0; g < f->groups; g++) { + unsigned long *pos; + int b; + + pos = f->lt + g * NFT_PIPAPO_BUCKETS * f->bsize; + + for (b = 0; b < NFT_PIPAPO_BUCKETS; b++) { + bitmap_cut(pos, pos, rulemap[i].to, + rulemap[i].n, + f->bsize * BITS_PER_LONG); + + pos += f->bsize; + } + } + + pipapo_unmap(f->mt, f->rules, rulemap[i].to, rulemap[i].n, + rulemap[i + 1].n, i == m->field_count - 1); + if (pipapo_resize(f, f->rules, f->rules - rulemap[i].n)) { + /* We can ignore this, a failure to shrink tables down + * doesn't make tables invalid. + */ + ; + } + f->rules -= rulemap[i].n; + } +} + +/** + * pipapo_gc() - Drop expired entries from set, destroy start and end elements + * @set: nftables API set representation + * @m: Matching data + */ +static void pipapo_gc(const struct nft_set *set, struct nft_pipapo_match *m) +{ + struct nft_pipapo *priv = nft_set_priv(set); + int rules_f0, first_rule = 0; + + while ((rules_f0 = pipapo_rules_same_key(m->f, first_rule))) { + union nft_pipapo_map_bucket rulemap[NFT_PIPAPO_MAX_FIELDS]; + struct nft_pipapo_field *f; + struct nft_pipapo_elem *e; + int i, start, rules_fx; + + start = first_rule; + rules_fx = rules_f0; + + nft_pipapo_for_each_field(f, i, m) { + rulemap[i].to = start; + rulemap[i].n = rules_fx; + + if (i < m->field_count - 1) { + rules_fx = f->mt[start].n; + start = f->mt[start].to; + } + } + + /* Pick the last field, and its last index */ + f--; + i--; + e = f->mt[rulemap[i].to].e; + if (nft_set_elem_expired(&e->ext) && + !nft_set_elem_mark_busy(&e->ext)) { + priv->dirty = true; + pipapo_drop(m, rulemap); + + rcu_barrier(); + nft_set_elem_destroy(set, e, true); + + /* And check again current first rule, which is now the + * first we haven't checked. 
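+ * (pipapo_drop() compacted the remaining rules down, so the index + * itself does not advance here.)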
+ */ + } else { + first_rule += rules_f0; + } + } + + priv->last_gc = jiffies; +} + +/** + * pipapo_free_fields() - Free per-field tables contained in matching data + * @m: Matching data + */ +static void pipapo_free_fields(struct nft_pipapo_match *m) +{ + struct nft_pipapo_field *f; + int i; + + nft_pipapo_for_each_field(f, i, m) { + kvfree(f->lt); + kvfree(f->mt); + } +} + +/** + * pipapo_reclaim_match - RCU callback to free fields from old matching data + * @rcu: RCU head + */ +static void pipapo_reclaim_match(struct rcu_head *rcu) +{ + struct nft_pipapo_match *m; + int i; + + m = container_of(rcu, struct nft_pipapo_match, rcu); + + for_each_possible_cpu(i) + kfree(*per_cpu_ptr(m->scratch, i)); + + free_percpu(m->scratch); + + pipapo_free_fields(m); + + kfree(m); +} + +/** + * pipapo_commit() - Replace lookup data with current working copy + * @set: nftables API set representation + * + * While at it, check if we should perform garbage collection on the working + * copy before committing it for lookup, and don't replace the table if the + * working copy doesn't have pending changes. + * + * We also need to create a new working copy for subsequent insertions and + * deletions. + */ +static void pipapo_commit(const struct nft_set *set) +{ + struct nft_pipapo *priv = nft_set_priv(set); + struct nft_pipapo_match *new_clone, *old; + + if (time_after_eq(jiffies, priv->last_gc + nft_set_gc_interval(set))) + pipapo_gc(set, priv->clone); + + if (!priv->dirty) + return; + + new_clone = pipapo_clone(priv->clone); + if (IS_ERR(new_clone)) + return; + + priv->dirty = false; + + old = rcu_access_pointer(priv->match); + rcu_assign_pointer(priv->match, priv->clone); + if (old) + call_rcu(&old->rcu, pipapo_reclaim_match); + + priv->clone = new_clone; +} + +/** + * nft_pipapo_activate() - Mark element reference as active given key, commit + * @net: Network namespace + * @set: nftables API set representation + * @elem: nftables API element representation containing key data + * + * On insertion, elements are added to a copy of the matching data currently + * in use for lookups, and not directly inserted into current lookup data, so + * we'll take care of that by calling pipapo_commit() here. Both + * nft_pipapo_insert() and nft_pipapo_activate() are called once for each + * element, hence we can't purpose either one as a real commit operation. + */ +static void nft_pipapo_activate(const struct net *net, + const struct nft_set *set, + const struct nft_set_elem *elem) +{ + struct nft_pipapo_elem *e; + + e = pipapo_get(net, set, (const u8 *)elem->key.val.data, 0); + if (IS_ERR(e)) + return; + + nft_set_elem_change_active(net, set, &e->ext); + nft_set_elem_clear_busy(&e->ext); + + pipapo_commit(set); +} + +/** + * pipapo_deactivate() - Check that element is in set, mark as inactive + * @net: Network namespace + * @set: nftables API set representation + * @data: Input key data + * @ext: nftables API extension pointer, used to check for end element + * + * This is a convenience function that can be called from both + * nft_pipapo_deactivate() and nft_pipapo_flush(), as they are in fact the same + * operation. + * + * Return: deactivated element if found, NULL otherwise. 
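+ * + * (The lookup here uses the next-generation mask, so only elements + * visible in the generation being committed are deactivated.)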
+ */ +static void *pipapo_deactivate(const struct net *net, const struct nft_set *set, + const u8 *data, const struct nft_set_ext *ext) +{ + struct nft_pipapo_elem *e; + + e = pipapo_get(net, set, data, nft_genmask_next(net)); + if (IS_ERR(e)) + return NULL; + + nft_set_elem_change_active(net, set, &e->ext); + + return e; +} + +/** + * nft_pipapo_deactivate() - Call pipapo_deactivate() to make element inactive + * @net: Network namespace + * @set: nftables API set representation + * @elem: nftables API element representation containing key data + * + * Return: deactivated element if found, NULL otherwise. + */ +static void *nft_pipapo_deactivate(const struct net *net, + const struct nft_set *set, + const struct nft_set_elem *elem) +{ + const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv); + + return pipapo_deactivate(net, set, (const u8 *)elem->key.val.data, ext); +} + +/** + * nft_pipapo_flush() - Call pipapo_deactivate() to make element inactive + * @net: Network namespace + * @set: nftables API set representation + * @elem: nftables API element representation containing key data + * + * This is functionally the same as nft_pipapo_deactivate(), with a slightly + * different interface, and it's also called once for each element in a set + * being flushed, so we can't implement, strictly speaking, a flush operation, + * which would otherwise be as simple as allocating an empty copy of the + * matching data. + * + * Note that we could in theory do that, mark the set as flushed, and ignore + * subsequent calls, but we would leak all the elements after the first one, + * because they wouldn't then be freed as result of API calls. + * + * Return: true if element was found and deactivated. + */ +static bool nft_pipapo_flush(const struct net *net, const struct nft_set *set, + void *elem) +{ + struct nft_pipapo_elem *e = elem; + + return pipapo_deactivate(net, set, (const u8 *)nft_set_ext_key(&e->ext), + &e->ext); +} + +/** + * pipapo_get_boundaries() - Get byte interval for associated rules + * @f: Field including lookup table + * @first_rule: First rule (lowest index) + * @rule_count: Number of associated rules + * @left: Byte expression for left boundary (start of range) + * @right: Byte expression for right boundary (end of range) + * + * Given the first rule and amount of rules that originated from the same entry, + * build the original range associated with the entry, and calculate the length + * of the originating netmask. + * + * In pictures: + * + * bucket + * group 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 + * 0 1,2 + * 1 1,2 + * 2 1,2 + * 3 1,2 + * 4 1,2 + * 5 1 2 + * 6 1,2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 + * 7 1,2 1,2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 + * + * this is the lookup table corresponding to the IPv4 range + * 192.168.1.0-192.168.2.1, which was expanded to the two composing netmasks, + * rule #1: 192.168.1.0/24, and rule #2: 192.168.2.0/31. + * + * This function fills @left and @right with the byte values of the leftmost + * and rightmost bucket indices for the lowest and highest rule indices, + * respectively. If @first_rule is 1 and @rule_count is 2, we obtain, in + * nibbles: + * left: < 12, 0, 10, 8, 0, 1, 0, 0 > + * right: < 12, 0, 10, 8, 0, 2, 2, 1 > + * corresponding to bytes: + * left: < 192, 168, 1, 0 > + * right: < 192, 168, 2, 1 > + * with mask length irrelevant here, unused on return, as the range is already + * defined by its start and end points. 
The mask length is relevant for a single + * ranged entry instead: if @first_rule is 1 and @rule_count is 1, we ignore + * rule 2 above: @left becomes < 192, 168, 1, 0 >, @right becomes + * < 192, 168, 1, 255 >, and the mask length, calculated from the distances + * between leftmost and rightmost bucket indices for each group, would be 24. + * + * Return: mask length, in bits. + */ +static int pipapo_get_boundaries(struct nft_pipapo_field *f, int first_rule, + int rule_count, u8 *left, u8 *right) +{ + u8 *l = left, *r = right; + int g, mask_len = 0; + + for (g = 0; g < f->groups; g++) { + int b, x0, x1; + + x0 = -1; + x1 = -1; + for (b = 0; b < NFT_PIPAPO_BUCKETS; b++) { + unsigned long *pos; + + pos = f->lt + (g * NFT_PIPAPO_BUCKETS + b) * f->bsize; + if (test_bit(first_rule, pos) && x0 == -1) + x0 = b; + if (test_bit(first_rule + rule_count - 1, pos)) + x1 = b; + } + + if (g % 2) { + *(l++) |= x0 & 0x0f; + *(r++) |= x1 & 0x0f; + } else { + *l |= x0 << 4; + *r |= x1 << 4; + } + + if (x1 - x0 == 0) + mask_len += 4; + else if (x1 - x0 == 1) + mask_len += 3; + else if (x1 - x0 == 3) + mask_len += 2; + else if (x1 - x0 == 7) + mask_len += 1; + } + + return mask_len; +} + +/** + * pipapo_match_field() - Match rules against byte ranges + * @f: Field including the lookup table + * @first_rule: First of associated rules originating from same entry + * @rule_count: Amount of associated rules + * @start: Start of range to be matched + * @end: End of range to be matched + * + * Return: true on match, false otherwise. + */ +static bool pipapo_match_field(struct nft_pipapo_field *f, + int first_rule, int rule_count, + const u8 *start, const u8 *end) +{ + u8 right[NFT_PIPAPO_MAX_BYTES] = { 0 }; + u8 left[NFT_PIPAPO_MAX_BYTES] = { 0 }; + + pipapo_get_boundaries(f, first_rule, rule_count, left, right); + + return !memcmp(start, left, f->groups / NFT_PIPAPO_GROUPS_PER_BYTE) && + !memcmp(end, right, f->groups / NFT_PIPAPO_GROUPS_PER_BYTE); +} + +/** + * nft_pipapo_remove() - Remove element given key, commit + * @net: Network namespace + * @set: nftables API set representation + * @elem: nftables API element representation containing key data + * + * Similarly to nft_pipapo_activate(), this is used as commit operation by the + * API, but it's called once per element in the pending transaction, so we can't + * implement this as a single commit operation. Closest we can get is to remove + * the matched element here, if any, and commit the updated matching data. 
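+ * + * Note that, because ranged entries are expanded to several rules, the + * loop below verifies, field by field via pipapo_match_field(), that + * the candidate set of rules actually matches the start and end keys + * of the element before dropping it.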
+ */ +static void nft_pipapo_remove(const struct net *net, const struct nft_set *set, + const struct nft_set_elem *elem) +{ + const u8 *data = (const u8 *)elem->key.val.data; + struct nft_pipapo *priv = nft_set_priv(set); + struct nft_pipapo_match *m = priv->clone; + int rules_f0, first_rule = 0; + struct nft_pipapo_elem *e; + + e = pipapo_get(net, set, data, 0); + if (IS_ERR(e)) + return; + + while ((rules_f0 = pipapo_rules_same_key(m->f, first_rule))) { + union nft_pipapo_map_bucket rulemap[NFT_PIPAPO_MAX_FIELDS]; + const u8 *match_start, *match_end; + struct nft_pipapo_field *f; + int i, start, rules_fx; + + match_start = data; + match_end = (const u8 *)nft_set_ext_key_end(&e->ext)->data; + + start = first_rule; + rules_fx = rules_f0; + + nft_pipapo_for_each_field(f, i, m) { + if (!pipapo_match_field(f, start, rules_fx, + match_start, match_end)) + break; + + rulemap[i].to = start; + rulemap[i].n = rules_fx; + + rules_fx = f->mt[start].n; + start = f->mt[start].to; + + match_start += NFT_PIPAPO_GROUPS_PADDED_SIZE(f->groups); + match_end += NFT_PIPAPO_GROUPS_PADDED_SIZE(f->groups); + } + + if (i == m->field_count) { + priv->dirty = true; + pipapo_drop(m, rulemap); + pipapo_commit(set); + return; + } + + first_rule += rules_f0; + } +} + +/** + * nft_pipapo_walk() - Walk over elements + * @ctx: nftables API context + * @set: nftables API set representation + * @iter: Iterator + * + * As elements are referenced in the mapping array for the last field, directly + * scan that array: there's no need to follow rule mappings from the first + * field. + */ +static void nft_pipapo_walk(const struct nft_ctx *ctx, struct nft_set *set, + struct nft_set_iter *iter) +{ + struct nft_pipapo *priv = nft_set_priv(set); + struct nft_pipapo_match *m; + struct nft_pipapo_field *f; + int i, r; + + rcu_read_lock(); + m = rcu_dereference(priv->match); + + if (unlikely(!m)) + goto out; + + for (i = 0, f = m->f; i < m->field_count - 1; i++, f++) + ; + + for (r = 0; r < f->rules; r++) { + struct nft_pipapo_elem *e; + struct nft_set_elem elem; + + if (r < f->rules - 1 && f->mt[r + 1].e == f->mt[r].e) + continue; + + if (iter->count < iter->skip) + goto cont; + + e = f->mt[r].e; + if (nft_set_elem_expired(&e->ext)) + goto cont; + + elem.priv = e; + + iter->err = iter->fn(ctx, set, iter, &elem); + if (iter->err < 0) + goto out; + +cont: + iter->count++; + } + +out: + rcu_read_unlock(); +} + +/** + * nft_pipapo_privsize() - Return the size of private data for the set + * @nla: netlink attributes, ignored as size doesn't depend on them + * @desc: Set description, ignored as size doesn't depend on it + * + * Return: size of private data for this set implementation, in bytes + */ +static u64 nft_pipapo_privsize(const struct nlattr * const nla[], + const struct nft_set_desc *desc) +{ + return sizeof(struct nft_pipapo); +} + +/** + * nft_pipapo_estimate() - Estimate set size, space and lookup complexity + * @desc: Set description, element count and field description used here + * @features: Flags: NFT_SET_INTERVAL needs to be there + * @est: Storage for estimation data + * + * The size for this set type can vary dramatically, as it depends on the number + * of rules (composing netmasks) the entries expand to. We compute the worst + * case here. + * + * In general, for a non-ranged entry or a single composing netmask, we need + * one bit in each of the sixteen NFT_PIPAPO_BUCKETS, for each 4-bit group (that + * is, each input bit needs four bits of matching data), plus a bucket in the + * mapping table for each field. 
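+ * + * For example: a single rule over a 4-byte field needs 8 four-bit + * groups, that is, 8 * 16 bucket bits, or 16 bytes of lookup table + * space, plus one mapping bucket.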
+ * + * Return: true only for compatible range concatenations + */ +static bool nft_pipapo_estimate(const struct nft_set_desc *desc, u32 features, + struct nft_set_estimate *est) +{ + unsigned long entry_size; + int i; + + if (!(features & NFT_SET_INTERVAL) || desc->field_count <= 1) + return false; + + for (i = 0, entry_size = 0; i < desc->field_count; i++) { + unsigned long rules; + + if (desc->field_len[i] > NFT_PIPAPO_MAX_BYTES) + return false; + + /* Worst-case ranges for each concatenated field: each n-bit + * field can expand to up to n * 2 rules in each bucket, and + * each rule also needs a mapping bucket. + */ + rules = ilog2(desc->field_len[i] * BITS_PER_BYTE) * 2; + entry_size += rules * NFT_PIPAPO_BUCKETS / BITS_PER_BYTE; + entry_size += rules * sizeof(union nft_pipapo_map_bucket); + } + + /* Rules in lookup and mapping tables are needed for each entry */ + est->size = desc->size * entry_size; + if (est->size && div_u64(est->size, desc->size) != entry_size) + return false; + + est->size += sizeof(struct nft_pipapo) + + sizeof(struct nft_pipapo_match) * 2; + + est->size += sizeof(struct nft_pipapo_field) * desc->field_count; + + est->lookup = NFT_SET_CLASS_O_LOG_N; + + est->space = NFT_SET_CLASS_O_N; + + return true; +} + +/** + * nft_pipapo_init() - Initialise data for a set instance + * @set: nftables API set representation + * @desc: Set description + * @nla: netlink attributes + * + * Validate number and size of fields passed as NFTA_SET_DESC_CONCAT netlink + * attributes, initialise internal set parameters, current instance of matching + * data and a copy for subsequent insertions. + * + * Return: 0 on success, negative error code on failure. + */ +static int nft_pipapo_init(const struct nft_set *set, + const struct nft_set_desc *desc, + const struct nlattr * const nla[]) +{ + struct nft_pipapo *priv = nft_set_priv(set); + struct nft_pipapo_match *m; + struct nft_pipapo_field *f; + int err, i; + + if (desc->field_count > NFT_PIPAPO_MAX_FIELDS) + return -EINVAL; + + m = kmalloc(sizeof(*priv->match) + sizeof(*f) * desc->field_count, + GFP_KERNEL); + if (!m) + return -ENOMEM; + + m->field_count = desc->field_count; + m->bsize_max = 0; + + m->scratch = alloc_percpu(unsigned long *); + if (!m->scratch) { + err = -ENOMEM; + goto out_free; + } + for_each_possible_cpu(i) + *per_cpu_ptr(m->scratch, i) = NULL; + + rcu_head_init(&m->rcu); + + nft_pipapo_for_each_field(f, i, m) { + f->groups = desc->field_len[i] * NFT_PIPAPO_GROUPS_PER_BYTE; + priv->groups += f->groups; + + priv->width += round_up(desc->field_len[i], sizeof(u32)); + + f->bsize = 0; + f->rules = 0; + f->lt = NULL; + f->mt = NULL; + } + + /* Create an initial clone of matching data for next insertion */ + priv->clone = pipapo_clone(m); + if (IS_ERR(priv->clone)) { + err = PTR_ERR(priv->clone); + goto out_free; + } + + priv->dirty = false; + + rcu_assign_pointer(priv->match, m); + + return 0; + +out_free: + free_percpu(m->scratch); + kfree(m); + + return err; +} + +/** + * nft_pipapo_destroy() - Free private data for set and all committed elements + * @set: nftables API set representation + */ +static void nft_pipapo_destroy(const struct nft_set *set) +{ + struct nft_pipapo *priv = nft_set_priv(set); + struct nft_pipapo_match *m; + struct nft_pipapo_field *f; + int i, r, cpu; + + m = rcu_dereference_protected(priv->match, true); + if (m) { + rcu_barrier(); + + for (i = 0, f = m->f; i < m->field_count - 1; i++, f++) + ; + + for (r = 0; r < f->rules; r++) { + struct nft_pipapo_elem *e; + + if (r < f->rules - 1 && f->mt[r + 
1].e == f->mt[r].e) + continue; + + e = f->mt[r].e; + + nft_set_elem_destroy(set, e, true); + } + + for_each_possible_cpu(cpu) + kfree(*per_cpu_ptr(m->scratch, cpu)); + free_percpu(m->scratch); + + pipapo_free_fields(m); + kfree(m); + priv->match = NULL; + } + + if (priv->clone) { + for_each_possible_cpu(cpu) + kfree(*per_cpu_ptr(priv->clone->scratch, cpu)); + free_percpu(priv->clone->scratch); + + pipapo_free_fields(priv->clone); + kfree(priv->clone); + priv->clone = NULL; + } +} + +/** + * nft_pipapo_gc_init() - Initialise garbage collection + * @set: nftables API set representation + * + * Instead of actually setting up a periodic work for garbage collection, as + * this operation requires a swap of matching data with the working copy, we'll + * do that opportunistically with other commit operations if the interval has + * elapsed, so we just need to set the current jiffies timestamp here. + */ +static void nft_pipapo_gc_init(const struct nft_set *set) +{ + struct nft_pipapo *priv = nft_set_priv(set); + + priv->last_gc = jiffies; +} + +struct nft_set_type nft_set_pipapo_type __read_mostly = { + .owner = THIS_MODULE, + .features = NFT_SET_INTERVAL | NFT_SET_MAP | NFT_SET_OBJECT | + NFT_SET_TIMEOUT, + .ops = { + .lookup = nft_pipapo_lookup, + .insert = nft_pipapo_insert, + .activate = nft_pipapo_activate, + .deactivate = nft_pipapo_deactivate, + .flush = nft_pipapo_flush, + .remove = nft_pipapo_remove, + .walk = nft_pipapo_walk, + .get = nft_pipapo_get, + .privsize = nft_pipapo_privsize, + .estimate = nft_pipapo_estimate, + .init = nft_pipapo_init, + .destroy = nft_pipapo_destroy, + .gc_init = nft_pipapo_gc_init, + .elemsize = offsetof(struct nft_pipapo_elem, ext), + }, +}; -- cgit v1.2.3-71-gd317 From 2e24cd755552350b94a7617617c6877b8cbcb701 Mon Sep 17 00:00:00 2001 From: Cong Wang Date: Thu, 23 Jan 2020 16:26:18 -0800 Subject: net_sched: fix ops->bind_class() implementations The current implementations of ops->bind_class() merely search for the classid and update the class in struct tcf_result, without invoking either cl_ops->bind_tcf() or cl_ops->unbind_tcf(). This breaks their design, as qdiscs like cbq use them to count filters too. This is why syzbot triggered the warning in cbq_destroy_class(). In order to fix this, we have to call cl_ops->bind_tcf() and cl_ops->unbind_tcf() like the filter binding path. This patch does so by refactoring out two helper functions, __tcf_bind_filter() and __tcf_unbind_filter(), which are lockless and accept a Qdisc pointer, then teaching each implementation to call them correctly. Note, we merely pass the Qdisc pointer as an opaque pointer to each filter; they only need to pass it down to the helper functions without understanding it at all. Fixes: 07d79fc7d94e ("net_sched: add reverse binding for tc class") Reported-and-tested-by: syzbot+0a0596220218fcb603a8@syzkaller.appspotmail.com Reported-and-tested-by: syzbot+63bdb6006961d8c917c6@syzkaller.appspotmail.com Cc: Jamal Hadi Salim Cc: Jiri Pirko Signed-off-by: Cong Wang Signed-off-by: David S.
Miller --- include/net/pkt_cls.h | 33 +++++++++++++++++++-------------- include/net/sch_generic.h | 3 ++- net/sched/cls_basic.c | 11 ++++++++--- net/sched/cls_bpf.c | 11 ++++++++--- net/sched/cls_flower.c | 11 ++++++++--- net/sched/cls_fw.c | 11 ++++++++--- net/sched/cls_matchall.c | 11 ++++++++--- net/sched/cls_route.c | 11 ++++++++--- net/sched/cls_rsvp.h | 11 ++++++++--- net/sched/cls_tcindex.c | 11 ++++++++--- net/sched/cls_u32.c | 11 ++++++++--- net/sched/sch_api.c | 6 ++++-- 12 files changed, 97 insertions(+), 44 deletions(-) (limited to 'include') diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h index ce036492986a..a972244ab193 100644 --- a/include/net/pkt_cls.h +++ b/include/net/pkt_cls.h @@ -141,31 +141,38 @@ __cls_set_class(unsigned long *clp, unsigned long cl) return xchg(clp, cl); } -static inline unsigned long -cls_set_class(struct Qdisc *q, unsigned long *clp, unsigned long cl) +static inline void +__tcf_bind_filter(struct Qdisc *q, struct tcf_result *r, unsigned long base) { - unsigned long old_cl; + unsigned long cl; - sch_tree_lock(q); - old_cl = __cls_set_class(clp, cl); - sch_tree_unlock(q); - return old_cl; + cl = q->ops->cl_ops->bind_tcf(q, base, r->classid); + cl = __cls_set_class(&r->class, cl); + if (cl) + q->ops->cl_ops->unbind_tcf(q, cl); } static inline void tcf_bind_filter(struct tcf_proto *tp, struct tcf_result *r, unsigned long base) { struct Qdisc *q = tp->chain->block->q; - unsigned long cl; /* Check q as it is not set for shared blocks. In that case, * setting class is not supported. */ if (!q) return; - cl = q->ops->cl_ops->bind_tcf(q, base, r->classid); - cl = cls_set_class(q, &r->class, cl); - if (cl) + sch_tree_lock(q); + __tcf_bind_filter(q, r, base); + sch_tree_unlock(q); +} + +static inline void +__tcf_unbind_filter(struct Qdisc *q, struct tcf_result *r) +{ + unsigned long cl; + + if ((cl = __cls_set_class(&r->class, 0)) != 0) q->ops->cl_ops->unbind_tcf(q, cl); } @@ -173,12 +180,10 @@ static inline void tcf_unbind_filter(struct tcf_proto *tp, struct tcf_result *r) { struct Qdisc *q = tp->chain->block->q; - unsigned long cl; if (!q) return; - if ((cl = __cls_set_class(&r->class, 0)) != 0) - q->ops->cl_ops->unbind_tcf(q, cl); + __tcf_unbind_filter(q, r); } struct tcf_exts { diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h index fceddf89592a..151208704ed2 100644 --- a/include/net/sch_generic.h +++ b/include/net/sch_generic.h @@ -318,7 +318,8 @@ struct tcf_proto_ops { void *type_data); void (*hw_del)(struct tcf_proto *tp, void *type_data); - void (*bind_class)(void *, u32, unsigned long); + void (*bind_class)(void *, u32, unsigned long, + void *, unsigned long); void * (*tmplt_create)(struct net *net, struct tcf_chain *chain, struct nlattr **tca, diff --git a/net/sched/cls_basic.c b/net/sched/cls_basic.c index 4aafbe3d435c..f256a7c69093 100644 --- a/net/sched/cls_basic.c +++ b/net/sched/cls_basic.c @@ -263,12 +263,17 @@ skip: } } -static void basic_bind_class(void *fh, u32 classid, unsigned long cl) +static void basic_bind_class(void *fh, u32 classid, unsigned long cl, void *q, + unsigned long base) { struct basic_filter *f = fh; - if (f && f->res.classid == classid) - f->res.class = cl; + if (f && f->res.classid == classid) { + if (cl) + __tcf_bind_filter(q, &f->res, base); + else + __tcf_unbind_filter(q, &f->res); + } } static int basic_dump(struct net *net, struct tcf_proto *tp, void *fh, diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c index 8229ed4a67be..6e3e63db0e01 100644 --- a/net/sched/cls_bpf.c +++ 
b/net/sched/cls_bpf.c @@ -631,12 +631,17 @@ nla_put_failure: return -1; } -static void cls_bpf_bind_class(void *fh, u32 classid, unsigned long cl) +static void cls_bpf_bind_class(void *fh, u32 classid, unsigned long cl, + void *q, unsigned long base) { struct cls_bpf_prog *prog = fh; - if (prog && prog->res.classid == classid) - prog->res.class = cl; + if (prog && prog->res.classid == classid) { + if (cl) + __tcf_bind_filter(q, &prog->res, base); + else + __tcf_unbind_filter(q, &prog->res); + } } static void cls_bpf_walk(struct tcf_proto *tp, struct tcf_walker *arg, diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c index b0f42e62dd76..f9c0d1e8d380 100644 --- a/net/sched/cls_flower.c +++ b/net/sched/cls_flower.c @@ -2765,12 +2765,17 @@ nla_put_failure: return -EMSGSIZE; } -static void fl_bind_class(void *fh, u32 classid, unsigned long cl) +static void fl_bind_class(void *fh, u32 classid, unsigned long cl, void *q, + unsigned long base) { struct cls_fl_filter *f = fh; - if (f && f->res.classid == classid) - f->res.class = cl; + if (f && f->res.classid == classid) { + if (cl) + __tcf_bind_filter(q, &f->res, base); + else + __tcf_unbind_filter(q, &f->res); + } } static bool fl_delete_empty(struct tcf_proto *tp) diff --git a/net/sched/cls_fw.c b/net/sched/cls_fw.c index c9496c920d6f..ec945294626a 100644 --- a/net/sched/cls_fw.c +++ b/net/sched/cls_fw.c @@ -419,12 +419,17 @@ nla_put_failure: return -1; } -static void fw_bind_class(void *fh, u32 classid, unsigned long cl) +static void fw_bind_class(void *fh, u32 classid, unsigned long cl, void *q, + unsigned long base) { struct fw_filter *f = fh; - if (f && f->res.classid == classid) - f->res.class = cl; + if (f && f->res.classid == classid) { + if (cl) + __tcf_bind_filter(q, &f->res, base); + else + __tcf_unbind_filter(q, &f->res); + } } static struct tcf_proto_ops cls_fw_ops __read_mostly = { diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c index 7fc2eb62aa98..039cc86974f4 100644 --- a/net/sched/cls_matchall.c +++ b/net/sched/cls_matchall.c @@ -393,12 +393,17 @@ nla_put_failure: return -1; } -static void mall_bind_class(void *fh, u32 classid, unsigned long cl) +static void mall_bind_class(void *fh, u32 classid, unsigned long cl, void *q, + unsigned long base) { struct cls_mall_head *head = fh; - if (head && head->res.classid == classid) - head->res.class = cl; + if (head && head->res.classid == classid) { + if (cl) + __tcf_bind_filter(q, &head->res, base); + else + __tcf_unbind_filter(q, &head->res); + } } static struct tcf_proto_ops cls_mall_ops __read_mostly = { diff --git a/net/sched/cls_route.c b/net/sched/cls_route.c index 2d9e0b4484ea..6f8786b06bde 100644 --- a/net/sched/cls_route.c +++ b/net/sched/cls_route.c @@ -641,12 +641,17 @@ nla_put_failure: return -1; } -static void route4_bind_class(void *fh, u32 classid, unsigned long cl) +static void route4_bind_class(void *fh, u32 classid, unsigned long cl, void *q, + unsigned long base) { struct route4_filter *f = fh; - if (f && f->res.classid == classid) - f->res.class = cl; + if (f && f->res.classid == classid) { + if (cl) + __tcf_bind_filter(q, &f->res, base); + else + __tcf_unbind_filter(q, &f->res); + } } static struct tcf_proto_ops cls_route4_ops __read_mostly = { diff --git a/net/sched/cls_rsvp.h b/net/sched/cls_rsvp.h index 2f3c03b25d5d..c22624131949 100644 --- a/net/sched/cls_rsvp.h +++ b/net/sched/cls_rsvp.h @@ -738,12 +738,17 @@ nla_put_failure: return -1; } -static void rsvp_bind_class(void *fh, u32 classid, unsigned long cl) +static void 
rsvp_bind_class(void *fh, u32 classid, unsigned long cl, void *q, + unsigned long base) { struct rsvp_filter *f = fh; - if (f && f->res.classid == classid) - f->res.class = cl; + if (f && f->res.classid == classid) { + if (cl) + __tcf_bind_filter(q, &f->res, base); + else + __tcf_unbind_filter(q, &f->res); + } } static struct tcf_proto_ops RSVP_OPS __read_mostly = { diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c index e573e5a5c794..3d4a1280352f 100644 --- a/net/sched/cls_tcindex.c +++ b/net/sched/cls_tcindex.c @@ -654,12 +654,17 @@ nla_put_failure: return -1; } -static void tcindex_bind_class(void *fh, u32 classid, unsigned long cl) +static void tcindex_bind_class(void *fh, u32 classid, unsigned long cl, + void *q, unsigned long base) { struct tcindex_filter_result *r = fh; - if (r && r->res.classid == classid) - r->res.class = cl; + if (r && r->res.classid == classid) { + if (cl) + __tcf_bind_filter(q, &r->res, base); + else + __tcf_unbind_filter(q, &r->res); + } } static struct tcf_proto_ops cls_tcindex_ops __read_mostly = { diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c index a0e6fac613de..e15ff335953d 100644 --- a/net/sched/cls_u32.c +++ b/net/sched/cls_u32.c @@ -1255,12 +1255,17 @@ static int u32_reoffload(struct tcf_proto *tp, bool add, flow_setup_cb_t *cb, return 0; } -static void u32_bind_class(void *fh, u32 classid, unsigned long cl) +static void u32_bind_class(void *fh, u32 classid, unsigned long cl, void *q, + unsigned long base) { struct tc_u_knode *n = fh; - if (n && n->res.classid == classid) - n->res.class = cl; + if (n && n->res.classid == classid) { + if (cl) + __tcf_bind_filter(q, &n->res, base); + else + __tcf_unbind_filter(q, &n->res); + } } static int u32_dump(struct net *net, struct tcf_proto *tp, void *fh, diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c index 1047825d9f48..943ad3425380 100644 --- a/net/sched/sch_api.c +++ b/net/sched/sch_api.c @@ -1891,8 +1891,9 @@ static int tclass_del_notify(struct net *net, struct tcf_bind_args { struct tcf_walker w; - u32 classid; + unsigned long base; unsigned long cl; + u32 classid; }; static int tcf_node_bind(struct tcf_proto *tp, void *n, struct tcf_walker *arg) @@ -1903,7 +1904,7 @@ static int tcf_node_bind(struct tcf_proto *tp, void *n, struct tcf_walker *arg) struct Qdisc *q = tcf_block_q(tp->chain->block); sch_tree_lock(q); - tp->ops->bind_class(n, a->classid, a->cl); + tp->ops->bind_class(n, a->classid, a->cl, q, a->base); sch_tree_unlock(q); } return 0; @@ -1936,6 +1937,7 @@ static void tc_bind_tclass(struct Qdisc *q, u32 portid, u32 clid, arg.w.fn = tcf_node_bind; arg.classid = clid; + arg.base = cl; arg.cl = new_cl; tp->ops->walk(tp, &arg.w, true); } -- cgit v1.2.3-71-gd317 From 3b33583265ed3b0ae76eddbabf9d038b4076d1a9 Mon Sep 17 00:00:00 2001 From: Steffen Klassert Date: Sat, 25 Jan 2020 11:26:42 +0100 Subject: net: Add fraglist GRO/GSO feature flags This adds new Fraglist GRO/GSO feature flags. They will be used to configure fraglist GRO/GSO, which will be implemented in some followup patches. Signed-off-by: Steffen Klassert Reviewed-by: Willem de Bruijn Signed-off-by: David S.
Miller --- include/linux/netdev_features.h | 6 +++++- include/linux/netdevice.h | 1 + include/linux/skbuff.h | 2 ++ net/ethtool/common.c | 1 + 4 files changed, 9 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/include/linux/netdev_features.h b/include/linux/netdev_features.h index 4b19c544c59a..b239507da2a0 100644 --- a/include/linux/netdev_features.h +++ b/include/linux/netdev_features.h @@ -53,8 +53,9 @@ enum { NETIF_F_GSO_ESP_BIT, /* ... ESP with TSO */ NETIF_F_GSO_UDP_BIT, /* ... UFO, deprecated except tuntap */ NETIF_F_GSO_UDP_L4_BIT, /* ... UDP payload GSO (not UFO) */ + NETIF_F_GSO_FRAGLIST_BIT, /* ... Fraglist GSO */ /**/NETIF_F_GSO_LAST = /* last bit, see GSO_MASK */ - NETIF_F_GSO_UDP_L4_BIT, + NETIF_F_GSO_FRAGLIST_BIT, NETIF_F_FCOE_CRC_BIT, /* FCoE CRC32 */ NETIF_F_SCTP_CRC_BIT, /* SCTP checksum offload */ @@ -80,6 +81,7 @@ enum { NETIF_F_GRO_HW_BIT, /* Hardware Generic receive offload */ NETIF_F_HW_TLS_RECORD_BIT, /* Offload TLS record */ + NETIF_F_GRO_FRAGLIST_BIT, /* Fraglist GRO */ /* * Add your fresh new feature above and remember to update @@ -150,6 +152,8 @@ enum { #define NETIF_F_GSO_UDP_L4 __NETIF_F(GSO_UDP_L4) #define NETIF_F_HW_TLS_TX __NETIF_F(HW_TLS_TX) #define NETIF_F_HW_TLS_RX __NETIF_F(HW_TLS_RX) +#define NETIF_F_GRO_FRAGLIST __NETIF_F(GRO_FRAGLIST) +#define NETIF_F_GSO_FRAGLIST __NETIF_F(GSO_FRAGLIST) /* Finds the next feature with the highest number of the range of start till 0. */ diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 78e9c6c1b131..fcc76b890f50 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -4570,6 +4570,7 @@ static inline bool net_gso_ok(netdev_features_t features, int gso_type) BUILD_BUG_ON(SKB_GSO_ESP != (NETIF_F_GSO_ESP >> NETIF_F_GSO_SHIFT)); BUILD_BUG_ON(SKB_GSO_UDP != (NETIF_F_GSO_UDP >> NETIF_F_GSO_SHIFT)); BUILD_BUG_ON(SKB_GSO_UDP_L4 != (NETIF_F_GSO_UDP_L4 >> NETIF_F_GSO_SHIFT)); + BUILD_BUG_ON(SKB_GSO_FRAGLIST != (NETIF_F_GSO_FRAGLIST >> NETIF_F_GSO_SHIFT)); return (features & feature) == feature; } diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 26beae7db264..23aaaf08e1e9 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -592,6 +592,8 @@ enum { SKB_GSO_UDP = 1 << 16, SKB_GSO_UDP_L4 = 1 << 17, + + SKB_GSO_FRAGLIST = 1 << 18, }; #if BITS_PER_LONG > 32 diff --git a/net/ethtool/common.c b/net/ethtool/common.c index e621b1694d2f..c7b8956c3827 100644 --- a/net/ethtool/common.c +++ b/net/ethtool/common.c @@ -59,6 +59,7 @@ const char netdev_features_strings[NETDEV_FEATURE_COUNT][ETH_GSTRING_LEN] = { [NETIF_F_HW_TLS_RECORD_BIT] = "tls-hw-record", [NETIF_F_HW_TLS_TX_BIT] = "tls-hw-tx-offload", [NETIF_F_HW_TLS_RX_BIT] = "tls-hw-rx-offload", + [NETIF_F_GRO_FRAGLIST_BIT] = "rx-gro-list", }; const char -- cgit v1.2.3-71-gd317 From 1a3c998f3a27ab6ecf56bdbb17e27e55fd6d47cd Mon Sep 17 00:00:00 2001 From: Steffen Klassert Date: Sat, 25 Jan 2020 11:26:43 +0100 Subject: net: Add a netdev software feature set that defaults to off. The previous patch added the NETIF_F_GRO_FRAGLIST feature. This is a software feature that should default to off. Current software features default to on, so add a new feature set that defaults to off. Signed-off-by: Steffen Klassert Reviewed-by: Willem de Bruijn Signed-off-by: David S. 
Miller --- include/linux/netdev_features.h | 3 +++ net/core/dev.c | 2 +- 2 files changed, 4 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/include/linux/netdev_features.h b/include/linux/netdev_features.h index b239507da2a0..34d050bb1ae6 100644 --- a/include/linux/netdev_features.h +++ b/include/linux/netdev_features.h @@ -230,6 +230,9 @@ static inline int find_next_netdev_feature(u64 feature, unsigned long start) /* changeable features with no special hardware requirements */ #define NETIF_F_SOFT_FEATURES (NETIF_F_GSO | NETIF_F_GRO) +/* Changeable features with no special hardware requirements that default to off. */ +#define NETIF_F_SOFT_FEATURES_OFF NETIF_F_GRO_FRAGLIST + #define NETIF_F_VLAN_FEATURES (NETIF_F_HW_VLAN_CTAG_FILTER | \ NETIF_F_HW_VLAN_CTAG_RX | \ NETIF_F_HW_VLAN_CTAG_TX | \ diff --git a/net/core/dev.c b/net/core/dev.c index c806b078097b..a3b154a4b4f9 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -9283,7 +9283,7 @@ int register_netdevice(struct net_device *dev) /* Transfer changeable features to wanted_features and enable * software offloads (GSO and GRO). */ - dev->hw_features |= NETIF_F_SOFT_FEATURES; + dev->hw_features |= (NETIF_F_SOFT_FEATURES | NETIF_F_SOFT_FEATURES_OFF); dev->features |= NETIF_F_SOFT_FEATURES; if (dev->netdev_ops->ndo_udp_tunnel_add) { -- cgit v1.2.3-71-gd317 From 3a1296a38d0cf62bffb9a03c585cbd5dbf15d596 Mon Sep 17 00:00:00 2001 From: Steffen Klassert Date: Sat, 25 Jan 2020 11:26:44 +0100 Subject: net: Support GRO/GSO fraglist chaining. This patch adds the core functions to chain/unchain GSO skbs at the frag_list pointer. This also adds a new GSO type SKB_GSO_FRAGLIST and an is_flist flag to napi_gro_cb which indicates that this flow will be GROed by fraglist chaining. Signed-off-by: Steffen Klassert Reviewed-by: Willem de Bruijn Signed-off-by: David S. Miller --- include/linux/netdevice.h | 4 ++- include/linux/skbuff.h | 2 ++ net/core/dev.c | 2 +- net/core/skbuff.c | 91 +++++++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 97 insertions(+), 2 deletions(-) (limited to 'include') diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index fcc76b890f50..20445f94eb1c 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -2326,7 +2326,8 @@ struct napi_gro_cb { /* Number of gro_receive callbacks this packet already went through */ u8 recursion_counter:4; - /* 1 bit hole */ + /* GRO is done by frag_list pointer chaining.
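Segments are linked off the head skb's frag_list and are split back out again by skb_segment_list() on segmentation.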
*/ + u8 is_flist:1; /* used to support CHECKSUM_COMPLETE for tunneling protocols */ __wsum csum; @@ -2694,6 +2695,7 @@ struct net_device *dev_get_by_napi_id(unsigned int napi_id); int netdev_get_name(struct net *net, char *name, int ifindex); int dev_restart(struct net_device *dev); int skb_gro_receive(struct sk_buff *p, struct sk_buff *skb); +int skb_gro_receive_list(struct sk_buff *p, struct sk_buff *skb); static inline unsigned int skb_gro_offset(const struct sk_buff *skb) { diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 23aaaf08e1e9..3d13a4b717e9 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -3535,6 +3535,8 @@ void skb_scrub_packet(struct sk_buff *skb, bool xnet); bool skb_gso_validate_network_len(const struct sk_buff *skb, unsigned int mtu); bool skb_gso_validate_mac_len(const struct sk_buff *skb, unsigned int len); struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features); +struct sk_buff *skb_segment_list(struct sk_buff *skb, netdev_features_t features, + unsigned int offset); struct sk_buff *skb_vlan_untag(struct sk_buff *skb); int skb_ensure_writable(struct sk_buff *skb, int write_len); int __skb_vlan_pop(struct sk_buff *skb, u16 *vlan_tci); diff --git a/net/core/dev.c b/net/core/dev.c index a3b154a4b4f9..ce8900dbd9ea 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -3249,7 +3249,7 @@ struct sk_buff *__skb_gso_segment(struct sk_buff *skb, segs = skb_mac_gso_segment(skb, features); - if (unlikely(skb_needs_check(skb, tx_path) && !IS_ERR(segs))) + if (segs != skb && unlikely(skb_needs_check(skb, tx_path) && !IS_ERR(segs))) skb_warn_bad_offload(skb); return segs; diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 48a7029529c9..864cb9e9622f 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -3639,6 +3639,97 @@ static inline skb_frag_t skb_head_frag_to_page_desc(struct sk_buff *frag_skb) return head_frag; } +struct sk_buff *skb_segment_list(struct sk_buff *skb, + netdev_features_t features, + unsigned int offset) +{ + struct sk_buff *list_skb = skb_shinfo(skb)->frag_list; + unsigned int tnl_hlen = skb_tnl_header_len(skb); + unsigned int delta_truesize = 0; + unsigned int delta_len = 0; + struct sk_buff *tail = NULL; + struct sk_buff *nskb; + + skb_push(skb, -skb_network_offset(skb) + offset); + + skb_shinfo(skb)->frag_list = NULL; + + do { + nskb = list_skb; + list_skb = list_skb->next; + + if (!tail) + skb->next = nskb; + else + tail->next = nskb; + + tail = nskb; + + delta_len += nskb->len; + delta_truesize += nskb->truesize; + + skb_push(nskb, -skb_network_offset(nskb) + offset); + + __copy_skb_header(nskb, skb); + + skb_headers_offset_update(nskb, skb_headroom(nskb) - skb_headroom(skb)); + skb_copy_from_linear_data_offset(skb, -tnl_hlen, + nskb->data - tnl_hlen, + offset + tnl_hlen); + + if (skb_needs_linearize(nskb, features) && + __skb_linearize(nskb)) + goto err_linearize; + + } while (list_skb); + + skb->truesize = skb->truesize - delta_truesize; + skb->data_len = skb->data_len - delta_len; + skb->len = skb->len - delta_len; + + skb_gso_reset(skb); + + skb->prev = tail; + + if (skb_needs_linearize(skb, features) && + __skb_linearize(skb)) + goto err_linearize; + + skb_get(skb); + + return skb; + +err_linearize: + kfree_skb_list(skb->next); + skb->next = NULL; + return ERR_PTR(-ENOMEM); +} +EXPORT_SYMBOL_GPL(skb_segment_list); + +int skb_gro_receive_list(struct sk_buff *p, struct sk_buff *skb) +{ + if (unlikely(p->len + skb->len >= 65536)) + return -E2BIG; + + if (NAPI_GRO_CB(p)->last == p) + 
skb_shinfo(p)->frag_list = skb; + else + NAPI_GRO_CB(p)->last->next = skb; + + skb_pull(skb, skb_gro_offset(skb)); + + NAPI_GRO_CB(p)->last = skb; + NAPI_GRO_CB(p)->count++; + p->data_len += skb->len; + p->truesize += skb->truesize; + p->len += skb->len; + + NAPI_GRO_CB(skb)->same_flow = 1; + + return 0; +} +EXPORT_SYMBOL_GPL(skb_gro_receive_list); + /** * skb_segment - Perform protocol segmentation on skb. * @head_skb: buffer to segment -- cgit v1.2.3-71-gd317 From 9fd1ff5d2ac7181844735806b0a703c942365291 Mon Sep 17 00:00:00 2001 From: Steffen Klassert Date: Sat, 25 Jan 2020 11:26:45 +0100 Subject: udp: Support UDP fraglist GRO/GSO. This patch extends UDP GRO to support fraglist GRO/GSO by using the previously introduced infrastructure. If the feature is enabled, all UDP packets are going to fraglist GRO (local input and forward). After validating the csum, we mark ip_summed as CHECKSUM_UNNECESSARY for fraglist GRO packets to make sure that the csum is not touched. Signed-off-by: Steffen Klassert Reviewed-by: Willem de Bruijn Signed-off-by: David S. Miller --- include/net/udp.h | 2 +- net/ipv4/udp_offload.c | 104 ++++++++++++++++++++++++++++++++++++++----------- net/ipv6/udp_offload.c | 27 ++++++++++++- 3 files changed, 107 insertions(+), 26 deletions(-) (limited to 'include') diff --git a/include/net/udp.h b/include/net/udp.h index bad74f780831..44e0e52b585c 100644 --- a/include/net/udp.h +++ b/include/net/udp.h @@ -167,7 +167,7 @@ typedef struct sock *(*udp_lookup_t)(struct sk_buff *skb, __be16 sport, __be16 dport); struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb, - struct udphdr *uh, udp_lookup_t lookup); + struct udphdr *uh, struct sock *sk); int udp_gro_complete(struct sk_buff *skb, int nhoff, udp_lookup_t lookup); struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb, diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c index b25e42100ceb..1a98583a79f4 100644 --- a/net/ipv4/udp_offload.c +++ b/net/ipv4/udp_offload.c @@ -184,6 +184,20 @@ out_unlock: } EXPORT_SYMBOL(skb_udp_tunnel_segment); +static struct sk_buff *__udp_gso_segment_list(struct sk_buff *skb, + netdev_features_t features) +{ + unsigned int mss = skb_shinfo(skb)->gso_size; + + skb = skb_segment_list(skb, features, skb_mac_header_len(skb)); + if (IS_ERR(skb)) + return skb; + + udp_hdr(skb)->len = htons(sizeof(struct udphdr) + mss); + + return skb; +} + struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb, netdev_features_t features) { @@ -196,6 +210,9 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb, __sum16 check; __be16 newlen; + if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST) + return __udp_gso_segment_list(gso_skb, features); + mss = skb_shinfo(gso_skb)->gso_size; if (gso_skb->len <= sizeof(*uh) + mss) return ERR_PTR(-EINVAL); @@ -354,6 +371,7 @@ static struct sk_buff *udp_gro_receive_segment(struct list_head *head, struct udphdr *uh2; struct sk_buff *p; unsigned int ulen; + int ret = 0; /* requires non zero csum, for symmetry with GSO */ if (!uh->check) { @@ -369,7 +387,6 @@ static struct sk_buff *udp_gro_receive_segment(struct list_head *head, } /* pull encapsulating udp header */ skb_gro_pull(skb, sizeof(struct udphdr)); - skb_gro_postpull_rcsum(skb, uh, sizeof(struct udphdr)); list_for_each_entry(p, head, list) { if (!NAPI_GRO_CB(p)->same_flow) @@ -383,14 +400,40 @@ static struct sk_buff *udp_gro_receive_segment(struct list_head *head, continue; } + if (NAPI_GRO_CB(skb)->is_flist != NAPI_GRO_CB(p)->is_flist) { + NAPI_GRO_CB(skb)->flush = 1; + return p; 
+ } + /* Terminate the flow on len mismatch or if it grow "too much". * Under small packet flood GRO count could elsewhere grow a lot * leading to excessive truesize values. * On len mismatch merge the first packet shorter than gso_size, * otherwise complete the GRO packet. */ - if (ulen > ntohs(uh2->len) || skb_gro_receive(p, skb) || - ulen != ntohs(uh2->len) || + if (ulen > ntohs(uh2->len)) { + pp = p; + } else { + if (NAPI_GRO_CB(skb)->is_flist) { + if (!pskb_may_pull(skb, skb_gro_offset(skb))) { + NAPI_GRO_CB(skb)->flush = 1; + return NULL; + } + if ((skb->ip_summed != p->ip_summed) || + (skb->csum_level != p->csum_level)) { + NAPI_GRO_CB(skb)->flush = 1; + return NULL; + } + ret = skb_gro_receive_list(p, skb); + } else { + skb_gro_postpull_rcsum(skb, uh, + sizeof(struct udphdr)); + + ret = skb_gro_receive(p, skb); + } + } + + if (ret || ulen != ntohs(uh2->len) || NAPI_GRO_CB(p)->count >= UDP_GRO_CNT_MAX) pp = p; @@ -401,36 +444,29 @@ static struct sk_buff *udp_gro_receive_segment(struct list_head *head, return NULL; } -INDIRECT_CALLABLE_DECLARE(struct sock *udp6_lib_lookup_skb(struct sk_buff *skb, - __be16 sport, __be16 dport)); struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb, - struct udphdr *uh, udp_lookup_t lookup) + struct udphdr *uh, struct sock *sk) { struct sk_buff *pp = NULL; struct sk_buff *p; struct udphdr *uh2; unsigned int off = skb_gro_offset(skb); int flush = 1; - struct sock *sk; - rcu_read_lock(); - sk = INDIRECT_CALL_INET(lookup, udp6_lib_lookup_skb, - udp4_lib_lookup_skb, skb, uh->source, uh->dest); - if (!sk) - goto out_unlock; + if (skb->dev->features & NETIF_F_GRO_FRAGLIST) + NAPI_GRO_CB(skb)->is_flist = sk ? !udp_sk(sk)->gro_enabled: 1; - if (udp_sk(sk)->gro_enabled) { + if ((sk && udp_sk(sk)->gro_enabled) || NAPI_GRO_CB(skb)->is_flist) { pp = call_gro_receive(udp_gro_receive_segment, head, skb); - rcu_read_unlock(); return pp; } - if (NAPI_GRO_CB(skb)->encap_mark || + if (!sk || NAPI_GRO_CB(skb)->encap_mark || (skb->ip_summed != CHECKSUM_PARTIAL && NAPI_GRO_CB(skb)->csum_cnt == 0 && !NAPI_GRO_CB(skb)->csum_valid) || !udp_sk(sk)->gro_receive) - goto out_unlock; + goto out; /* mark that this skb passed once through the tunnel gro layer */ NAPI_GRO_CB(skb)->encap_mark = 1; @@ -457,8 +493,7 @@ struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb, skb_gro_postpull_rcsum(skb, uh, sizeof(struct udphdr)); pp = call_gro_receive_sk(udp_sk(sk)->gro_receive, sk, head, skb); -out_unlock: - rcu_read_unlock(); +out: skb_gro_flush_final(skb, pp, flush); return pp; } @@ -468,8 +503,10 @@ INDIRECT_CALLABLE_SCOPE struct sk_buff *udp4_gro_receive(struct list_head *head, struct sk_buff *skb) { struct udphdr *uh = udp_gro_udphdr(skb); + struct sk_buff *pp; + struct sock *sk; - if (unlikely(!uh) || !static_branch_unlikely(&udp_encap_needed_key)) + if (unlikely(!uh)) goto flush; /* Don't bother verifying checksum if we're going to flush anyway. */ @@ -484,7 +521,11 @@ struct sk_buff *udp4_gro_receive(struct list_head *head, struct sk_buff *skb) inet_gro_compute_pseudo); skip: NAPI_GRO_CB(skb)->is_ipv6 = 0; - return udp_gro_receive(head, skb, uh, udp4_lib_lookup_skb); + rcu_read_lock(); + sk = static_branch_unlikely(&udp_encap_needed_key) ? 
udp4_lib_lookup_skb(skb, uh->source, uh->dest) : NULL; + pp = udp_gro_receive(head, skb, uh, sk); + rcu_read_unlock(); + return pp; flush: NAPI_GRO_CB(skb)->flush = 1; @@ -517,9 +558,7 @@ int udp_gro_complete(struct sk_buff *skb, int nhoff, rcu_read_lock(); sk = INDIRECT_CALL_INET(lookup, udp6_lib_lookup_skb, udp4_lib_lookup_skb, skb, uh->source, uh->dest); - if (sk && udp_sk(sk)->gro_enabled) { - err = udp_gro_complete_segment(skb); - } else if (sk && udp_sk(sk)->gro_complete) { + if (sk && udp_sk(sk)->gro_complete) { skb_shinfo(skb)->gso_type = uh->check ? SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL; @@ -529,6 +568,8 @@ int udp_gro_complete(struct sk_buff *skb, int nhoff, skb->encapsulation = 1; err = udp_sk(sk)->gro_complete(sk, skb, nhoff + sizeof(struct udphdr)); + } else { + err = udp_gro_complete_segment(skb); } rcu_read_unlock(); @@ -544,6 +585,23 @@ INDIRECT_CALLABLE_SCOPE int udp4_gro_complete(struct sk_buff *skb, int nhoff) const struct iphdr *iph = ip_hdr(skb); struct udphdr *uh = (struct udphdr *)(skb->data + nhoff); + if (NAPI_GRO_CB(skb)->is_flist) { + uh->len = htons(skb->len - nhoff); + + skb_shinfo(skb)->gso_type |= (SKB_GSO_FRAGLIST|SKB_GSO_UDP_L4); + skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count; + + if (skb->ip_summed == CHECKSUM_UNNECESSARY) { + if (skb->csum_level < SKB_MAX_CSUM_LEVEL) + skb->csum_level++; + } else { + skb->ip_summed = CHECKSUM_UNNECESSARY; + skb->csum_level = 0; + } + + return 0; + } + if (uh->check) uh->check = ~udp_v4_check(skb->len - nhoff, iph->saddr, iph->daddr, 0); diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c index f0d5fc27d0b5..584157a07759 100644 --- a/net/ipv6/udp_offload.c +++ b/net/ipv6/udp_offload.c @@ -115,8 +115,10 @@ INDIRECT_CALLABLE_SCOPE struct sk_buff *udp6_gro_receive(struct list_head *head, struct sk_buff *skb) { struct udphdr *uh = udp_gro_udphdr(skb); + struct sk_buff *pp; + struct sock *sk; - if (unlikely(!uh) || !static_branch_unlikely(&udpv6_encap_needed_key)) + if (unlikely(!uh)) goto flush; /* Don't bother verifying checksum if we're going to flush anyway. */ @@ -132,7 +134,11 @@ struct sk_buff *udp6_gro_receive(struct list_head *head, struct sk_buff *skb) skip: NAPI_GRO_CB(skb)->is_ipv6 = 1; - return udp_gro_receive(head, skb, uh, udp6_lib_lookup_skb); + rcu_read_lock(); + sk = static_branch_unlikely(&udpv6_encap_needed_key) ? 
udp6_lib_lookup_skb(skb, uh->source, uh->dest) : NULL; + pp = udp_gro_receive(head, skb, uh, sk); + rcu_read_unlock(); + return pp; flush: NAPI_GRO_CB(skb)->flush = 1; @@ -144,6 +150,23 @@ INDIRECT_CALLABLE_SCOPE int udp6_gro_complete(struct sk_buff *skb, int nhoff) const struct ipv6hdr *ipv6h = ipv6_hdr(skb); struct udphdr *uh = (struct udphdr *)(skb->data + nhoff); + if (NAPI_GRO_CB(skb)->is_flist) { + uh->len = htons(skb->len - nhoff); + + skb_shinfo(skb)->gso_type |= (SKB_GSO_FRAGLIST|SKB_GSO_UDP_L4); + skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count; + + if (skb->ip_summed == CHECKSUM_UNNECESSARY) { + if (skb->csum_level < SKB_MAX_CSUM_LEVEL) + skb->csum_level++; + } else { + skb->ip_summed = CHECKSUM_UNNECESSARY; + skb->csum_level = 0; + } + + return 0; + } + if (uh->check) uh->check = ~udp_v6_check(skb->len - nhoff, &ipv6h->saddr, &ipv6h->daddr, 0); -- cgit v1.2.3-71-gd317 From 93642e14bd50e59b11cf6389ce3fc243e932777a Mon Sep 17 00:00:00 2001 From: Jiri Pirko Date: Sat, 25 Jan 2020 12:17:08 +0100 Subject: net: introduce dev_net notifier register/unregister variants Introduce dev_net variants of netdev notifier register/unregister functions and allow per-net notifier to follow the netdevice into the namespace it is moved to. Signed-off-by: Jiri Pirko Signed-off-by: David S. Miller --- include/linux/netdevice.h | 17 +++++++++++++++++ net/core/dev.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 63 insertions(+) (limited to 'include') diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 20445f94eb1c..4626188a754b 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -939,6 +939,11 @@ struct netdev_name_node { int netdev_name_node_alt_create(struct net_device *dev, const char *name); int netdev_name_node_alt_destroy(struct net_device *dev, const char *name); +struct netdev_net_notifier { + struct list_head list; + struct notifier_block *nb; +}; + /* * This structure defines the management hooks for network devices. * The following hooks can be defined; unless noted otherwise, they are @@ -1793,6 +1798,10 @@ enum netdev_priv_flags { * * @wol_enabled: Wake-on-LAN is enabled * + * @net_notifier_list: List of per-net netdev notifier block + * that follow this device when it is moved + * to another network namespace. + * * FIXME: cleanup struct net_device such that network protocol info * moves out. 
*/ @@ -2085,6 +2094,8 @@ struct net_device { struct lock_class_key addr_list_lock_key; bool proto_down; unsigned wol_enabled:1; + + struct list_head net_notifier_list; }; #define to_net_dev(d) container_of(d, struct net_device, dev) @@ -2529,6 +2540,12 @@ int unregister_netdevice_notifier(struct notifier_block *nb); int register_netdevice_notifier_net(struct net *net, struct notifier_block *nb); int unregister_netdevice_notifier_net(struct net *net, struct notifier_block *nb); +int register_netdevice_notifier_dev_net(struct net_device *dev, + struct notifier_block *nb, + struct netdev_net_notifier *nn); +int unregister_netdevice_notifier_dev_net(struct net_device *dev, + struct notifier_block *nb, + struct netdev_net_notifier *nn); struct netdev_notifier_info { struct net_device *dev; diff --git a/net/core/dev.c b/net/core/dev.c index b521b509a653..38bc35da39f7 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -1874,6 +1874,48 @@ int unregister_netdevice_notifier_net(struct net *net, } EXPORT_SYMBOL(unregister_netdevice_notifier_net); +int register_netdevice_notifier_dev_net(struct net_device *dev, + struct notifier_block *nb, + struct netdev_net_notifier *nn) +{ + int err; + + rtnl_lock(); + err = __register_netdevice_notifier_net(dev_net(dev), nb, false); + if (!err) { + nn->nb = nb; + list_add(&nn->list, &dev->net_notifier_list); + } + rtnl_unlock(); + return err; +} +EXPORT_SYMBOL(register_netdevice_notifier_dev_net); + +int unregister_netdevice_notifier_dev_net(struct net_device *dev, + struct notifier_block *nb, + struct netdev_net_notifier *nn) +{ + int err; + + rtnl_lock(); + list_del(&nn->list); + err = __unregister_netdevice_notifier_net(dev_net(dev), nb); + rtnl_unlock(); + return err; +} +EXPORT_SYMBOL(unregister_netdevice_notifier_dev_net); + +static void move_netdevice_notifiers_dev_net(struct net_device *dev, + struct net *net) +{ + struct netdev_net_notifier *nn; + + list_for_each_entry(nn, &dev->net_notifier_list, list) { + __unregister_netdevice_notifier_net(dev_net(dev), nn->nb); + __register_netdevice_notifier_net(net, nn->nb, true); + } +} + /** * call_netdevice_notifiers_info - call all network notifier blocks * @val: value passed unmodified to notifier function @@ -9786,6 +9828,7 @@ struct net_device *alloc_netdev_mqs(int sizeof_priv, const char *name, INIT_LIST_HEAD(&dev->adj_list.lower); INIT_LIST_HEAD(&dev->ptype_all); INIT_LIST_HEAD(&dev->ptype_specific); + INIT_LIST_HEAD(&dev->net_notifier_list); #ifdef CONFIG_NET_SCHED hash_init(dev->qdisc_hash); #endif @@ -10049,6 +10092,9 @@ int dev_change_net_namespace(struct net_device *dev, struct net *net, const char kobject_uevent(&dev->dev.kobj, KOBJ_REMOVE); netdev_adjacent_del_links(dev); + /* Move per-net netdevice notifiers that are following the netdevice */ + move_netdevice_notifiers_dev_net(dev, net); + /* Actually switch the network namespace */ dev_net_set(dev, net); dev->ifindex = new_ifindex; -- cgit v1.2.3-71-gd317 From a85dd3a5170c8812cd835ea968ccadf0ebf1648e Mon Sep 17 00:00:00 2001 From: Heiner Kallweit Date: Sat, 25 Jan 2020 13:42:14 +0100 Subject: net: remove eth_change_mtu All usage of this function was removed three years ago, and the function was marked as deprecated: a52ad514fdf3 ("net: deprecate eth_change_mtu, remove usage") So I think we can remove it now. Signed-off-by: Heiner Kallweit Signed-off-by: David S. 
Miller --- include/linux/etherdevice.h | 1 - net/ethernet/eth.c | 16 ---------------- 2 files changed, 17 deletions(-) (limited to 'include') diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h index f6564b572d77..8801f1f986e5 100644 --- a/include/linux/etherdevice.h +++ b/include/linux/etherdevice.h @@ -43,7 +43,6 @@ __be16 eth_header_parse_protocol(const struct sk_buff *skb); int eth_prepare_mac_addr_change(struct net_device *dev, void *p); void eth_commit_mac_addr_change(struct net_device *dev, void *p); int eth_mac_addr(struct net_device *dev, void *p); -int eth_change_mtu(struct net_device *dev, int new_mtu); int eth_validate_addr(struct net_device *dev); struct net_device *alloc_etherdev_mqs(int sizeof_priv, unsigned int txqs, diff --git a/net/ethernet/eth.c b/net/ethernet/eth.c index 9040fe55e0f5..c8b903302ff2 100644 --- a/net/ethernet/eth.c +++ b/net/ethernet/eth.c @@ -335,22 +335,6 @@ int eth_mac_addr(struct net_device *dev, void *p) } EXPORT_SYMBOL(eth_mac_addr); -/** - * eth_change_mtu - set new MTU size - * @dev: network device - * @new_mtu: new Maximum Transfer Unit - * - * Allow changing MTU size. Needs to be overridden for devices - * supporting jumbo frames. - */ -int eth_change_mtu(struct net_device *dev, int new_mtu) -{ - netdev_warn(dev, "%s is deprecated\n", __func__); - dev->mtu = new_mtu; - return 0; -} -EXPORT_SYMBOL(eth_change_mtu); - int eth_validate_addr(struct net_device *dev) { if (!is_valid_ether_addr(dev->dev_addr)) -- cgit v1.2.3-71-gd317 From 6a94b8ccf6b77f005ab1b36a878e1d81df0c033e Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Sun, 26 Jan 2020 23:11:04 +0100 Subject: ethtool: provide message mask with DEBUG_GET request Implement DEBUG_GET request to get debugging settings for a device. At the moment, only message mask corresponding to message level as reported by ETHTOOL_GMSGLVL ioctl request is provided. (It is called message level in ioctl interface but almost all drivers interpret it as a bit mask.) As part of the implementation, provide symbolic names for message mask bits as ETH_SS_MSG_CLASSES string set. Signed-off-by: Michal Kubecek Reviewed-by: Andrew Lunn Signed-off-by: David S. 
Miller --- Documentation/networking/ethtool-netlink.rst | 34 +++++++++++- include/linux/netdevice.h | 56 +++++++++++++------ include/uapi/linux/ethtool.h | 2 + include/uapi/linux/ethtool_netlink.h | 14 +++++ net/ethtool/Makefile | 2 +- net/ethtool/common.c | 19 +++++++ net/ethtool/common.h | 1 + net/ethtool/debug.c | 80 ++++++++++++++++++++++++++++ net/ethtool/netlink.c | 8 +++ net/ethtool/netlink.h | 1 + net/ethtool/strset.c | 5 ++ 11 files changed, 205 insertions(+), 17 deletions(-) create mode 100644 net/ethtool/debug.c (limited to 'include') diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst index c60afba69e3c..76a411d3102d 100644 --- a/Documentation/networking/ethtool-netlink.rst +++ b/Documentation/networking/ethtool-netlink.rst @@ -185,6 +185,7 @@ Userspace to kernel: ``ETHTOOL_MSG_LINKMODES_GET`` get link modes info ``ETHTOOL_MSG_LINKMODES_SET`` set link modes info ``ETHTOOL_MSG_LINKSTATE_GET`` get link state + ``ETHTOOL_MSG_DEBUG_GET`` get debugging settings ===================================== ================================ Kernel to userspace: @@ -196,6 +197,7 @@ Kernel to userspace: ``ETHTOOL_MSG_LINKMODES_GET_REPLY`` link modes info ``ETHTOOL_MSG_LINKMODES_NTF`` link modes notification ``ETHTOOL_MSG_LINKSTATE_GET_REPLY`` link state info + ``ETHTOOL_MSG_DEBUG_GET_REPLY`` debugging settings ===================================== ================================ ``GET`` requests are sent by userspace applications to retrieve device @@ -423,6 +425,36 @@ define their own handler. devices supporting the request). +DEBUG_GET +========= + +Requests debugging settings of a device. At the moment, only message mask is +provided. + +Request contents: + + ==================================== ====== ========================== + ``ETHTOOL_A_DEBUG_HEADER`` nested request header + ==================================== ====== ========================== + +Kernel response contents: + + ==================================== ====== ========================== + ``ETHTOOL_A_DEBUG_HEADER`` nested reply header + ``ETHTOOL_A_DEBUG_MSGMASK`` bitset message mask + ==================================== ====== ========================== + +The message mask (``ETHTOOL_A_DEBUG_MSGMASK``) is equal to message level as +provided by ``ETHTOOL_GMSGLVL`` and set by ``ETHTOOL_SMSGLVL`` in ioctl +interface. While it is called message level there for historical reasons, most +drivers and almost all newer drivers use it as a mask of enabled message +classes (represented by ``NETIF_MSG_*`` constants); therefore netlink +interface follows its actual use in practice. + +``DEBUG_GET`` allows dump requests (kernel returns reply messages for all +devices supporting the request). + + Request translation =================== @@ -441,7 +473,7 @@ have their netlink replacement yet. 
``ETHTOOL_GREGS`` n/a ``ETHTOOL_GWOL`` n/a ``ETHTOOL_SWOL`` n/a - ``ETHTOOL_GMSGLVL`` n/a + ``ETHTOOL_GMSGLVL`` ``ETHTOOL_MSG_DEBUG_GET`` ``ETHTOOL_SMSGLVL`` n/a ``ETHTOOL_NWAY_RST`` n/a ``ETHTOOL_GLINK`` ``ETHTOOL_MSG_LINKSTATE_GET`` diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 4626188a754b..a9c6b5c61d27 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -3913,22 +3913,48 @@ void netif_device_attach(struct net_device *dev); */ enum { - NETIF_MSG_DRV = 0x0001, - NETIF_MSG_PROBE = 0x0002, - NETIF_MSG_LINK = 0x0004, - NETIF_MSG_TIMER = 0x0008, - NETIF_MSG_IFDOWN = 0x0010, - NETIF_MSG_IFUP = 0x0020, - NETIF_MSG_RX_ERR = 0x0040, - NETIF_MSG_TX_ERR = 0x0080, - NETIF_MSG_TX_QUEUED = 0x0100, - NETIF_MSG_INTR = 0x0200, - NETIF_MSG_TX_DONE = 0x0400, - NETIF_MSG_RX_STATUS = 0x0800, - NETIF_MSG_PKTDATA = 0x1000, - NETIF_MSG_HW = 0x2000, - NETIF_MSG_WOL = 0x4000, + NETIF_MSG_DRV_BIT, + NETIF_MSG_PROBE_BIT, + NETIF_MSG_LINK_BIT, + NETIF_MSG_TIMER_BIT, + NETIF_MSG_IFDOWN_BIT, + NETIF_MSG_IFUP_BIT, + NETIF_MSG_RX_ERR_BIT, + NETIF_MSG_TX_ERR_BIT, + NETIF_MSG_TX_QUEUED_BIT, + NETIF_MSG_INTR_BIT, + NETIF_MSG_TX_DONE_BIT, + NETIF_MSG_RX_STATUS_BIT, + NETIF_MSG_PKTDATA_BIT, + NETIF_MSG_HW_BIT, + NETIF_MSG_WOL_BIT, + + /* When you add a new bit above, update netif_msg_class_names array + * in net/ethtool/common.c + */ + NETIF_MSG_CLASS_COUNT, }; +/* Both ethtool_ops interface and internal driver implementation use u32 */ +static_assert(NETIF_MSG_CLASS_COUNT <= 32); + +#define __NETIF_MSG_BIT(bit) ((u32)1 << (bit)) +#define __NETIF_MSG(name) __NETIF_MSG_BIT(NETIF_MSG_ ## name ## _BIT) + +#define NETIF_MSG_DRV __NETIF_MSG(DRV) +#define NETIF_MSG_PROBE __NETIF_MSG(PROBE) +#define NETIF_MSG_LINK __NETIF_MSG(LINK) +#define NETIF_MSG_TIMER __NETIF_MSG(TIMER) +#define NETIF_MSG_IFDOWN __NETIF_MSG(IFDOWN) +#define NETIF_MSG_IFUP __NETIF_MSG(IFUP) +#define NETIF_MSG_RX_ERR __NETIF_MSG(RX_ERR) +#define NETIF_MSG_TX_ERR __NETIF_MSG(TX_ERR) +#define NETIF_MSG_TX_QUEUED __NETIF_MSG(TX_QUEUED) +#define NETIF_MSG_INTR __NETIF_MSG(INTR) +#define NETIF_MSG_TX_DONE __NETIF_MSG(TX_DONE) +#define NETIF_MSG_RX_STATUS __NETIF_MSG(RX_STATUS) +#define NETIF_MSG_PKTDATA __NETIF_MSG(PKTDATA) +#define NETIF_MSG_HW __NETIF_MSG(HW) +#define NETIF_MSG_WOL __NETIF_MSG(WOL) #define netif_msg_drv(p) ((p)->msg_enable & NETIF_MSG_DRV) #define netif_msg_probe(p) ((p)->msg_enable & NETIF_MSG_PROBE) diff --git a/include/uapi/linux/ethtool.h b/include/uapi/linux/ethtool.h index 116bcbf09c74..456fb2aa0fad 100644 --- a/include/uapi/linux/ethtool.h +++ b/include/uapi/linux/ethtool.h @@ -594,6 +594,7 @@ struct ethtool_pauseparam { * @ETH_SS_PHY_STATS: Statistic names, for use with %ETHTOOL_GPHYSTATS * @ETH_SS_PHY_TUNABLES: PHY tunable names * @ETH_SS_LINK_MODES: link mode names + * @ETH_SS_MSG_CLASSES: debug message class names */ enum ethtool_stringset { ETH_SS_TEST = 0, @@ -606,6 +607,7 @@ enum ethtool_stringset { ETH_SS_PHY_STATS, ETH_SS_PHY_TUNABLES, ETH_SS_LINK_MODES, + ETH_SS_MSG_CLASSES, /* add new constants above here */ ETH_SS_COUNT diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index 02f82f42a889..0d39e04567cb 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -20,6 +20,7 @@ enum { ETHTOOL_MSG_LINKMODES_GET, ETHTOOL_MSG_LINKMODES_SET, ETHTOOL_MSG_LINKSTATE_GET, + ETHTOOL_MSG_DEBUG_GET, /* add new constants above here */ __ETHTOOL_MSG_USER_CNT, @@ -35,6 +36,7 @@ enum { ETHTOOL_MSG_LINKMODES_GET_REPLY, 
ETHTOOL_MSG_LINKMODES_NTF, ETHTOOL_MSG_LINKSTATE_GET_REPLY, + ETHTOOL_MSG_DEBUG_GET_REPLY, /* add new constants above here */ __ETHTOOL_MSG_KERNEL_CNT, @@ -195,6 +197,18 @@ enum { ETHTOOL_A_LINKSTATE_MAX = __ETHTOOL_A_LINKSTATE_CNT - 1 }; +/* DEBUG */ + +enum { + ETHTOOL_A_DEBUG_UNSPEC, + ETHTOOL_A_DEBUG_HEADER, /* nest - _A_HEADER_* */ + ETHTOOL_A_DEBUG_MSGMASK, /* bitset */ + + /* add new constants above here */ + __ETHTOOL_A_DEBUG_CNT, + ETHTOOL_A_DEBUG_MAX = __ETHTOOL_A_DEBUG_CNT - 1 +}; + /* generic netlink info */ #define ETHTOOL_GENL_NAME "ethtool" #define ETHTOOL_GENL_VERSION 1 diff --git a/net/ethtool/Makefile b/net/ethtool/Makefile index 9a1332fb0cc6..c120c820a4f5 100644 --- a/net/ethtool/Makefile +++ b/net/ethtool/Makefile @@ -5,4 +5,4 @@ obj-y += ioctl.o common.o obj-$(CONFIG_ETHTOOL_NETLINK) += ethtool_nl.o ethtool_nl-y := netlink.o bitset.o strset.o linkinfo.o linkmodes.o \ - linkstate.o + linkstate.o debug.o diff --git a/net/ethtool/common.c b/net/ethtool/common.c index c7b8956c3827..aa1183a65a76 100644 --- a/net/ethtool/common.c +++ b/net/ethtool/common.c @@ -171,6 +171,25 @@ const char link_mode_names[][ETH_GSTRING_LEN] = { }; static_assert(ARRAY_SIZE(link_mode_names) == __ETHTOOL_LINK_MODE_MASK_NBITS); +const char netif_msg_class_names[][ETH_GSTRING_LEN] = { + [NETIF_MSG_DRV_BIT] = "drv", + [NETIF_MSG_PROBE_BIT] = "probe", + [NETIF_MSG_LINK_BIT] = "link", + [NETIF_MSG_TIMER_BIT] = "timer", + [NETIF_MSG_IFDOWN_BIT] = "ifdown", + [NETIF_MSG_IFUP_BIT] = "ifup", + [NETIF_MSG_RX_ERR_BIT] = "rx_err", + [NETIF_MSG_TX_ERR_BIT] = "tx_err", + [NETIF_MSG_TX_QUEUED_BIT] = "tx_queued", + [NETIF_MSG_INTR_BIT] = "intr", + [NETIF_MSG_TX_DONE_BIT] = "tx_done", + [NETIF_MSG_RX_STATUS_BIT] = "rx_status", + [NETIF_MSG_PKTDATA_BIT] = "pktdata", + [NETIF_MSG_HW_BIT] = "hw", + [NETIF_MSG_WOL_BIT] = "wol", +}; +static_assert(ARRAY_SIZE(netif_msg_class_names) == NETIF_MSG_CLASS_COUNT); + /* return false if legacy contained non-0 deprecated fields * maxtxpkt/maxrxpkt. 
rest of ksettings always updated */ diff --git a/net/ethtool/common.h b/net/ethtool/common.h index 5c5f7dc90cd4..064c5c3aa990 100644 --- a/net/ethtool/common.h +++ b/net/ethtool/common.h @@ -19,6 +19,7 @@ tunable_strings[__ETHTOOL_TUNABLE_COUNT][ETH_GSTRING_LEN]; extern const char phy_tunable_strings[__ETHTOOL_PHY_TUNABLE_COUNT][ETH_GSTRING_LEN]; extern const char link_mode_names[][ETH_GSTRING_LEN]; +extern const char netif_msg_class_names[][ETH_GSTRING_LEN]; int __ethtool_get_link(struct net_device *dev); diff --git a/net/ethtool/debug.c b/net/ethtool/debug.c new file mode 100644 index 000000000000..cc121a23be5f --- /dev/null +++ b/net/ethtool/debug.c @@ -0,0 +1,80 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include "netlink.h" +#include "common.h" +#include "bitset.h" + +struct debug_req_info { + struct ethnl_req_info base; +}; + +struct debug_reply_data { + struct ethnl_reply_data base; + u32 msg_mask; +}; + +#define DEBUG_REPDATA(__reply_base) \ + container_of(__reply_base, struct debug_reply_data, base) + +static const struct nla_policy +debug_get_policy[ETHTOOL_A_DEBUG_MAX + 1] = { + [ETHTOOL_A_DEBUG_UNSPEC] = { .type = NLA_REJECT }, + [ETHTOOL_A_DEBUG_HEADER] = { .type = NLA_NESTED }, + [ETHTOOL_A_DEBUG_MSGMASK] = { .type = NLA_REJECT }, +}; + +static int debug_prepare_data(const struct ethnl_req_info *req_base, + struct ethnl_reply_data *reply_base, + struct genl_info *info) +{ + struct debug_reply_data *data = DEBUG_REPDATA(reply_base); + struct net_device *dev = reply_base->dev; + int ret; + + if (!dev->ethtool_ops->get_msglevel) + return -EOPNOTSUPP; + + ret = ethnl_ops_begin(dev); + if (ret < 0) + return ret; + data->msg_mask = dev->ethtool_ops->get_msglevel(dev); + ethnl_ops_complete(dev); + + return 0; +} + +static int debug_reply_size(const struct ethnl_req_info *req_base, + const struct ethnl_reply_data *reply_base) +{ + const struct debug_reply_data *data = DEBUG_REPDATA(reply_base); + bool compact = req_base->flags & ETHTOOL_FLAG_COMPACT_BITSETS; + + return ethnl_bitset32_size(&data->msg_mask, NULL, NETIF_MSG_CLASS_COUNT, + netif_msg_class_names, compact); +} + +static int debug_fill_reply(struct sk_buff *skb, + const struct ethnl_req_info *req_base, + const struct ethnl_reply_data *reply_base) +{ + const struct debug_reply_data *data = DEBUG_REPDATA(reply_base); + bool compact = req_base->flags & ETHTOOL_FLAG_COMPACT_BITSETS; + + return ethnl_put_bitset32(skb, ETHTOOL_A_DEBUG_MSGMASK, &data->msg_mask, + NULL, NETIF_MSG_CLASS_COUNT, + netif_msg_class_names, compact); +} + +const struct ethnl_request_ops ethnl_debug_request_ops = { + .request_cmd = ETHTOOL_MSG_DEBUG_GET, + .reply_cmd = ETHTOOL_MSG_DEBUG_GET_REPLY, + .hdr_attr = ETHTOOL_A_DEBUG_HEADER, + .max_attr = ETHTOOL_A_DEBUG_MAX, + .req_info_size = sizeof(struct debug_req_info), + .reply_data_size = sizeof(struct debug_reply_data), + .request_policy = debug_get_policy, + + .prepare_data = debug_prepare_data, + .reply_size = debug_reply_size, + .fill_reply = debug_fill_reply, +}; diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index 0af43bbdb9b2..bdaf583e392b 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -213,6 +213,7 @@ ethnl_default_requests[__ETHTOOL_MSG_USER_CNT] = { [ETHTOOL_MSG_LINKINFO_GET] = &ethnl_linkinfo_request_ops, [ETHTOOL_MSG_LINKMODES_GET] = &ethnl_linkmodes_request_ops, [ETHTOOL_MSG_LINKSTATE_GET] = &ethnl_linkstate_request_ops, + [ETHTOOL_MSG_DEBUG_GET] = &ethnl_debug_request_ops, }; static struct ethnl_dump_ctx *ethnl_dump_context(struct netlink_callback *cb) @@ -664,6
+665,13 @@ static const struct genl_ops ethtool_genl_ops[] = { .dumpit = ethnl_default_dumpit, .done = ethnl_default_done, }, + { + .cmd = ETHTOOL_MSG_DEBUG_GET, + .doit = ethnl_default_doit, + .start = ethnl_default_start, + .dumpit = ethnl_default_dumpit, + .done = ethnl_default_done, + }, }; static const struct genl_multicast_group ethtool_nl_mcgrps[] = { diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h index da9d6521a4eb..9bd8ef671501 100644 --- a/net/ethtool/netlink.h +++ b/net/ethtool/netlink.h @@ -334,6 +334,7 @@ extern const struct ethnl_request_ops ethnl_strset_request_ops; extern const struct ethnl_request_ops ethnl_linkinfo_request_ops; extern const struct ethnl_request_ops ethnl_linkmodes_request_ops; extern const struct ethnl_request_ops ethnl_linkstate_request_ops; +extern const struct ethnl_request_ops ethnl_debug_request_ops; int ethnl_set_linkinfo(struct sk_buff *skb, struct genl_info *info); int ethnl_set_linkmodes(struct sk_buff *skb, struct genl_info *info); diff --git a/net/ethtool/strset.c b/net/ethtool/strset.c index 948d967f1eca..7a45c25355b8 100644 --- a/net/ethtool/strset.c +++ b/net/ethtool/strset.c @@ -50,6 +50,11 @@ static const struct strset_info info_template[] = { .count = __ETHTOOL_LINK_MODE_MASK_NBITS, .strings = link_mode_names, }, + [ETH_SS_MSG_CLASSES] = { + .per_dev = false, + .count = NETIF_MSG_CLASS_COUNT, + .strings = netif_msg_class_names, + }, }; struct strset_req_info { -- cgit v1.2.3-71-gd317 From e54d04e3afead22d8e7d6edaaac487a1205bac39 Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Sun, 26 Jan 2020 23:11:07 +0100 Subject: ethtool: set message mask with DEBUG_SET request Implement DEBUG_SET netlink request to set debugging settings for a device. At the moment, only message mask corresponding to message level as set by ETHTOOL_SMSGLVL ioctl request can be set. (It is called message level in ioctl interface but almost all drivers interpret it as a bit mask.) Signed-off-by: Michal Kubecek Reviewed-by: Andrew Lunn Signed-off-by: David S. Miller --- Documentation/networking/ethtool-netlink.rst | 20 ++++++++++- include/uapi/linux/ethtool_netlink.h | 1 + net/ethtool/debug.c | 53 ++++++++++++++++++++++++++++ net/ethtool/netlink.c | 5 +++ net/ethtool/netlink.h | 1 + 5 files changed, 79 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst index 76a411d3102d..58473c05319f 100644 --- a/Documentation/networking/ethtool-netlink.rst +++ b/Documentation/networking/ethtool-netlink.rst @@ -186,6 +186,7 @@ Userspace to kernel: ``ETHTOOL_MSG_LINKMODES_SET`` set link modes info ``ETHTOOL_MSG_LINKSTATE_GET`` get link state ``ETHTOOL_MSG_DEBUG_GET`` get debugging settings + ``ETHTOOL_MSG_DEBUG_SET`` set debugging settings ===================================== ================================ Kernel to userspace: @@ -455,6 +456,23 @@ interface follows its actual use in practice. devices supporting the request). +DEBUG_SET +========= + +Set or update debugging settings of a device. At the moment, only message mask +is supported. + +Request contents: + + ==================================== ====== ========================== + ``ETHTOOL_A_DEBUG_HEADER`` nested request header + ``ETHTOOL_A_DEBUG_MSGMASK`` bitset message mask + ==================================== ====== ========================== + +``ETHTOOL_A_DEBUG_MSGMASK`` bit set allows setting or modifying mask of +enabled debugging message types for the device. 
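[Editor's aside, not part of the patch above: a minimal sketch of the driver side that DEBUG_GET and DEBUG_SET ultimately talk to. The driver and its private structure ("example_priv") are hypothetical; netdev_priv(), the get_msglevel/set_msglevel ethtool_ops hooks and the netif_msg_*() helpers are the existing kernel interfaces.]

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    /* hypothetical driver private data carrying the NETIF_MSG_* mask */
    struct example_priv {
            u32 msg_enable;
    };

    /* reported to userspace as the ETHTOOL_A_DEBUG_MSGMASK bitset */
    static u32 example_get_msglevel(struct net_device *dev)
    {
            struct example_priv *priv = netdev_priv(dev);

            return priv->msg_enable;
    }

    /* receives the updated mask when userspace sends DEBUG_SET */
    static void example_set_msglevel(struct net_device *dev, u32 value)
    {
            struct example_priv *priv = netdev_priv(dev);

            priv->msg_enable = value;
    }

    /* driver diagnostics are then gated on individual message classes */
    static void example_report_link(struct net_device *dev, bool up)
    {
            struct example_priv *priv = netdev_priv(dev);

            if (netif_msg_link(priv))
                    netdev_info(dev, "link is %s\n", up ? "up" : "down");
    }

These two hooks, wired into the driver's struct ethtool_ops as .get_msglevel/.set_msglevel, are exactly what debug_prepare_data() and ethnl_set_debug() above call. End of aside; the patch continues below.]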
+ + Request translation =================== @@ -474,7 +492,7 @@ have their netlink replacement yet. ``ETHTOOL_GWOL`` n/a ``ETHTOOL_SWOL`` n/a ``ETHTOOL_GMSGLVL`` ``ETHTOOL_MSG_DEBUG_GET`` - ``ETHTOOL_SMSGLVL`` n/a + ``ETHTOOL_SMSGLVL`` ``ETHTOOL_MSG_DEBUG_SET`` ``ETHTOOL_NWAY_RST`` n/a ``ETHTOOL_GLINK`` ``ETHTOOL_MSG_LINKSTATE_GET`` ``ETHTOOL_GEEPROM`` n/a diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index 0d39e04567cb..8f0b6fd277d5 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -21,6 +21,7 @@ enum { ETHTOOL_MSG_LINKMODES_SET, ETHTOOL_MSG_LINKSTATE_GET, ETHTOOL_MSG_DEBUG_GET, + ETHTOOL_MSG_DEBUG_SET, /* add new constants above here */ __ETHTOOL_MSG_USER_CNT, diff --git a/net/ethtool/debug.c b/net/ethtool/debug.c index cc121a23be5f..6fc3ef8113c0 100644 --- a/net/ethtool/debug.c +++ b/net/ethtool/debug.c @@ -78,3 +78,56 @@ const struct ethnl_request_ops ethnl_debug_request_ops = { .reply_size = debug_reply_size, .fill_reply = debug_fill_reply, }; + +/* DEBUG_SET */ + +static const struct nla_policy +debug_set_policy[ETHTOOL_A_DEBUG_MAX + 1] = { + [ETHTOOL_A_DEBUG_UNSPEC] = { .type = NLA_REJECT }, + [ETHTOOL_A_DEBUG_HEADER] = { .type = NLA_NESTED }, + [ETHTOOL_A_DEBUG_MSGMASK] = { .type = NLA_NESTED }, +}; + +int ethnl_set_debug(struct sk_buff *skb, struct genl_info *info) +{ + struct nlattr *tb[ETHTOOL_A_DEBUG_MAX + 1]; + struct ethnl_req_info req_info = {}; + struct net_device *dev; + bool mod = false; + u32 msg_mask; + int ret; + + ret = nlmsg_parse(info->nlhdr, GENL_HDRLEN, tb, + ETHTOOL_A_DEBUG_MAX, debug_set_policy, + info->extack); + if (ret < 0) + return ret; + ret = ethnl_parse_header(&req_info, tb[ETHTOOL_A_DEBUG_HEADER], + genl_info_net(info), info->extack, true); + if (ret < 0) + return ret; + dev = req_info.dev; + if (!dev->ethtool_ops->get_msglevel || !dev->ethtool_ops->set_msglevel) + return -EOPNOTSUPP; + + rtnl_lock(); + ret = ethnl_ops_begin(dev); + if (ret < 0) + goto out_rtnl; + + msg_mask = dev->ethtool_ops->get_msglevel(dev); + ret = ethnl_update_bitset32(&msg_mask, NETIF_MSG_CLASS_COUNT, + tb[ETHTOOL_A_DEBUG_MSGMASK], + netif_msg_class_names, info->extack, &mod); + if (ret < 0 || !mod) + goto out_ops; + + dev->ethtool_ops->set_msglevel(dev, msg_mask); + +out_ops: + ethnl_ops_complete(dev); +out_rtnl: + rtnl_unlock(); + dev_put(dev); + return ret; +} diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index bdaf583e392b..bcdc42e0bd14 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -672,6 +672,11 @@ static const struct genl_ops ethtool_genl_ops[] = { .dumpit = ethnl_default_dumpit, .done = ethnl_default_done, }, + { + .cmd = ETHTOOL_MSG_DEBUG_SET, + .flags = GENL_UNS_ADMIN_PERM, + .doit = ethnl_set_debug, + }, }; static const struct genl_multicast_group ethtool_nl_mcgrps[] = { diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h index 9bd8ef671501..772723e536e8 100644 --- a/net/ethtool/netlink.h +++ b/net/ethtool/netlink.h @@ -338,5 +338,6 @@ extern const struct ethnl_request_ops ethnl_debug_request_ops; int ethnl_set_linkinfo(struct sk_buff *skb, struct genl_info *info); int ethnl_set_linkmodes(struct sk_buff *skb, struct genl_info *info); +int ethnl_set_debug(struct sk_buff *skb, struct genl_info *info); #endif /* _NET_ETHTOOL_NETLINK_H */ -- cgit v1.2.3-71-gd317 From 0bda7af39d2ba6ef83bd18e26bcb027f1630004e Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Sun, 26 Jan 2020 23:11:10 +0100 Subject: ethtool: add DEBUG_NTF notification Send 
ETHTOOL_MSG_DEBUG_NTF notification message whenever the debugging message mask for a device is modified using an ETHTOOL_MSG_DEBUG_SET netlink message or an ETHTOOL_SMSGLVL ioctl request. The notification message has the same format as a reply to a DEBUG_GET request. As with other ethtool notifications, netlink requests only trigger the notification if the mask is actually changed, while an ioctl request triggers it whenever the request results in calling the ethtool_ops handler. Signed-off-by: Michal Kubecek Reviewed-by: Andrew Lunn Signed-off-by: David S. Miller --- Documentation/networking/ethtool-netlink.rst | 1 + include/uapi/linux/ethtool_netlink.h | 1 + net/ethtool/debug.c | 1 + net/ethtool/ioctl.c | 2 ++ net/ethtool/netlink.c | 2 ++ 5 files changed, 7 insertions(+) (limited to 'include') diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst index 58473c05319f..2263f885d18f 100644 --- a/Documentation/networking/ethtool-netlink.rst +++ b/Documentation/networking/ethtool-netlink.rst @@ -199,6 +199,7 @@ Kernel to userspace: ``ETHTOOL_MSG_LINKMODES_NTF`` link modes notification ``ETHTOOL_MSG_LINKSTATE_GET_REPLY`` link state info ``ETHTOOL_MSG_DEBUG_GET_REPLY`` debugging settings + ``ETHTOOL_MSG_DEBUG_NTF`` debugging settings notification ===================================== ================================ ``GET`` requests are sent by userspace applications to retrieve device diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index 8f0b6fd277d5..67a06b94bf28 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -38,6 +38,7 @@ enum { ETHTOOL_MSG_LINKMODES_NTF, ETHTOOL_MSG_LINKSTATE_GET_REPLY, ETHTOOL_MSG_DEBUG_GET_REPLY, + ETHTOOL_MSG_DEBUG_NTF, /* add new constants above here */ __ETHTOOL_MSG_KERNEL_CNT, diff --git a/net/ethtool/debug.c b/net/ethtool/debug.c index 6fc3ef8113c0..aaef4843e6ba 100644 --- a/net/ethtool/debug.c +++ b/net/ethtool/debug.c @@ -123,6 +123,7 @@ int ethnl_set_debug(struct sk_buff *skb, struct genl_info *info) goto out_ops; dev->ethtool_ops->set_msglevel(dev, msg_mask); + ethtool_notify(dev, ETHTOOL_MSG_DEBUG_NTF, NULL); out_ops: ethnl_ops_complete(dev); diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c index 182bffbffa78..46e0b31782fc 100644 --- a/net/ethtool/ioctl.c +++ b/net/ethtool/ioctl.c @@ -2545,6 +2545,8 @@ int dev_ethtool(struct net *net, struct ifreq *ifr) case ETHTOOL_SMSGLVL: rc = ethtool_set_value_void(dev, useraddr, dev->ethtool_ops->set_msglevel); + if (!rc) + ethtool_notify(dev, ETHTOOL_MSG_DEBUG_NTF, NULL); break; case ETHTOOL_GEEE: rc = ethtool_get_eee(dev, useraddr); diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index bcdc42e0bd14..9a0a6c2f8dbb 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -524,6 +524,7 @@ static const struct ethnl_request_ops * ethnl_default_notify_ops[ETHTOOL_MSG_KERNEL_MAX + 1] = { [ETHTOOL_MSG_LINKINFO_NTF] = &ethnl_linkinfo_request_ops, [ETHTOOL_MSG_LINKMODES_NTF] = &ethnl_linkmodes_request_ops, + [ETHTOOL_MSG_DEBUG_NTF] = &ethnl_debug_request_ops, }; /* default notification handler */ @@ -607,6 +608,7 @@ typedef void (*ethnl_notify_handler_t)(struct net_device *dev, unsigned int cmd, static const ethnl_notify_handler_t ethnl_notify_handlers[] = { [ETHTOOL_MSG_LINKINFO_NTF] = ethnl_default_notify, [ETHTOOL_MSG_LINKMODES_NTF] = ethnl_default_notify, + [ETHTOOL_MSG_DEBUG_NTF] = ethnl_default_notify, }; void ethtool_notify(struct net_device *dev, unsigned int cmd, const void *data) -- cgit v1.2.3-71-gd317
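[Editor's aside between patches: a hedged sketch of how another in-kernel code path could reuse the notification added above; the helper "example_force_msglevel" is hypothetical, while ethtool_notify(), ETHTOOL_MSG_DEBUG_NTF and the default notification handler are the interfaces introduced by the patches in this series.]

    #include <linux/ethtool_netlink.h>
    #include <linux/netdevice.h>
    #include <linux/rtnetlink.h>

    /* hypothetical: change the message mask from inside the kernel and
     * keep netlink listeners in sync, mirroring what the ioctl path does */
    static void example_force_msglevel(struct net_device *dev, u32 mask)
    {
            ASSERT_RTNL();

            if (!dev->ethtool_ops || !dev->ethtool_ops->set_msglevel)
                    return;
            dev->ethtool_ops->set_msglevel(dev, mask);
            /* NULL data: the default handler builds a DEBUG_GET-style reply */
            ethtool_notify(dev, ETHTOOL_MSG_DEBUG_NTF, NULL);
    }

[End of aside; the next patch follows.]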
From 51ea22b04ea0c210f9ce87b8a600965dbe476bc2 Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Sun, 26 Jan 2020 23:11:13 +0100 Subject: ethtool: provide WoL settings with WOL_GET request Implement WOL_GET request to get wake-on-lan settings for a device, traditionally available via ETHTOOL_GWOL ioctl request. As part of the implementation, provide symbolic names for wake-on-lan modes as ETH_SS_WOL_MODES string set. Signed-off-by: Michal Kubecek Reviewed-by: Andrew Lunn Signed-off-by: David S. Miller --- Documentation/networking/ethtool-netlink.rst | 30 ++++++++- include/uapi/linux/ethtool.h | 4 ++ include/uapi/linux/ethtool_netlink.h | 15 +++++ net/ethtool/Makefile | 2 +- net/ethtool/common.c | 12 ++++ net/ethtool/common.h | 1 + net/ethtool/netlink.c | 9 +++ net/ethtool/netlink.h | 1 + net/ethtool/strset.c | 5 ++ net/ethtool/wol.c | 99 ++++++++++++++++++++++++++++ 10 files changed, 176 insertions(+), 2 deletions(-) create mode 100644 net/ethtool/wol.c (limited to 'include') diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst index 2263f885d18f..5fd85e3ea96e 100644 --- a/Documentation/networking/ethtool-netlink.rst +++ b/Documentation/networking/ethtool-netlink.rst @@ -187,6 +187,7 @@ Userspace to kernel: ``ETHTOOL_MSG_LINKSTATE_GET`` get link state ``ETHTOOL_MSG_DEBUG_GET`` get debugging settings ``ETHTOOL_MSG_DEBUG_SET`` set debugging settings + ``ETHTOOL_MSG_WOL_GET`` get wake-on-lan settings ===================================== ================================ Kernel to userspace: @@ -200,6 +201,7 @@ Kernel to userspace: ``ETHTOOL_MSG_LINKSTATE_GET_REPLY`` link state info ``ETHTOOL_MSG_DEBUG_GET_REPLY`` debugging settings ``ETHTOOL_MSG_DEBUG_NTF`` debugging settings notification + ``ETHTOOL_MSG_WOL_GET_REPLY`` wake-on-lan settings ===================================== ================================ ``GET`` requests are sent by userspace applications to retrieve device @@ -474,6 +476,32 @@ Request contents: enabled debugging message types for the device. +WOL_GET +======= + +Query device wake-on-lan settings. Unlike most "GET" type requests, +``ETHTOOL_MSG_WOL_GET`` requires (netns) ``CAP_NET_ADMIN`` privileges as it +(potentially) provides SecureOn(tm) password which is confidential. + +Request contents: + + ==================================== ====== ========================== + ``ETHTOOL_A_WOL_HEADER`` nested request header + ==================================== ====== ========================== + +Kernel response contents: + + ==================================== ====== ========================== + ``ETHTOOL_A_WOL_HEADER`` nested reply header + ``ETHTOOL_A_WOL_MODES`` bitset mask of enabled WoL modes + ``ETHTOOL_A_WOL_SOPASS`` binary SecureOn(tm) password + ==================================== ====== ========================== + +In reply, ``ETHTOOL_A_WOL_MODES`` mask consists of modes supported by the +device, value of modes which are enabled. ``ETHTOOL_A_WOL_SOPASS`` is only +included in reply if ``WAKE_MAGICSECURE`` mode is supported. + + Request translation =================== @@ -490,7 +518,7 @@ have their netlink replacement yet.
``ETHTOOL_MSG_LINKMODES_SET`` ``ETHTOOL_GDRVINFO`` n/a ``ETHTOOL_GREGS`` n/a - ``ETHTOOL_GWOL`` n/a + ``ETHTOOL_GWOL`` ``ETHTOOL_MSG_WOL_GET`` ``ETHTOOL_SWOL`` n/a ``ETHTOOL_GMSGLVL`` ``ETHTOOL_MSG_DEBUG_GET`` ``ETHTOOL_SMSGLVL`` ``ETHTOOL_MSG_DEBUG_SET`` diff --git a/include/uapi/linux/ethtool.h b/include/uapi/linux/ethtool.h index 456fb2aa0fad..4295ebfa2f91 100644 --- a/include/uapi/linux/ethtool.h +++ b/include/uapi/linux/ethtool.h @@ -595,6 +595,7 @@ struct ethtool_pauseparam { * @ETH_SS_PHY_TUNABLES: PHY tunable names * @ETH_SS_LINK_MODES: link mode names * @ETH_SS_MSG_CLASSES: debug message class names + * @ETH_SS_WOL_MODES: wake-on-lan modes */ enum ethtool_stringset { ETH_SS_TEST = 0, @@ -608,6 +609,7 @@ enum ethtool_stringset { ETH_SS_PHY_TUNABLES, ETH_SS_LINK_MODES, ETH_SS_MSG_CLASSES, + ETH_SS_WOL_MODES, /* add new constants above here */ ETH_SS_COUNT @@ -1695,6 +1697,8 @@ static inline int ethtool_validate_duplex(__u8 duplex) #define WAKE_MAGICSECURE (1 << 6) /* only meaningful if WAKE_MAGIC */ #define WAKE_FILTER (1 << 7) +#define WOL_MODE_COUNT 8 + /* L2-L4 network traffic flow types */ #define TCP_V4_FLOW 0x01 /* hash or spec (tcp_ip4_spec) */ #define UDP_V4_FLOW 0x02 /* hash or spec (udp_ip4_spec) */ diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index 67a06b94bf28..dcc5c32dc018 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -22,6 +22,7 @@ enum { ETHTOOL_MSG_LINKSTATE_GET, ETHTOOL_MSG_DEBUG_GET, ETHTOOL_MSG_DEBUG_SET, + ETHTOOL_MSG_WOL_GET, /* add new constants above here */ __ETHTOOL_MSG_USER_CNT, @@ -39,6 +40,7 @@ enum { ETHTOOL_MSG_LINKSTATE_GET_REPLY, ETHTOOL_MSG_DEBUG_GET_REPLY, ETHTOOL_MSG_DEBUG_NTF, + ETHTOOL_MSG_WOL_GET_REPLY, /* add new constants above here */ __ETHTOOL_MSG_KERNEL_CNT, @@ -211,6 +213,19 @@ enum { ETHTOOL_A_DEBUG_MAX = __ETHTOOL_A_DEBUG_CNT - 1 }; +/* WOL */ + +enum { + ETHTOOL_A_WOL_UNSPEC, + ETHTOOL_A_WOL_HEADER, /* nest - _A_HEADER_* */ + ETHTOOL_A_WOL_MODES, /* bitset */ + ETHTOOL_A_WOL_SOPASS, /* binary */ + + /* add new constants above here */ + __ETHTOOL_A_WOL_CNT, + ETHTOOL_A_WOL_MAX = __ETHTOOL_A_WOL_CNT - 1 +}; + /* generic netlink info */ #define ETHTOOL_GENL_NAME "ethtool" #define ETHTOOL_GENL_VERSION 1 diff --git a/net/ethtool/Makefile b/net/ethtool/Makefile index c120c820a4f5..424545a4aaec 100644 --- a/net/ethtool/Makefile +++ b/net/ethtool/Makefile @@ -5,4 +5,4 @@ obj-y += ioctl.o common.o obj-$(CONFIG_ETHTOOL_NETLINK) += ethtool_nl.o ethtool_nl-y := netlink.o bitset.o strset.o linkinfo.o linkmodes.o \ - linkstate.o debug.o + linkstate.o debug.o wol.o diff --git a/net/ethtool/common.c b/net/ethtool/common.c index aa1183a65a76..636ec6d5110e 100644 --- a/net/ethtool/common.c +++ b/net/ethtool/common.c @@ -190,6 +190,18 @@ const char netif_msg_class_names[][ETH_GSTRING_LEN] = { }; static_assert(ARRAY_SIZE(netif_msg_class_names) == NETIF_MSG_CLASS_COUNT); +const char wol_mode_names[][ETH_GSTRING_LEN] = { + [const_ilog2(WAKE_PHY)] = "phy", + [const_ilog2(WAKE_UCAST)] = "ucast", + [const_ilog2(WAKE_MCAST)] = "mcast", + [const_ilog2(WAKE_BCAST)] = "bcast", + [const_ilog2(WAKE_ARP)] = "arp", + [const_ilog2(WAKE_MAGIC)] = "magic", + [const_ilog2(WAKE_MAGICSECURE)] = "magicsecure", + [const_ilog2(WAKE_FILTER)] = "filter", +}; +static_assert(ARRAY_SIZE(wol_mode_names) == WOL_MODE_COUNT); + /* return false if legacy contained non-0 deprecated fields * maxtxpkt/maxrxpkt. 
rest of ksettings always updated */ diff --git a/net/ethtool/common.h b/net/ethtool/common.h index 064c5c3aa990..40ba74e0b9bb 100644 --- a/net/ethtool/common.h +++ b/net/ethtool/common.h @@ -20,6 +20,7 @@ extern const char phy_tunable_strings[__ETHTOOL_PHY_TUNABLE_COUNT][ETH_GSTRING_LEN]; extern const char link_mode_names[][ETH_GSTRING_LEN]; extern const char netif_msg_class_names[][ETH_GSTRING_LEN]; +extern const char wol_mode_names[][ETH_GSTRING_LEN]; int __ethtool_get_link(struct net_device *dev); diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index 9a0a6c2f8dbb..eeb6d8594e1b 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -214,6 +214,7 @@ ethnl_default_requests[__ETHTOOL_MSG_USER_CNT] = { [ETHTOOL_MSG_LINKMODES_GET] = &ethnl_linkmodes_request_ops, [ETHTOOL_MSG_LINKSTATE_GET] = &ethnl_linkstate_request_ops, [ETHTOOL_MSG_DEBUG_GET] = &ethnl_debug_request_ops, + [ETHTOOL_MSG_WOL_GET] = &ethnl_wol_request_ops, }; static struct ethnl_dump_ctx *ethnl_dump_context(struct netlink_callback *cb) @@ -679,6 +680,14 @@ static const struct genl_ops ethtool_genl_ops[] = { .flags = GENL_UNS_ADMIN_PERM, .doit = ethnl_set_debug, }, + { + .cmd = ETHTOOL_MSG_WOL_GET, + .flags = GENL_UNS_ADMIN_PERM, + .doit = ethnl_default_doit, + .start = ethnl_default_start, + .dumpit = ethnl_default_dumpit, + .done = ethnl_default_done, + }, }; static const struct genl_multicast_group ethtool_nl_mcgrps[] = { diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h index 772723e536e8..9fcd6f87b396 100644 --- a/net/ethtool/netlink.h +++ b/net/ethtool/netlink.h @@ -335,6 +335,7 @@ extern const struct ethnl_request_ops ethnl_linkinfo_request_ops; extern const struct ethnl_request_ops ethnl_linkmodes_request_ops; extern const struct ethnl_request_ops ethnl_linkstate_request_ops; extern const struct ethnl_request_ops ethnl_debug_request_ops; +extern const struct ethnl_request_ops ethnl_wol_request_ops; int ethnl_set_linkinfo(struct sk_buff *skb, struct genl_info *info); int ethnl_set_linkmodes(struct sk_buff *skb, struct genl_info *info); diff --git a/net/ethtool/strset.c b/net/ethtool/strset.c index 7a45c25355b8..8e5911887b4c 100644 --- a/net/ethtool/strset.c +++ b/net/ethtool/strset.c @@ -55,6 +55,11 @@ static const struct strset_info info_template[] = { .count = NETIF_MSG_CLASS_COUNT, .strings = netif_msg_class_names, }, + [ETH_SS_WOL_MODES] = { + .per_dev = false, + .count = WOL_MODE_COUNT, + .strings = wol_mode_names, + }, }; struct strset_req_info { diff --git a/net/ethtool/wol.c b/net/ethtool/wol.c new file mode 100644 index 000000000000..7c9a1ef622ce --- /dev/null +++ b/net/ethtool/wol.c @@ -0,0 +1,99 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include "netlink.h" +#include "common.h" +#include "bitset.h" + +struct wol_req_info { + struct ethnl_req_info base; +}; + +struct wol_reply_data { + struct ethnl_reply_data base; + struct ethtool_wolinfo wol; + bool show_sopass; +}; + +#define WOL_REPDATA(__reply_base) \ + container_of(__reply_base, struct wol_reply_data, base) + +static const struct nla_policy +wol_get_policy[ETHTOOL_A_WOL_MAX + 1] = { + [ETHTOOL_A_WOL_UNSPEC] = { .type = NLA_REJECT }, + [ETHTOOL_A_WOL_HEADER] = { .type = NLA_NESTED }, + [ETHTOOL_A_WOL_MODES] = { .type = NLA_REJECT }, + [ETHTOOL_A_WOL_SOPASS] = { .type = NLA_REJECT }, +}; + +static int wol_prepare_data(const struct ethnl_req_info *req_base, + struct ethnl_reply_data *reply_base, + struct genl_info *info) +{ + struct wol_reply_data *data = WOL_REPDATA(reply_base); + struct net_device *dev = reply_base->dev; + int ret;
+ + if (!dev->ethtool_ops->get_wol) + return -EOPNOTSUPP; + + ret = ethnl_ops_begin(dev); + if (ret < 0) + return ret; + dev->ethtool_ops->get_wol(dev, &data->wol); + ethnl_ops_complete(dev); + data->show_sopass = data->wol.supported & WAKE_MAGICSECURE; + + return 0; +} + +static int wol_reply_size(const struct ethnl_req_info *req_base, + const struct ethnl_reply_data *reply_base) +{ + bool compact = req_base->flags & ETHTOOL_FLAG_COMPACT_BITSETS; + const struct wol_reply_data *data = WOL_REPDATA(reply_base); + int len; + + len = ethnl_bitset32_size(&data->wol.wolopts, &data->wol.supported, + WOL_MODE_COUNT, wol_mode_names, compact); + if (len < 0) + return len; + if (data->show_sopass) + len += nla_total_size(sizeof(data->wol.sopass)); + + return len; +} + +static int wol_fill_reply(struct sk_buff *skb, + const struct ethnl_req_info *req_base, + const struct ethnl_reply_data *reply_base) +{ + bool compact = req_base->flags & ETHTOOL_FLAG_COMPACT_BITSETS; + const struct wol_reply_data *data = WOL_REPDATA(reply_base); + int ret; + + ret = ethnl_put_bitset32(skb, ETHTOOL_A_WOL_MODES, &data->wol.wolopts, + &data->wol.supported, WOL_MODE_COUNT, + wol_mode_names, compact); + if (ret < 0) + return ret; + if (data->show_sopass && + nla_put(skb, ETHTOOL_A_WOL_SOPASS, sizeof(data->wol.sopass), + data->wol.sopass)) + return -EMSGSIZE; + + return 0; +} + +const struct ethnl_request_ops ethnl_wol_request_ops = { + .request_cmd = ETHTOOL_MSG_WOL_GET, + .reply_cmd = ETHTOOL_MSG_WOL_GET_REPLY, + .hdr_attr = ETHTOOL_A_WOL_HEADER, + .max_attr = ETHTOOL_A_WOL_MAX, + .req_info_size = sizeof(struct wol_req_info), + .reply_data_size = sizeof(struct wol_reply_data), + .request_policy = wol_get_policy, + + .prepare_data = wol_prepare_data, + .reply_size = wol_reply_size, + .fill_reply = wol_fill_reply, +}; -- cgit v1.2.3-71-gd317 From 8d425b19b305a77cb1ba67b84e7c1ca547de032f Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Sun, 26 Jan 2020 23:11:16 +0100 Subject: ethtool: set wake-on-lan settings with WOL_SET request Implement WOL_SET netlink request to set wake-on-lan settings. This is equivalent to ETHTOOL_SWOL ioctl request. Signed-off-by: Michal Kubecek Signed-off-by: David S. Miller --- Documentation/networking/ethtool-netlink.rst | 20 +++++++- include/uapi/linux/ethtool_netlink.h | 1 + net/ethtool/netlink.c | 5 ++ net/ethtool/netlink.h | 1 + net/ethtool/wol.c | 76 ++++++++++++++++++++++++++++ 5 files changed, 102 insertions(+), 1 deletion(-) (limited to 'include') diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst index 5fd85e3ea96e..f16f74bbb546 100644 --- a/Documentation/networking/ethtool-netlink.rst +++ b/Documentation/networking/ethtool-netlink.rst @@ -188,6 +188,7 @@ Userspace to kernel: ``ETHTOOL_MSG_DEBUG_GET`` get debugging settings ``ETHTOOL_MSG_DEBUG_SET`` set debugging settings ``ETHTOOL_MSG_WOL_GET`` get wake-on-lan settings + ``ETHTOOL_MSG_WOL_SET`` set wake-on-lan settings ===================================== ================================ Kernel to userspace: @@ -502,6 +503,23 @@ device, value of modes which are enabled. ``ETHTOOL_A_WOL_SOPASS`` is only included in reply if ``WAKE_MAGICSECURE`` mode is supported. +WOL_SET +======= + +Set or update wake-on-lan settings. 
+ +Request contents: + + ==================================== ====== ========================== + ``ETHTOOL_A_WOL_HEADER`` nested request header + ``ETHTOOL_A_WOL_MODES`` bitset enabled WoL modes + ``ETHTOOL_A_WOL_SOPASS`` binary SecureOn(tm) password + ==================================== ====== ========================== + +``ETHTOOL_A_WOL_SOPASS`` is only allowed for devices supporting +``WAKE_MAGICSECURE`` mode. + + Request translation =================== @@ -519,7 +537,7 @@ have their netlink replacement yet. ``ETHTOOL_GDRVINFO`` n/a ``ETHTOOL_GREGS`` n/a ``ETHTOOL_GWOL`` ``ETHTOOL_MSG_WOL_GET`` - ``ETHTOOL_SWOL`` n/a + ``ETHTOOL_SWOL`` ``ETHTOOL_MSG_WOL_SET`` ``ETHTOOL_GMSGLVL`` ``ETHTOOL_MSG_DEBUG_GET`` ``ETHTOOL_SMSGLVL`` ``ETHTOOL_MSG_DEBUG_SET`` ``ETHTOOL_NWAY_RST`` n/a diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index dcc5c32dc018..59de35695521 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -23,6 +23,7 @@ enum { ETHTOOL_MSG_DEBUG_GET, ETHTOOL_MSG_DEBUG_SET, ETHTOOL_MSG_WOL_GET, + ETHTOOL_MSG_WOL_SET, /* add new constants above here */ __ETHTOOL_MSG_USER_CNT, diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index eeb6d8594e1b..2c375f9095fe 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -688,6 +688,11 @@ static const struct genl_ops ethtool_genl_ops[] = { .dumpit = ethnl_default_dumpit, .done = ethnl_default_done, }, + { + .cmd = ETHTOOL_MSG_WOL_SET, + .flags = GENL_UNS_ADMIN_PERM, + .doit = ethnl_set_wol, + }, }; static const struct genl_multicast_group ethtool_nl_mcgrps[] = { diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h index 9fcd6f87b396..60efd87686ad 100644 --- a/net/ethtool/netlink.h +++ b/net/ethtool/netlink.h @@ -340,5 +340,6 @@ extern const struct ethnl_request_ops ethnl_wol_request_ops; int ethnl_set_linkinfo(struct sk_buff *skb, struct genl_info *info); int ethnl_set_linkmodes(struct sk_buff *skb, struct genl_info *info); int ethnl_set_debug(struct sk_buff *skb, struct genl_info *info); +int ethnl_set_wol(struct sk_buff *skb, struct genl_info *info); #endif /* _NET_ETHTOOL_NETLINK_H */ diff --git a/net/ethtool/wol.c b/net/ethtool/wol.c index 7c9a1ef622ce..a2724378fac4 100644 --- a/net/ethtool/wol.c +++ b/net/ethtool/wol.c @@ -97,3 +97,79 @@ const struct ethnl_request_ops ethnl_wol_request_ops = { .reply_size = wol_reply_size, .fill_reply = wol_fill_reply, }; + +/* WOL_SET */ + +static const struct nla_policy +wol_set_policy[ETHTOOL_A_WOL_MAX + 1] = { + [ETHTOOL_A_WOL_UNSPEC] = { .type = NLA_REJECT }, + [ETHTOOL_A_WOL_HEADER] = { .type = NLA_NESTED }, + [ETHTOOL_A_WOL_MODES] = { .type = NLA_NESTED }, + [ETHTOOL_A_WOL_SOPASS] = { .type = NLA_BINARY, + .len = SOPASS_MAX }, +}; + +int ethnl_set_wol(struct sk_buff *skb, struct genl_info *info) +{ + struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL }; + struct nlattr *tb[ETHTOOL_A_WOL_MAX + 1]; + struct ethnl_req_info req_info = {}; + struct net_device *dev; + bool mod = false; + int ret; + + ret = nlmsg_parse(info->nlhdr, GENL_HDRLEN, tb, ETHTOOL_A_WOL_MAX, + wol_set_policy, info->extack); + if (ret < 0) + return ret; + ret = ethnl_parse_header(&req_info, tb[ETHTOOL_A_WOL_HEADER], + genl_info_net(info), info->extack, true); + if (ret < 0) + return ret; + dev = req_info.dev; + if (!dev->ethtool_ops->get_wol || !dev->ethtool_ops->set_wol) + return -EOPNOTSUPP; + + rtnl_lock(); + ret = ethnl_ops_begin(dev); + if (ret < 0) + goto out_rtnl; + + dev->ethtool_ops->get_wol(dev, &wol); + ret = 
ethnl_update_bitset32(&wol.wolopts, WOL_MODE_COUNT, + tb[ETHTOOL_A_WOL_MODES], wol_mode_names, + info->extack, &mod); + if (ret < 0) + goto out_ops; + if (wol.wolopts & ~wol.supported) { + NL_SET_ERR_MSG_ATTR(info->extack, tb[ETHTOOL_A_WOL_MODES], + "cannot enable unsupported WoL mode"); + ret = -EINVAL; + goto out_ops; + } + if (tb[ETHTOOL_A_WOL_SOPASS]) { + if (!(wol.supported & WAKE_MAGICSECURE)) { + NL_SET_ERR_MSG_ATTR(info->extack, + tb[ETHTOOL_A_WOL_SOPASS], + "magicsecure not supported, cannot set password"); + ret = -EINVAL; + goto out_ops; + } + ethnl_update_binary(wol.sopass, sizeof(wol.sopass), + tb[ETHTOOL_A_WOL_SOPASS], &mod); + } + + if (!mod) + goto out_ops; + ret = dev->ethtool_ops->set_wol(dev, &wol); + if (ret) + goto out_ops; + dev->wol_enabled = !!wol.wolopts; + +out_ops: + ethnl_ops_complete(dev); +out_rtnl: + rtnl_unlock(); + dev_put(dev); + return ret; +} -- cgit v1.2.3-71-gd317 From 67bffa79231f15ba372007972563cc8aa5e48bfa Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Sun, 26 Jan 2020 23:11:19 +0100 Subject: ethtool: add WOL_NTF notification Send ETHTOOL_MSG_WOL_NTF notification whenever wake-on-lan settings of a device are modified using ETHTOOL_MSG_WOL_SET netlink message or ETHTOOL_SWOL ioctl request. As notifications can be received by anyone, do not include SecureOn(tm) password in notification messages. Signed-off-by: Michal Kubecek Signed-off-by: David S. Miller --- Documentation/networking/ethtool-netlink.rst | 5 +++-- include/uapi/linux/ethtool_netlink.h | 1 + net/ethtool/ioctl.c | 1 + net/ethtool/netlink.c | 2 ++ net/ethtool/wol.c | 4 +++- 5 files changed, 10 insertions(+), 3 deletions(-) (limited to 'include') diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst index f16f74bbb546..f1f868479ceb 100644 --- a/Documentation/networking/ethtool-netlink.rst +++ b/Documentation/networking/ethtool-netlink.rst @@ -193,7 +193,7 @@ Userspace to kernel: Kernel to userspace: - ===================================== ================================ + ===================================== ================================= ``ETHTOOL_MSG_STRSET_GET_REPLY`` string set contents ``ETHTOOL_MSG_LINKINFO_GET_REPLY`` link settings ``ETHTOOL_MSG_LINKINFO_NTF`` link settings notification @@ -203,7 +203,8 @@ Kernel to userspace: ``ETHTOOL_MSG_DEBUG_GET_REPLY`` debugging settings ``ETHTOOL_MSG_DEBUG_NTF`` debugging settings notification ``ETHTOOL_MSG_WOL_GET_REPLY`` wake-on-lan settings - ===================================== ================================ + ``ETHTOOL_MSG_WOL_NTF`` wake-on-lan settings notification + ===================================== ================================= ``GET`` requests are sent by userspace applications to retrieve device information. They usually do not contain any message specific attributes. 
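[Editor's aside, not part of the patch: the WOL_GET/WOL_SET/WOL_NTF plumbing in this series ends up in the driver's get_wol/set_wol hooks. A minimal hypothetical pair is sketched below; "example_priv" is invented, while struct ethtool_wolinfo (supported, wolopts, sopass), SOPASS_MAX and the WAKE_* flags are the real kernel interfaces.]

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>
    #include <linux/string.h>

    /* hypothetical driver state for wake-on-lan */
    struct example_priv {
            u32 wolopts;
            u8 sopass[SOPASS_MAX];
    };

    static void example_get_wol(struct net_device *dev,
                                struct ethtool_wolinfo *wol)
    {
            struct example_priv *priv = netdev_priv(dev);

            wol->supported = WAKE_MAGIC | WAKE_MAGICSECURE;
            wol->wolopts = priv->wolopts;
            memcpy(wol->sopass, priv->sopass, sizeof(wol->sopass));
    }

    static int example_set_wol(struct net_device *dev,
                               struct ethtool_wolinfo *wol)
    {
            struct example_priv *priv = netdev_priv(dev);

            /* ethnl_set_wol() already rejects bits outside ->supported */
            priv->wolopts = wol->wolopts;
            memcpy(priv->sopass, wol->sopass, sizeof(priv->sopass));
            return 0;
    }

The core then updates dev->wol_enabled and emits ETHTOOL_MSG_WOL_NTF, as the hunks below show. End of aside; the patch continues.]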
diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index 59de35695521..7e0b460f872c 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -42,6 +42,7 @@ enum { ETHTOOL_MSG_DEBUG_GET_REPLY, ETHTOOL_MSG_DEBUG_NTF, ETHTOOL_MSG_WOL_GET_REPLY, + ETHTOOL_MSG_WOL_NTF, /* add new constants above here */ __ETHTOOL_MSG_KERNEL_CNT, diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c index 46e0b31782fc..b88dd14e41c6 100644 --- a/net/ethtool/ioctl.c +++ b/net/ethtool/ioctl.c @@ -1316,6 +1316,7 @@ static int ethtool_set_wol(struct net_device *dev, char __user *useraddr) return ret; dev->wol_enabled = !!wol.wolopts; + ethtool_notify(dev, ETHTOOL_MSG_WOL_NTF, NULL); return 0; } diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index 2c375f9095fe..180c194fab07 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -526,6 +526,7 @@ ethnl_default_notify_ops[ETHTOOL_MSG_KERNEL_MAX + 1] = { [ETHTOOL_MSG_LINKINFO_NTF] = &ethnl_linkinfo_request_ops, [ETHTOOL_MSG_LINKMODES_NTF] = &ethnl_linkmodes_request_ops, [ETHTOOL_MSG_DEBUG_NTF] = &ethnl_debug_request_ops, + [ETHTOOL_MSG_WOL_NTF] = &ethnl_wol_request_ops, }; /* default notification handler */ @@ -610,6 +611,7 @@ static const ethnl_notify_handler_t ethnl_notify_handlers[] = { [ETHTOOL_MSG_LINKINFO_NTF] = ethnl_default_notify, [ETHTOOL_MSG_LINKMODES_NTF] = ethnl_default_notify, [ETHTOOL_MSG_DEBUG_NTF] = ethnl_default_notify, + [ETHTOOL_MSG_WOL_NTF] = ethnl_default_notify, }; void ethtool_notify(struct net_device *dev, unsigned int cmd, const void *data) diff --git a/net/ethtool/wol.c b/net/ethtool/wol.c index a2724378fac4..e1b8a65b64c4 100644 --- a/net/ethtool/wol.c +++ b/net/ethtool/wol.c @@ -41,7 +41,8 @@ static int wol_prepare_data(const struct ethnl_req_info *req_base, return ret; dev->ethtool_ops->get_wol(dev, &data->wol); ethnl_ops_complete(dev); - data->show_sopass = data->wol.supported & WAKE_MAGICSECURE; + /* do not include password in notifications */ + data->show_sopass = info && (data->wol.supported & WAKE_MAGICSECURE); return 0; } @@ -165,6 +166,7 @@ int ethnl_set_wol(struct sk_buff *skb, struct genl_info *info) if (ret) goto out_ops; dev->wol_enabled = !!wol.wolopts; + ethtool_notify(dev, ETHTOOL_MSG_WOL_NTF, NULL); out_ops: ethnl_ops_complete(dev); -- cgit v1.2.3-71-gd317 From 41c0d917d11edf9a34461f56e14f067aadb36101 Mon Sep 17 00:00:00 2001 From: Vasundhara Volam Date: Mon, 27 Jan 2020 04:56:25 -0500 Subject: devlink: add macro for "fw.roce" Add definition and documentation for the new generic info "fw.roce". v2: Remove board.nvm_cfg since fw.psid is similar. Cc: Jiri Pirko Cc: Jakub Kicinski Signed-off-by: Vasundhara Volam Signed-off-by: Michael Chan Signed-off-by: David S. Miller --- Documentation/networking/devlink/devlink-info.rst | 6 ++++++ include/net/devlink.h | 2 ++ 2 files changed, 8 insertions(+) (limited to 'include') diff --git a/Documentation/networking/devlink/devlink-info.rst b/Documentation/networking/devlink/devlink-info.rst index 0385f15028b1..70981dd1b981 100644 --- a/Documentation/networking/devlink/devlink-info.rst +++ b/Documentation/networking/devlink/devlink-info.rst @@ -92,3 +92,9 @@ fw.psid ------- Unique identifier of the firmware parameter set. + +fw.roce ------- + +RoCE firmware version which is responsible for handling RoCE management.
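[Editor's aside, not part of the patch: a hedged sketch of how a driver might report the new generic version from its devlink ->info_get() callback. The driver name and version string are hypothetical; devlink_info_version_running_put() and the DEVLINK_INFO_VERSION_GENERIC_FW_ROCE macro added below are the real interfaces.]

    #include <net/devlink.h>

    static int example_devlink_info_get(struct devlink *devlink,
                                        struct devlink_info_req *req,
                                        struct netlink_ext_ack *extack)
    {
            /* hypothetical RoCE firmware version queried from the device */
            const char *roce_fw_ver = "216.0.30.0";

            return devlink_info_version_running_put(
                    req, DEVLINK_INFO_VERSION_GENERIC_FW_ROCE, roce_fw_ver);
    }

[End of aside; the patch continues below.]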
diff --git a/include/net/devlink.h b/include/net/devlink.h index 5e46c24bb6e6..ce5cea428fdc 100644 --- a/include/net/devlink.h +++ b/include/net/devlink.h @@ -487,6 +487,8 @@ enum devlink_param_generic_id { #define DEVLINK_INFO_VERSION_GENERIC_FW_NCSI "fw.ncsi" /* FW parameter set id */ #define DEVLINK_INFO_VERSION_GENERIC_FW_PSID "fw.psid" +/* RoCE FW version */ +#define DEVLINK_INFO_VERSION_GENERIC_FW_ROCE "fw.roce" struct devlink_region; struct devlink_info_req; -- cgit v1.2.3-71-gd317 From 2924e0699963b839f88f8c4e855929ea49185870 Mon Sep 17 00:00:00 2001 From: Michal Kalderon Date: Mon, 27 Jan 2020 15:26:07 +0200 Subject: qed: FW 8.42.2.0 Internal ram offsets modifications IRO stands for internal RAM offsets. Updating the FW binary produces different iro offsets. This file contains the different values, and a new representation of the values. Update the FW version Signed-off-by: Ariel Elior Signed-off-by: Michal Kalderon Signed-off-by: David S. Miller --- drivers/net/ethernet/qlogic/qed/qed.h | 4 +- drivers/net/ethernet/qlogic/qed/qed_hsi.h | 419 ++++++++++++++++-------------- include/linux/qed/common_hsi.h | 4 +- 3 files changed, 234 insertions(+), 193 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h index 89fe091c958d..8ec46bf409c2 100644 --- a/drivers/net/ethernet/qlogic/qed/qed.h +++ b/drivers/net/ethernet/qlogic/qed/qed.h @@ -796,8 +796,8 @@ struct qed_dev { u8 cache_shift; /* Init */ - const struct iro *iro_arr; -#define IRO (p_hwfn->cdev->iro_arr) + const u32 *iro_arr; +#define IRO ((const struct iro *)p_hwfn->cdev->iro_arr) /* HW functions */ u8 num_hwfns; diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h index cf3ceb62e397..aaff7117291c 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h +++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h @@ -4282,325 +4282,366 @@ void qed_set_rdma_error_level(struct qed_hwfn *p_hwfn, (IRO[7].base + ((queue_zone_id) * IRO[7].m1)) #define USTORM_COMMON_QUEUE_CONS_SIZE (IRO[7].size) +/* Xstorm common PQ info */ +#define XSTORM_PQ_INFO_OFFSET(pq_id) \ + (IRO[8].base + ((pq_id) * IRO[8].m1)) +#define XSTORM_PQ_INFO_SIZE (IRO[8].size) + /* Xstorm Integration Test Data */ -#define XSTORM_INTEG_TEST_DATA_OFFSET (IRO[8].base) -#define XSTORM_INTEG_TEST_DATA_SIZE (IRO[8].size) +#define XSTORM_INTEG_TEST_DATA_OFFSET (IRO[9].base) +#define XSTORM_INTEG_TEST_DATA_SIZE (IRO[9].size) /* Ystorm Integration Test Data */ -#define YSTORM_INTEG_TEST_DATA_OFFSET (IRO[9].base) -#define YSTORM_INTEG_TEST_DATA_SIZE (IRO[9].size) +#define YSTORM_INTEG_TEST_DATA_OFFSET (IRO[10].base) +#define YSTORM_INTEG_TEST_DATA_SIZE (IRO[10].size) /* Pstorm Integration Test Data */ -#define PSTORM_INTEG_TEST_DATA_OFFSET (IRO[10].base) -#define PSTORM_INTEG_TEST_DATA_SIZE (IRO[10].size) +#define PSTORM_INTEG_TEST_DATA_OFFSET (IRO[11].base) +#define PSTORM_INTEG_TEST_DATA_SIZE (IRO[11].size) /* Tstorm Integration Test Data */ -#define TSTORM_INTEG_TEST_DATA_OFFSET (IRO[11].base) -#define TSTORM_INTEG_TEST_DATA_SIZE (IRO[11].size) +#define TSTORM_INTEG_TEST_DATA_OFFSET (IRO[12].base) +#define TSTORM_INTEG_TEST_DATA_SIZE (IRO[12].size) /* Mstorm Integration Test Data */ -#define MSTORM_INTEG_TEST_DATA_OFFSET (IRO[12].base) -#define MSTORM_INTEG_TEST_DATA_SIZE (IRO[12].size) +#define MSTORM_INTEG_TEST_DATA_OFFSET (IRO[13].base) +#define MSTORM_INTEG_TEST_DATA_SIZE (IRO[13].size) /* Ustorm Integration Test Data */ -#define USTORM_INTEG_TEST_DATA_OFFSET 
(IRO[13].base) -#define USTORM_INTEG_TEST_DATA_SIZE (IRO[13].size) +#define USTORM_INTEG_TEST_DATA_OFFSET (IRO[14].base) +#define USTORM_INTEG_TEST_DATA_SIZE (IRO[14].size) + +/* Xstorm overlay buffer host address */ +#define XSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[15].base) +#define XSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[15].size) + +/* Ystorm overlay buffer host address */ +#define YSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[16].base) +#define YSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[16].size) + +/* Pstorm overlay buffer host address */ +#define PSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[17].base) +#define PSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[17].size) + +/* Tstorm overlay buffer host address */ +#define TSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[18].base) +#define TSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[18].size) + +/* Mstorm overlay buffer host address */ +#define MSTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[19].base) +#define MSTORM_OVERLAY_BUF_ADDR_SIZE (IRO[19].size) + +/* Ustorm overlay buffer host address */ +#define USTORM_OVERLAY_BUF_ADDR_OFFSET (IRO[20].base) +#define USTORM_OVERLAY_BUF_ADDR_SIZE (IRO[20].size) /* Tstorm producers */ #define TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) \ - (IRO[14].base + ((core_rx_queue_id) * IRO[14].m1)) -#define TSTORM_LL2_RX_PRODS_SIZE (IRO[14].size) + (IRO[21].base + ((core_rx_queue_id) * IRO[21].m1)) +#define TSTORM_LL2_RX_PRODS_SIZE (IRO[21].size) /* Tstorm LightL2 queue statistics */ #define CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) \ - (IRO[15].base + ((core_rx_queue_id) * IRO[15].m1)) -#define CORE_LL2_TSTORM_PER_QUEUE_STAT_SIZE (IRO[15].size) + (IRO[22].base + ((core_rx_queue_id) * IRO[22].m1)) +#define CORE_LL2_TSTORM_PER_QUEUE_STAT_SIZE (IRO[22].size) /* Ustorm LiteL2 queue statistics */ #define CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) \ - (IRO[16].base + ((core_rx_queue_id) * IRO[16].m1)) -#define CORE_LL2_USTORM_PER_QUEUE_STAT_SIZE (IRO[16].size) + (IRO[23].base + ((core_rx_queue_id) * IRO[23].m1)) +#define CORE_LL2_USTORM_PER_QUEUE_STAT_SIZE (IRO[23].size) /* Pstorm LiteL2 queue statistics */ #define CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) \ - (IRO[17].base + ((core_tx_stats_id) * IRO[17].m1)) -#define CORE_LL2_PSTORM_PER_QUEUE_STAT_SIZE (IRO[17].size) + (IRO[24].base + ((core_tx_stats_id) * IRO[24].m1)) +#define CORE_LL2_PSTORM_PER_QUEUE_STAT_SIZE (IRO[24].size) /* Mstorm queue statistics */ #define MSTORM_QUEUE_STAT_OFFSET(stat_counter_id) \ - (IRO[18].base + ((stat_counter_id) * IRO[18].m1)) -#define MSTORM_QUEUE_STAT_SIZE (IRO[18].size) + (IRO[25].base + ((stat_counter_id) * IRO[25].m1)) +#define MSTORM_QUEUE_STAT_SIZE (IRO[25].size) -/* Mstorm ETH PF queues producers */ -#define MSTORM_ETH_PF_PRODS_OFFSET(queue_id) \ - (IRO[19].base + ((queue_id) * IRO[19].m1)) -#define MSTORM_ETH_PF_PRODS_SIZE (IRO[19].size) +/* TPA agregation timeout in us resolution (on ASIC) */ +#define MSTORM_TPA_TIMEOUT_US_OFFSET (IRO[26].base) +#define MSTORM_TPA_TIMEOUT_US_SIZE (IRO[26].size) /* Mstorm ETH VF queues producers offset in RAM. Used in default VF zone size - * mode. 
+ * mode */ #define MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id) \ - (IRO[20].base + ((vf_id) * IRO[20].m1) + ((vf_queue_id) * IRO[20].m2)) -#define MSTORM_ETH_VF_PRODS_SIZE (IRO[20].size) + (IRO[27].base + ((vf_id) * IRO[27].m1) + ((vf_queue_id) * IRO[27].m2)) +#define MSTORM_ETH_VF_PRODS_SIZE (IRO[27].size) -/* TPA agregation timeout in us resolution (on ASIC) */ -#define MSTORM_TPA_TIMEOUT_US_OFFSET (IRO[21].base) -#define MSTORM_TPA_TIMEOUT_US_SIZE (IRO[21].size) +/* Mstorm ETH PF queues producers */ +#define MSTORM_ETH_PF_PRODS_OFFSET(queue_id) \ + (IRO[28].base + ((queue_id) * IRO[28].m1)) +#define MSTORM_ETH_PF_PRODS_SIZE (IRO[28].size) /* Mstorm pf statistics */ #define MSTORM_ETH_PF_STAT_OFFSET(pf_id) \ - (IRO[22].base + ((pf_id) * IRO[22].m1)) -#define MSTORM_ETH_PF_STAT_SIZE (IRO[22].size) + (IRO[29].base + ((pf_id) * IRO[29].m1)) +#define MSTORM_ETH_PF_STAT_SIZE (IRO[29].size) /* Ustorm queue statistics */ #define USTORM_QUEUE_STAT_OFFSET(stat_counter_id) \ - (IRO[23].base + ((stat_counter_id) * IRO[23].m1)) -#define USTORM_QUEUE_STAT_SIZE (IRO[23].size) + (IRO[30].base + ((stat_counter_id) * IRO[30].m1)) +#define USTORM_QUEUE_STAT_SIZE (IRO[30].size) /* Ustorm pf statistics */ -#define USTORM_ETH_PF_STAT_OFFSET(pf_id)\ - (IRO[24].base + ((pf_id) * IRO[24].m1)) -#define USTORM_ETH_PF_STAT_SIZE (IRO[24].size) +#define USTORM_ETH_PF_STAT_OFFSET(pf_id) \ + (IRO[31].base + ((pf_id) * IRO[31].m1)) +#define USTORM_ETH_PF_STAT_SIZE (IRO[31].size) /* Pstorm queue statistics */ -#define PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) \ - (IRO[25].base + ((stat_counter_id) * IRO[25].m1)) -#define PSTORM_QUEUE_STAT_SIZE (IRO[25].size) +#define PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) \ + (IRO[32].base + ((stat_counter_id) * IRO[32].m1)) +#define PSTORM_QUEUE_STAT_SIZE (IRO[32].size) /* Pstorm pf statistics */ #define PSTORM_ETH_PF_STAT_OFFSET(pf_id) \ - (IRO[26].base + ((pf_id) * IRO[26].m1)) -#define PSTORM_ETH_PF_STAT_SIZE (IRO[26].size) + (IRO[33].base + ((pf_id) * IRO[33].m1)) +#define PSTORM_ETH_PF_STAT_SIZE (IRO[33].size) /* Control frame's EthType configuration for TX control frame security */ -#define PSTORM_CTL_FRAME_ETHTYPE_OFFSET(eth_type_id) \ - (IRO[27].base + ((eth_type_id) * IRO[27].m1)) -#define PSTORM_CTL_FRAME_ETHTYPE_SIZE (IRO[27].size) +#define PSTORM_CTL_FRAME_ETHTYPE_OFFSET(eth_type_id) \ + (IRO[34].base + ((eth_type_id) * IRO[34].m1)) +#define PSTORM_CTL_FRAME_ETHTYPE_SIZE (IRO[34].size) /* Tstorm last parser message */ -#define TSTORM_ETH_PRS_INPUT_OFFSET (IRO[28].base) -#define TSTORM_ETH_PRS_INPUT_SIZE (IRO[28].size) +#define TSTORM_ETH_PRS_INPUT_OFFSET (IRO[35].base) +#define TSTORM_ETH_PRS_INPUT_SIZE (IRO[35].size) /* Tstorm Eth limit Rx rate */ -#define ETH_RX_RATE_LIMIT_OFFSET(pf_id) \ - (IRO[29].base + ((pf_id) * IRO[29].m1)) -#define ETH_RX_RATE_LIMIT_SIZE (IRO[29].size) +#define ETH_RX_RATE_LIMIT_OFFSET(pf_id) \ + (IRO[36].base + ((pf_id) * IRO[36].m1)) +#define ETH_RX_RATE_LIMIT_SIZE (IRO[36].size) /* RSS indirection table entry update command per PF offset in TSTORM PF BAR0. - * Use eth_tstorm_rss_update_data for update. 
+ * Use eth_tstorm_rss_update_data for update */ #define TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id) \ - (IRO[30].base + ((pf_id) * IRO[30].m1)) -#define TSTORM_ETH_RSS_UPDATE_SIZE (IRO[30].size) + (IRO[37].base + ((pf_id) * IRO[37].m1)) +#define TSTORM_ETH_RSS_UPDATE_SIZE (IRO[37].size) /* Xstorm queue zone */ #define XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) \ - (IRO[31].base + ((queue_id) * IRO[31].m1)) -#define XSTORM_ETH_QUEUE_ZONE_SIZE (IRO[31].size) + (IRO[38].base + ((queue_id) * IRO[38].m1)) +#define XSTORM_ETH_QUEUE_ZONE_SIZE (IRO[38].size) /* Ystorm cqe producer */ #define YSTORM_TOE_CQ_PROD_OFFSET(rss_id) \ - (IRO[32].base + ((rss_id) * IRO[32].m1)) -#define YSTORM_TOE_CQ_PROD_SIZE (IRO[32].size) + (IRO[39].base + ((rss_id) * IRO[39].m1)) +#define YSTORM_TOE_CQ_PROD_SIZE (IRO[39].size) /* Ustorm cqe producer */ #define USTORM_TOE_CQ_PROD_OFFSET(rss_id) \ - (IRO[33].base + ((rss_id) * IRO[33].m1)) -#define USTORM_TOE_CQ_PROD_SIZE (IRO[33].size) + (IRO[40].base + ((rss_id) * IRO[40].m1)) +#define USTORM_TOE_CQ_PROD_SIZE (IRO[40].size) /* Ustorm grq producer */ #define USTORM_TOE_GRQ_PROD_OFFSET(pf_id) \ - (IRO[34].base + ((pf_id) * IRO[34].m1)) -#define USTORM_TOE_GRQ_PROD_SIZE (IRO[34].size) + (IRO[41].base + ((pf_id) * IRO[41].m1)) +#define USTORM_TOE_GRQ_PROD_SIZE (IRO[41].size) /* Tstorm cmdq-cons of given command queue-id */ #define TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id) \ - (IRO[35].base + ((cmdq_queue_id) * IRO[35].m1)) -#define TSTORM_SCSI_CMDQ_CONS_SIZE (IRO[35].size) + (IRO[42].base + ((cmdq_queue_id) * IRO[42].m1)) +#define TSTORM_SCSI_CMDQ_CONS_SIZE (IRO[42].size) /* Tstorm (reflects M-Storm) bdq-external-producer of given function ID, - * BDqueue-id. + * BDqueue-id */ -#define TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id, bdq_id) \ - (IRO[36].base + ((func_id) * IRO[36].m1) + ((bdq_id) * IRO[36].m2)) -#define TSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[36].size) +#define TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id, bdq_id) \ + (IRO[43].base + ((storage_func_id) * IRO[43].m1) + \ + ((bdq_id) * IRO[43].m2)) +#define TSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[43].size) /* Mstorm bdq-external-producer of given BDQ resource ID, BDqueue-id */ -#define MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id, bdq_id) \ - (IRO[37].base + ((func_id) * IRO[37].m1) + ((bdq_id) * IRO[37].m2)) -#define MSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[37].size) +#define MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(storage_func_id, bdq_id) \ + (IRO[44].base + ((storage_func_id) * IRO[44].m1) + \ + ((bdq_id) * IRO[44].m2)) +#define MSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[44].size) /* Tstorm iSCSI RX stats */ -#define TSTORM_ISCSI_RX_STATS_OFFSET(pf_id) \ - (IRO[38].base + ((pf_id) * IRO[38].m1)) -#define TSTORM_ISCSI_RX_STATS_SIZE (IRO[38].size) +#define TSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) \ + (IRO[45].base + ((storage_func_id) * IRO[45].m1)) +#define TSTORM_ISCSI_RX_STATS_SIZE (IRO[45].size) /* Mstorm iSCSI RX stats */ -#define MSTORM_ISCSI_RX_STATS_OFFSET(pf_id) \ - (IRO[39].base + ((pf_id) * IRO[39].m1)) -#define MSTORM_ISCSI_RX_STATS_SIZE (IRO[39].size) +#define MSTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) \ + (IRO[46].base + ((storage_func_id) * IRO[46].m1)) +#define MSTORM_ISCSI_RX_STATS_SIZE (IRO[46].size) /* Ustorm iSCSI RX stats */ -#define USTORM_ISCSI_RX_STATS_OFFSET(pf_id) \ - (IRO[40].base + ((pf_id) * IRO[40].m1)) -#define USTORM_ISCSI_RX_STATS_SIZE (IRO[40].size) +#define USTORM_ISCSI_RX_STATS_OFFSET(storage_func_id) \ + (IRO[47].base + ((storage_func_id) * IRO[47].m1)) +#define USTORM_ISCSI_RX_STATS_SIZE (IRO[47].size) /* 
Xstorm iSCSI TX stats */ -#define XSTORM_ISCSI_TX_STATS_OFFSET(pf_id) \ - (IRO[41].base + ((pf_id) * IRO[41].m1)) -#define XSTORM_ISCSI_TX_STATS_SIZE (IRO[41].size) +#define XSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) \ + (IRO[48].base + ((storage_func_id) * IRO[48].m1)) +#define XSTORM_ISCSI_TX_STATS_SIZE (IRO[48].size) /* Ystorm iSCSI TX stats */ -#define YSTORM_ISCSI_TX_STATS_OFFSET(pf_id) \ - (IRO[42].base + ((pf_id) * IRO[42].m1)) -#define YSTORM_ISCSI_TX_STATS_SIZE (IRO[42].size) +#define YSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) \ + (IRO[49].base + ((storage_func_id) * IRO[49].m1)) +#define YSTORM_ISCSI_TX_STATS_SIZE (IRO[49].size) /* Pstorm iSCSI TX stats */ -#define PSTORM_ISCSI_TX_STATS_OFFSET(pf_id) \ - (IRO[43].base + ((pf_id) * IRO[43].m1)) -#define PSTORM_ISCSI_TX_STATS_SIZE (IRO[43].size) +#define PSTORM_ISCSI_TX_STATS_OFFSET(storage_func_id) \ + (IRO[50].base + ((storage_func_id) * IRO[50].m1)) +#define PSTORM_ISCSI_TX_STATS_SIZE (IRO[50].size) /* Tstorm FCoE RX stats */ #define TSTORM_FCOE_RX_STATS_OFFSET(pf_id) \ - (IRO[44].base + ((pf_id) * IRO[44].m1)) -#define TSTORM_FCOE_RX_STATS_SIZE (IRO[44].size) + (IRO[51].base + ((pf_id) * IRO[51].m1)) +#define TSTORM_FCOE_RX_STATS_SIZE (IRO[51].size) /* Pstorm FCoE TX stats */ #define PSTORM_FCOE_TX_STATS_OFFSET(pf_id) \ - (IRO[45].base + ((pf_id) * IRO[45].m1)) -#define PSTORM_FCOE_TX_STATS_SIZE (IRO[45].size) + (IRO[52].base + ((pf_id) * IRO[52].m1)) +#define PSTORM_FCOE_TX_STATS_SIZE (IRO[52].size) /* Pstorm RDMA queue statistics */ #define PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) \ - (IRO[46].base + ((rdma_stat_counter_id) * IRO[46].m1)) -#define PSTORM_RDMA_QUEUE_STAT_SIZE (IRO[46].size) + (IRO[53].base + ((rdma_stat_counter_id) * IRO[53].m1)) +#define PSTORM_RDMA_QUEUE_STAT_SIZE (IRO[53].size) /* Tstorm RDMA queue statistics */ #define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) \ - (IRO[47].base + ((rdma_stat_counter_id) * IRO[47].m1)) -#define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[47].size) + (IRO[54].base + ((rdma_stat_counter_id) * IRO[54].m1)) +#define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[54].size) /* Xstorm error level for assert */ #define XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) \ - (IRO[48].base + ((pf_id) * IRO[48].m1)) -#define XSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[48].size) + (IRO[55].base + ((pf_id) * IRO[55].m1)) +#define XSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[55].size) /* Ystorm error level for assert */ #define YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) \ - (IRO[49].base + ((pf_id) * IRO[49].m1)) -#define YSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[49].size) + (IRO[56].base + ((pf_id) * IRO[56].m1)) +#define YSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[56].size) /* Pstorm error level for assert */ #define PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) \ - (IRO[50].base + ((pf_id) * IRO[50].m1)) -#define PSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[50].size) + (IRO[57].base + ((pf_id) * IRO[57].m1)) +#define PSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[57].size) /* Tstorm error level for assert */ #define TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) \ - (IRO[51].base + ((pf_id) * IRO[51].m1)) -#define TSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[51].size) + (IRO[58].base + ((pf_id) * IRO[58].m1)) +#define TSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[58].size) /* Mstorm error level for assert */ #define MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) \ - (IRO[52].base + ((pf_id) * IRO[52].m1)) -#define MSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[52].size) + (IRO[59].base + ((pf_id) * IRO[59].m1)) +#define MSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[59].size) /* Ustorm error level for assert */ #define 
USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) \ - (IRO[53].base + ((pf_id) * IRO[53].m1)) -#define USTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[53].size) + (IRO[60].base + ((pf_id) * IRO[60].m1)) +#define USTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[60].size) /* Xstorm iWARP rxmit stats */ #define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) \ - (IRO[54].base + ((pf_id) * IRO[54].m1)) -#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[54].size) + (IRO[61].base + ((pf_id) * IRO[61].m1)) +#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[61].size) /* Tstorm RoCE Event Statistics */ -#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) \ - (IRO[55].base + ((roce_pf_id) * IRO[55].m1)) -#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[55].size) +#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) \ + (IRO[62].base + ((roce_pf_id) * IRO[62].m1)) +#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[62].size) /* DCQCN Received Statistics */ -#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id) \ - (IRO[56].base + ((roce_pf_id) * IRO[56].m1)) -#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_SIZE (IRO[56].size) +#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id)\ + (IRO[63].base + ((roce_pf_id) * IRO[63].m1)) +#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_SIZE (IRO[63].size) /* RoCE Error Statistics */ -#define YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id) \ - (IRO[57].base + ((roce_pf_id) * IRO[57].m1)) -#define YSTORM_ROCE_ERROR_STATS_SIZE (IRO[57].size) +#define YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id) \ + (IRO[64].base + ((roce_pf_id) * IRO[64].m1)) +#define YSTORM_ROCE_ERROR_STATS_SIZE (IRO[64].size) /* DCQCN Sent Statistics */ -#define PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id) \ - (IRO[58].base + ((roce_pf_id) * IRO[58].m1)) -#define PSTORM_ROCE_DCQCN_SENT_STATS_SIZE (IRO[58].size) +#define PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id) \ + (IRO[65].base + ((roce_pf_id) * IRO[65].m1)) +#define PSTORM_ROCE_DCQCN_SENT_STATS_SIZE (IRO[65].size) /* RoCE CQEs Statistics */ -#define USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id) \ - (IRO[59].base + ((roce_pf_id) * IRO[59].m1)) -#define USTORM_ROCE_CQE_STATS_SIZE (IRO[59].size) - -static const struct iro iro_arr[60] = { - {0x0, 0x0, 0x0, 0x0, 0x8}, - {0x4cb8, 0x88, 0x0, 0x0, 0x88}, - {0x6530, 0x20, 0x0, 0x0, 0x20}, - {0xb00, 0x8, 0x0, 0x0, 0x4}, - {0xa80, 0x8, 0x0, 0x0, 0x4}, - {0x0, 0x8, 0x0, 0x0, 0x2}, - {0x80, 0x8, 0x0, 0x0, 0x4}, - {0x84, 0x8, 0x0, 0x0, 0x2}, - {0x4c48, 0x0, 0x0, 0x0, 0x78}, - {0x3e38, 0x0, 0x0, 0x0, 0x78}, - {0x3ef8, 0x0, 0x0, 0x0, 0x78}, - {0x4c40, 0x0, 0x0, 0x0, 0x78}, - {0x4998, 0x0, 0x0, 0x0, 0x78}, - {0x7f50, 0x0, 0x0, 0x0, 0x78}, - {0xa28, 0x8, 0x0, 0x0, 0x8}, - {0x6210, 0x10, 0x0, 0x0, 0x10}, - {0xb820, 0x30, 0x0, 0x0, 0x30}, - {0xa990, 0x30, 0x0, 0x0, 0x30}, - {0x4b68, 0x80, 0x0, 0x0, 0x40}, - {0x1f8, 0x4, 0x0, 0x0, 0x4}, - {0x53a8, 0x80, 0x4, 0x0, 0x4}, - {0xc7d0, 0x0, 0x0, 0x0, 0x4}, - {0x4ba8, 0x80, 0x0, 0x0, 0x20}, - {0x8158, 0x40, 0x0, 0x0, 0x30}, - {0xe770, 0x60, 0x0, 0x0, 0x60}, - {0x4090, 0x80, 0x0, 0x0, 0x38}, - {0xfea8, 0x78, 0x0, 0x0, 0x78}, - {0x1f8, 0x4, 0x0, 0x0, 0x4}, - {0xaf20, 0x0, 0x0, 0x0, 0xf0}, - {0xb010, 0x8, 0x0, 0x0, 0x8}, - {0xc00, 0x8, 0x0, 0x0, 0x8}, - {0x1f8, 0x8, 0x0, 0x0, 0x8}, - {0xac0, 0x8, 0x0, 0x0, 0x8}, - {0x2578, 0x8, 0x0, 0x0, 0x8}, - {0x24f8, 0x8, 0x0, 0x0, 0x8}, - {0x0, 0x8, 0x0, 0x0, 0x8}, - {0x400, 0x18, 0x8, 0x0, 0x8}, - {0xb78, 0x18, 0x8, 0x0, 0x2}, - {0xd898, 0x50, 0x0, 0x0, 0x3c}, - {0x12908, 0x18, 0x0, 0x0, 0x10}, - {0x11aa8, 0x40, 0x0, 0x0, 0x18}, - {0xa588, 0x50, 0x0, 0x0, 0x20}, - {0x8f00, 0x40, 0x0, 0x0, 0x28}, - {0x10e30, 0x18, 0x0, 
0x0, 0x10}, - {0xde48, 0x48, 0x0, 0x0, 0x38}, - {0x11298, 0x20, 0x0, 0x0, 0x20}, - {0x40c8, 0x80, 0x0, 0x0, 0x10}, - {0x5048, 0x10, 0x0, 0x0, 0x10}, - {0xc748, 0x8, 0x0, 0x0, 0x1}, - {0xa928, 0x8, 0x0, 0x0, 0x1}, - {0x11a30, 0x8, 0x0, 0x0, 0x1}, - {0xf030, 0x8, 0x0, 0x0, 0x1}, - {0x13028, 0x8, 0x0, 0x0, 0x1}, - {0x12c58, 0x8, 0x0, 0x0, 0x1}, - {0xc9b8, 0x30, 0x0, 0x0, 0x10}, - {0xed90, 0x28, 0x0, 0x0, 0x28}, - {0xad20, 0x18, 0x0, 0x0, 0x18}, - {0xaea0, 0x8, 0x0, 0x0, 0x8}, - {0x13c38, 0x8, 0x0, 0x0, 0x8}, - {0x13c50, 0x18, 0x0, 0x0, 0x18}, +#define USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id) \ + (IRO[66].base + ((roce_pf_id) * IRO[66].m1)) +#define USTORM_ROCE_CQE_STATS_SIZE (IRO[66].size) + +/* IRO Array */ +static const u32 iro_arr[] = { + 0x00000000, 0x00000000, 0x00080000, + 0x00003288, 0x00000088, 0x00880000, + 0x000058e8, 0x00000020, 0x00200000, + 0x00000b00, 0x00000008, 0x00040000, + 0x00000a80, 0x00000008, 0x00040000, + 0x00000000, 0x00000008, 0x00020000, + 0x00000080, 0x00000008, 0x00040000, + 0x00000084, 0x00000008, 0x00020000, + 0x00005718, 0x00000004, 0x00040000, + 0x00004dd0, 0x00000000, 0x00780000, + 0x00003e40, 0x00000000, 0x00780000, + 0x00004480, 0x00000000, 0x00780000, + 0x00003210, 0x00000000, 0x00780000, + 0x00003b50, 0x00000000, 0x00780000, + 0x00007f58, 0x00000000, 0x00780000, + 0x00005f58, 0x00000000, 0x00080000, + 0x00007100, 0x00000000, 0x00080000, + 0x0000aea0, 0x00000000, 0x00080000, + 0x00004398, 0x00000000, 0x00080000, + 0x0000a5a0, 0x00000000, 0x00080000, + 0x0000bde8, 0x00000000, 0x00080000, + 0x00000020, 0x00000004, 0x00040000, + 0x000056c8, 0x00000010, 0x00100000, + 0x0000c210, 0x00000030, 0x00300000, + 0x0000b088, 0x00000038, 0x00380000, + 0x00003d20, 0x00000080, 0x00400000, + 0x0000bf60, 0x00000000, 0x00040000, + 0x00004560, 0x00040080, 0x00040000, + 0x000001f8, 0x00000004, 0x00040000, + 0x00003d60, 0x00000080, 0x00200000, + 0x00008960, 0x00000040, 0x00300000, + 0x0000e840, 0x00000060, 0x00600000, + 0x00004618, 0x00000080, 0x00380000, + 0x00010738, 0x000000c0, 0x00c00000, + 0x000001f8, 0x00000002, 0x00020000, + 0x0000a2a0, 0x00000000, 0x01080000, + 0x0000a3a8, 0x00000008, 0x00080000, + 0x000001c0, 0x00000008, 0x00080000, + 0x000001f8, 0x00000008, 0x00080000, + 0x00000ac0, 0x00000008, 0x00080000, + 0x00002578, 0x00000008, 0x00080000, + 0x000024f8, 0x00000008, 0x00080000, + 0x00000280, 0x00000008, 0x00080000, + 0x00000680, 0x00080018, 0x00080000, + 0x00000b78, 0x00080018, 0x00020000, + 0x0000c640, 0x00000050, 0x003c0000, + 0x00012038, 0x00000018, 0x00100000, + 0x00011b00, 0x00000040, 0x00180000, + 0x000095d0, 0x00000050, 0x00200000, + 0x00008b10, 0x00000040, 0x00280000, + 0x00011640, 0x00000018, 0x00100000, + 0x0000c828, 0x00000048, 0x00380000, + 0x00011710, 0x00000020, 0x00200000, + 0x00004650, 0x00000080, 0x00100000, + 0x00003618, 0x00000010, 0x00100000, + 0x0000a968, 0x00000008, 0x00010000, + 0x000097a0, 0x00000008, 0x00010000, + 0x00011990, 0x00000008, 0x00010000, + 0x0000f018, 0x00000008, 0x00010000, + 0x00012628, 0x00000008, 0x00010000, + 0x00011da8, 0x00000008, 0x00010000, + 0x0000aa78, 0x00000030, 0x00100000, + 0x0000d768, 0x00000028, 0x00280000, + 0x00009a58, 0x00000018, 0x00180000, + 0x00009bd8, 0x00000008, 0x00080000, + 0x00013a18, 0x00000008, 0x00080000, + 0x000126e8, 0x00000018, 0x00180000, + 0x0000e608, 0x00500288, 0x00100000, + 0x00012970, 0x00000138, 0x00280000, }; /* Runtime array offsets */ diff --git a/include/linux/qed/common_hsi.h b/include/linux/qed/common_hsi.h index 03f59a28fefd..3f437e826a4c 100644 --- a/include/linux/qed/common_hsi.h 
+++ b/include/linux/qed/common_hsi.h @@ -109,8 +109,8 @@ #define MAX_NUM_LL2_TX_STATS_COUNTERS 48 #define FW_MAJOR_VERSION 8 -#define FW_MINOR_VERSION 37 -#define FW_REVISION_VERSION 7 +#define FW_MINOR_VERSION 42 +#define FW_REVISION_VERSION 2 #define FW_ENGINEERING_VERSION 0 /***********************/ -- cgit v1.2.3-71-gd317 From 997af5df230e3288ec1f5b332955f9be643e450b Mon Sep 17 00:00:00 2001 From: Michal Kalderon Date: Mon, 27 Jan 2020 15:26:12 +0200 Subject: qed: FW 8.42.2.0 Additional ll2 type LL2 queues were a limited resource due to FW constraints. This FW introduced a new resource, a context-based ll2 queue (memory on the host). The additional ll2 queues are required for RDMA SRIOV. The code refers to the previous ll2 queues as ram-based or legacy, and the new queues as ctx-based. This change decreased the number of "legacy" ram-based queues; therefore, the first ll2 queue used for iWARP was converted to a ctx-based ll2 queue. This feature also exposed a bug in the DIRECT_REG_WR64 macro implementation, which had no effect in other use cases. Signed-off-by: Ariel Elior Signed-off-by: Michal Kalderon Signed-off-by: David S. Miller --- drivers/net/ethernet/qlogic/qed/qed.h | 3 +- drivers/net/ethernet/qlogic/qed/qed_dev.c | 20 ++-- drivers/net/ethernet/qlogic/qed/qed_hsi.h | 42 +++++++- drivers/net/ethernet/qlogic/qed/qed_iscsi.c | 5 +- drivers/net/ethernet/qlogic/qed/qed_iwarp.c | 8 +- drivers/net/ethernet/qlogic/qed/qed_ll2.c | 149 ++++++++++++++++++++++++---- drivers/net/ethernet/qlogic/qed/qed_ll2.h | 14 +++ drivers/net/ethernet/qlogic/qed/qed_mcp.c | 5 +- include/linux/qed/common_hsi.h | 15 ++- include/linux/qed/qed_if.h | 4 +- include/linux/qed/qed_ll2_if.h | 7 ++ 11 files changed, 235 insertions(+), 37 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h index 8ec46bf409c2..cfa45fb9a5a4 100644 --- a/drivers/net/ethernet/qlogic/qed/qed.h +++ b/drivers/net/ethernet/qlogic/qed/qed.h @@ -253,7 +253,8 @@ enum qed_resources { QED_VLAN, QED_RDMA_CNQ_RAM, QED_ILT, - QED_LL2_QUEUE, + QED_LL2_RAM_QUEUE, + QED_LL2_CTX_QUEUE, QED_CMDQS_CQS, QED_RDMA_STATS_QUEUE, QED_BDQ, diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c index 479d98e6187a..898c1f8d1530 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_dev.c +++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c @@ -3565,8 +3565,10 @@ const char *qed_hw_get_resc_name(enum qed_resources res_id) return "RDMA_CNQ_RAM"; case QED_ILT: return "ILT"; - case QED_LL2_QUEUE: - return "LL2_QUEUE"; + case QED_LL2_RAM_QUEUE: + return "LL2_RAM_QUEUE"; + case QED_LL2_CTX_QUEUE: + return "LL2_CTX_QUEUE"; case QED_CMDQS_CQS: return "CMDQS_CQS"; case QED_RDMA_STATS_QUEUE: @@ -3615,8 +3617,11 @@ qed_hw_set_soft_resc_size(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) for (res_id = 0; res_id < QED_MAX_RESC; res_id++) { switch (res_id) { - case QED_LL2_QUEUE: - resc_max_val = MAX_NUM_LL2_RX_QUEUES; + case QED_LL2_RAM_QUEUE: + resc_max_val = MAX_NUM_LL2_RX_RAM_QUEUES; + break; + case QED_LL2_CTX_QUEUE: + resc_max_val = MAX_NUM_LL2_RX_CTX_QUEUES; break; case QED_RDMA_CNQ_RAM: /* No need for a case for QED_CMDQS_CQS since @@ -3691,8 +3696,11 @@ int qed_hw_get_dflt_resc(struct qed_hwfn *p_hwfn, *p_resc_num = (b_ah ? 
PXP_NUM_ILT_RECORDS_K2 : PXP_NUM_ILT_RECORDS_BB) / num_funcs; break; - case QED_LL2_QUEUE: - *p_resc_num = MAX_NUM_LL2_RX_QUEUES / num_funcs; + case QED_LL2_RAM_QUEUE: + *p_resc_num = MAX_NUM_LL2_RX_RAM_QUEUES / num_funcs; + break; + case QED_LL2_CTX_QUEUE: + *p_resc_num = MAX_NUM_LL2_RX_CTX_QUEUES / num_funcs; break; case QED_RDMA_CNQ_RAM: case QED_CMDQS_CQS: diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h index cced2ce365de..f0e5195cae01 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h +++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h @@ -98,6 +98,7 @@ enum core_event_opcode { CORE_EVENT_RX_QUEUE_STOP, CORE_EVENT_RX_QUEUE_FLUSH, CORE_EVENT_TX_QUEUE_UPDATE, + CORE_EVENT_QUEUE_STATS_QUERY, MAX_CORE_EVENT_OPCODE }; @@ -116,7 +117,7 @@ struct core_ll2_port_stats { struct regpair gsi_crcchksm_error; }; -/* Ethernet TX Per Queue Stats */ +/* LL2 TX Per Queue Stats */ struct core_ll2_pstorm_per_queue_stat { struct regpair sent_ucast_bytes; struct regpair sent_mcast_bytes; @@ -124,13 +125,13 @@ struct core_ll2_pstorm_per_queue_stat { struct regpair sent_ucast_pkts; struct regpair sent_mcast_pkts; struct regpair sent_bcast_pkts; + struct regpair error_drop_pkts; }; /* Light-L2 RX Producers in Tstorm RAM */ struct core_ll2_rx_prod { __le16 bd_prod; __le16 cqe_prod; - __le32 reserved; }; struct core_ll2_tstorm_per_queue_stat { @@ -147,6 +148,18 @@ struct core_ll2_ustorm_per_queue_stat { struct regpair rcv_bcast_pkts; }; +/* Structure for doorbell data, in PWM mode, for RX producers update. */ +struct core_pwm_prod_update_data { + __le16 icid; /* internal CID */ + u8 reserved0; + u8 params; +#define CORE_PWM_PROD_UPDATE_DATA_AGG_CMD_MASK 0x3 +#define CORE_PWM_PROD_UPDATE_DATA_AGG_CMD_SHIFT 0 +#define CORE_PWM_PROD_UPDATE_DATA_RESERVED1_MASK 0x3F /* Set 0 */ +#define CORE_PWM_PROD_UPDATE_DATA_RESERVED1_SHIFT 2 + struct core_ll2_rx_prod prod; /* Producers */ +}; + /* Core Ramrod Command IDs (light L2) */ enum core_ramrod_cmd_id { CORE_RAMROD_UNUSED, @@ -156,6 +169,7 @@ enum core_ramrod_cmd_id { CORE_RAMROD_TX_QUEUE_STOP, CORE_RAMROD_RX_QUEUE_FLUSH, CORE_RAMROD_TX_QUEUE_UPDATE, + CORE_RAMROD_QUEUE_STATS_QUERY, MAX_CORE_RAMROD_CMD_ID }; @@ -274,8 +288,11 @@ struct core_rx_start_ramrod_data { u8 mf_si_mcast_accept_all; struct core_rx_action_on_error action_on_error; u8 gsi_offload_flag; + u8 vport_id_valid; + u8 vport_id; + u8 zero_prod_flg; u8 wipe_inner_vlan_pri_en; - u8 reserved[5]; + u8 reserved[2]; }; /* Ramrod data for rx queue stop ramrod */ @@ -352,8 +369,11 @@ struct core_tx_start_ramrod_data { __le16 pbl_size; __le16 qm_pq_id; u8 gsi_offload_flag; + u8 ctx_stats_en; + u8 vport_id_valid; u8 vport_id; - u8 resrved[2]; + u8 enforce_security_flag; + u8 reserved[7]; }; /* Ramrod data for tx queue stop ramrod */ @@ -761,7 +781,7 @@ struct e4_tstorm_core_conn_ag_ctx { __le16 word1; __le16 word2; __le16 word3; - __le32 reg9; + __le32 ll2_rx_prod; __le32 reg10; }; @@ -844,6 +864,11 @@ struct ustorm_core_conn_st_ctx { __le32 reserved[4]; }; +/* The core storm context for the Tstorm */ +struct tstorm_core_conn_st_ctx { + __le32 reserved[4]; +}; + /* core connection context */ struct e4_core_conn_context { struct ystorm_core_conn_st_ctx ystorm_st_context; @@ -857,6 +882,8 @@ struct e4_core_conn_context { struct mstorm_core_conn_st_ctx mstorm_st_context; struct ustorm_core_conn_st_ctx ustorm_st_context; struct regpair ustorm_st_padding[2]; + struct tstorm_core_conn_st_ctx tstorm_st_context; + struct regpair tstorm_st_padding[2]; }; struct 
eth_mstorm_per_pf_stat { @@ -12483,6 +12510,11 @@ enum resource_id_enum { RESOURCE_LL2_QUEUE_E = 15, RESOURCE_RDMA_STATS_QUEUE_E = 16, RESOURCE_BDQ_E = 17, + RESOURCE_QCN_E = 18, + RESOURCE_LLH_FILTER_E = 19, + RESOURCE_VF_MAC_ADDR = 20, + RESOURCE_LL2_CQS_E = 21, + RESOURCE_VF_CNQS = 22, RESOURCE_MAX_NUM, RESOURCE_NUM_INVALID = 0xFFFFFFFF }; diff --git a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c index 5585c18053ec..8ee668d66b4b 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c +++ b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c @@ -213,8 +213,9 @@ qed_sp_iscsi_func_start(struct qed_hwfn *p_hwfn, p_init->num_sq_pages_in_ring = p_params->num_sq_pages_in_ring; p_init->num_r2tq_pages_in_ring = p_params->num_r2tq_pages_in_ring; p_init->num_uhq_pages_in_ring = p_params->num_uhq_pages_in_ring; - p_init->ll2_rx_queue_id = p_hwfn->hw_info.resc_start[QED_LL2_QUEUE] + - p_params->ll2_ooo_queue_id; + p_init->ll2_rx_queue_id = + p_hwfn->hw_info.resc_start[QED_LL2_RAM_QUEUE] + + p_params->ll2_ooo_queue_id; p_init->func_params.log_page_size = p_params->log_page_size; val = p_params->num_tasks; diff --git a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c index 65ec16a31658..d2fe61a5cf56 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c +++ b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c @@ -137,8 +137,8 @@ qed_iwarp_init_fw_ramrod(struct qed_hwfn *p_hwfn, struct iwarp_init_func_ramrod_data *p_ramrod) { p_ramrod->iwarp.ll2_ooo_q_index = - RESC_START(p_hwfn, QED_LL2_QUEUE) + - p_hwfn->p_rdma_info->iwarp.ll2_ooo_handle; + RESC_START(p_hwfn, QED_LL2_RAM_QUEUE) + + p_hwfn->p_rdma_info->iwarp.ll2_ooo_handle; p_ramrod->tcp.max_fin_rt = QED_IWARP_MAX_FIN_RT_DEFAULT; @@ -2651,6 +2651,8 @@ qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn, memset(&data, 0, sizeof(data)); data.input.conn_type = QED_LL2_TYPE_IWARP; + /* SYN will use ctx based queues */ + data.input.rx_conn_type = QED_LL2_RX_TYPE_CTX; data.input.mtu = params->max_mtu; data.input.rx_num_desc = QED_IWARP_LL2_SYN_RX_SIZE; data.input.tx_num_desc = QED_IWARP_LL2_SYN_TX_SIZE; @@ -2683,6 +2685,8 @@ qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn, /* Start OOO connection */ data.input.conn_type = QED_LL2_TYPE_OOO; + /* OOO/unaligned will use legacy ll2 queues (ram based) */ + data.input.rx_conn_type = QED_LL2_RX_TYPE_LEGACY; data.input.mtu = params->max_mtu; n_ooo_bufs = (QED_IWARP_MAX_OOO * rcv_wnd_size) / diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.c b/drivers/net/ethernet/qlogic/qed/qed_ll2.c index 19a1a58d60f8..037e5978787e 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_ll2.c +++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.c @@ -962,7 +962,7 @@ static int qed_sp_ll2_rx_queue_start(struct qed_hwfn *p_hwfn, return rc; p_ramrod = &p_ent->ramrod.core_rx_queue_start; - + memset(p_ramrod, 0, sizeof(*p_ramrod)); p_ramrod->sb_id = cpu_to_le16(qed_int_get_sp_sb_id(p_hwfn)); p_ramrod->sb_index = p_rx->rx_sb_index; p_ramrod->complete_event_flg = 1; @@ -996,6 +996,8 @@ static int qed_sp_ll2_rx_queue_start(struct qed_hwfn *p_hwfn, p_ramrod->action_on_error.error_type = action_on_error; p_ramrod->gsi_offload_flag = p_ll2_conn->input.gsi_enable; + p_ramrod->zero_prod_flg = 1; + return qed_spq_post(p_hwfn, p_ent, NULL); } @@ -1317,6 +1319,25 @@ qed_ll2_set_cbs(struct qed_ll2_info *p_ll2_info, const struct qed_ll2_cbs *cbs) return 0; } +static void _qed_ll2_calc_allowed_conns(struct qed_hwfn *p_hwfn, + struct qed_ll2_acquire_data *data, + u8 *start_idx, u8 *last_idx) +{ + 
/* LL2 queues handles will be split as follows: + * First will be the legacy queues, and then the ctx based. + */ + if (data->input.rx_conn_type == QED_LL2_RX_TYPE_LEGACY) { + *start_idx = QED_LL2_LEGACY_CONN_BASE_PF; + *last_idx = *start_idx + + QED_MAX_NUM_OF_LEGACY_LL2_CONNS_PF; + } else { + /* QED_LL2_RX_TYPE_CTX */ + *start_idx = QED_LL2_CTX_CONN_BASE_PF; + *last_idx = *start_idx + + QED_MAX_NUM_OF_CTX_LL2_CONNS_PF; + } +} + static enum core_error_handle qed_ll2_get_error_choice(enum qed_ll2_error_handle err) { @@ -1337,14 +1358,16 @@ int qed_ll2_acquire_connection(void *cxt, struct qed_ll2_acquire_data *data) struct qed_hwfn *p_hwfn = cxt; qed_int_comp_cb_t comp_rx_cb, comp_tx_cb; struct qed_ll2_info *p_ll2_info = NULL; - u8 i, *p_tx_max; + u8 i, first_idx, last_idx, *p_tx_max; int rc; if (!data->p_connection_handle || !p_hwfn->p_ll2_info) return -EINVAL; + _qed_ll2_calc_allowed_conns(p_hwfn, data, &first_idx, &last_idx); + /* Find a free connection to be used */ - for (i = 0; (i < QED_MAX_NUM_OF_LL2_CONNECTIONS); i++) { + for (i = first_idx; i < last_idx; i++) { mutex_lock(&p_hwfn->p_ll2_info[i].mutex); if (p_hwfn->p_ll2_info[i].b_active) { mutex_unlock(&p_hwfn->p_ll2_info[i].mutex); @@ -1448,6 +1471,7 @@ static int qed_ll2_establish_connection_rx(struct qed_hwfn *p_hwfn, enum qed_ll2_error_handle error_input; enum core_error_handle error_mode; u8 action_on_error = 0; + int rc; if (!QED_LL2_RX_REGISTERED(p_ll2_conn)) return 0; @@ -1461,7 +1485,18 @@ static int qed_ll2_establish_connection_rx(struct qed_hwfn *p_hwfn, error_mode = qed_ll2_get_error_choice(error_input); SET_FIELD(action_on_error, CORE_RX_ACTION_ON_ERROR_NO_BUFF, error_mode); - return qed_sp_ll2_rx_queue_start(p_hwfn, p_ll2_conn, action_on_error); + rc = qed_sp_ll2_rx_queue_start(p_hwfn, p_ll2_conn, action_on_error); + if (rc) + return rc; + + if (p_ll2_conn->rx_queue.ctx_based) { + rc = qed_db_recovery_add(p_hwfn->cdev, + p_ll2_conn->rx_queue.set_prod_addr, + &p_ll2_conn->rx_queue.db_data, + DB_REC_WIDTH_64B, DB_REC_KERNEL); + } + + return rc; } static void @@ -1475,13 +1510,41 @@ qed_ll2_establish_connection_ooo(struct qed_hwfn *p_hwfn, qed_ooo_submit_rx_buffers(p_hwfn, p_ll2_conn); } +static inline u8 qed_ll2_handle_to_queue_id(struct qed_hwfn *p_hwfn, + u8 handle, + u8 ll2_queue_type) +{ + u8 qid; + + if (ll2_queue_type == QED_LL2_RX_TYPE_LEGACY) + return p_hwfn->hw_info.resc_start[QED_LL2_RAM_QUEUE] + handle; + + /* QED_LL2_RX_TYPE_CTX + * FW distinguishes between the legacy queues (ram based) and the + * ctx based queues by the queue_id. + * The first MAX_NUM_LL2_RX_RAM_QUEUES queues are legacy + * and the queue ids above that are ctx base. + */ + qid = p_hwfn->hw_info.resc_start[QED_LL2_CTX_QUEUE] + + MAX_NUM_LL2_RX_RAM_QUEUES; + + /* See comment on the acquire connection for how the ll2 + * queues handles are divided. 
+ */ + qid += (handle - QED_MAX_NUM_OF_LEGACY_LL2_CONNS_PF); + + return qid; +} + int qed_ll2_establish_connection(void *cxt, u8 connection_handle) { - struct qed_hwfn *p_hwfn = cxt; - struct qed_ll2_info *p_ll2_conn; + struct e4_core_conn_context *p_cxt; struct qed_ll2_tx_packet *p_pkt; + struct qed_ll2_info *p_ll2_conn; + struct qed_hwfn *p_hwfn = cxt; struct qed_ll2_rx_queue *p_rx; struct qed_ll2_tx_queue *p_tx; + struct qed_cxt_info cxt_info; struct qed_ptt *p_ptt; int rc = -EINVAL; u32 i, capacity; @@ -1539,13 +1602,46 @@ int qed_ll2_establish_connection(void *cxt, u8 connection_handle) rc = qed_cxt_acquire_cid(p_hwfn, PROTOCOLID_CORE, &p_ll2_conn->cid); if (rc) goto out; + cxt_info.iid = p_ll2_conn->cid; + rc = qed_cxt_get_cid_info(p_hwfn, &cxt_info); + if (rc) { + DP_NOTICE(p_hwfn, "Cannot find context info for cid=%d\n", + p_ll2_conn->cid); + goto out; + } + + p_cxt = cxt_info.p_cxt; + + memset(p_cxt, 0, sizeof(*p_cxt)); - qid = p_hwfn->hw_info.resc_start[QED_LL2_QUEUE] + connection_handle; + qid = qed_ll2_handle_to_queue_id(p_hwfn, connection_handle, + p_ll2_conn->input.rx_conn_type); p_ll2_conn->queue_id = qid; p_ll2_conn->tx_stats_id = qid; - p_rx->set_prod_addr = (u8 __iomem *)p_hwfn->regview + - GTT_BAR0_MAP_REG_TSDM_RAM + - TSTORM_LL2_RX_PRODS_OFFSET(qid); + + DP_VERBOSE(p_hwfn, QED_MSG_LL2, + "Establishing ll2 queue. PF %d ctx_based=%d abs qid=%d\n", + p_hwfn->rel_pf_id, p_ll2_conn->input.rx_conn_type, qid); + + if (p_ll2_conn->input.rx_conn_type == QED_LL2_RX_TYPE_LEGACY) { + p_rx->set_prod_addr = p_hwfn->regview + + GTT_BAR0_MAP_REG_TSDM_RAM + TSTORM_LL2_RX_PRODS_OFFSET(qid); + } else { + /* QED_LL2_RX_TYPE_CTX - using doorbell */ + p_rx->ctx_based = 1; + + p_rx->set_prod_addr = p_hwfn->doorbells + + p_hwfn->dpi_start_offset + + DB_ADDR_SHIFT(DQ_PWM_OFFSET_TCM_LL2_PROD_UPDATE); + + /* prepare db data */ + p_rx->db_data.icid = cpu_to_le16((u16)p_ll2_conn->cid); + SET_FIELD(p_rx->db_data.params, + CORE_PWM_PROD_UPDATE_DATA_AGG_CMD, DB_AGG_CMD_SET); + SET_FIELD(p_rx->db_data.params, + CORE_PWM_PROD_UPDATE_DATA_RESERVED1, 0); + } + p_tx->doorbell_addr = (u8 __iomem *)p_hwfn->doorbells + qed_db_addr(p_ll2_conn->cid, DQ_DEMS_LEGACY); @@ -1556,7 +1652,6 @@ int qed_ll2_establish_connection(void *cxt, u8 connection_handle) DQ_XCM_CORE_TX_BD_PROD_CMD); p_tx->db_msg.agg_flags = DQ_XCM_CORE_DQ_CF_CMD; - rc = qed_ll2_establish_connection_rx(p_hwfn, p_ll2_conn); if (rc) goto out; @@ -1590,7 +1685,7 @@ static void qed_ll2_post_rx_buffer_notify_fw(struct qed_hwfn *p_hwfn, struct qed_ll2_rx_packet *p_curp) { struct qed_ll2_rx_packet *p_posting_packet = NULL; - struct core_ll2_rx_prod rx_prod = { 0, 0, 0 }; + struct core_ll2_rx_prod rx_prod = { 0, 0 }; bool b_notify_fw = false; u16 bd_prod, cq_prod; @@ -1615,13 +1710,27 @@ static void qed_ll2_post_rx_buffer_notify_fw(struct qed_hwfn *p_hwfn, bd_prod = qed_chain_get_prod_idx(&p_rx->rxq_chain); cq_prod = qed_chain_get_prod_idx(&p_rx->rcq_chain); - rx_prod.bd_prod = cpu_to_le16(bd_prod); - rx_prod.cqe_prod = cpu_to_le16(cq_prod); + if (p_rx->ctx_based) { + /* update producer by giving a doorbell */ + p_rx->db_data.prod.bd_prod = cpu_to_le16(bd_prod); + p_rx->db_data.prod.cqe_prod = cpu_to_le16(cq_prod); + /* Make sure chain element is updated before ringing the + * doorbell + */ + dma_wmb(); + DIRECT_REG_WR64(p_rx->set_prod_addr, + *((u64 *)&p_rx->db_data)); + } else { + rx_prod.bd_prod = cpu_to_le16(bd_prod); + rx_prod.cqe_prod = cpu_to_le16(cq_prod); - /* Make sure chain element is updated before ringing the doorbell */ - dma_wmb(); + /* 
Make sure chain element is updated before ringing the + * doorbell + */ + dma_wmb(); - DIRECT_REG_WR(p_rx->set_prod_addr, *((u32 *)&rx_prod)); + DIRECT_REG_WR(p_rx->set_prod_addr, *((u32 *)&rx_prod)); + } } int qed_ll2_post_rx_buffer(void *cxt, @@ -1965,6 +2074,12 @@ int qed_ll2_terminate_connection(void *cxt, u8 connection_handle) if (QED_LL2_RX_REGISTERED(p_ll2_conn)) { p_ll2_conn->rx_queue.b_cb_registered = false; smp_wmb(); /* Make sure this is seen by ll2_lb_rxq_completion */ + + if (p_ll2_conn->rx_queue.ctx_based) + qed_db_recovery_del(p_hwfn->cdev, + p_ll2_conn->rx_queue.set_prod_addr, + &p_ll2_conn->rx_queue.db_data); + rc = qed_sp_ll2_rx_queue_stop(p_hwfn, p_ll2_conn); if (rc) goto out; diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.h b/drivers/net/ethernet/qlogic/qed/qed_ll2.h index 5f01fbd3c073..288642d526b7 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_ll2.h +++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.h @@ -46,6 +46,18 @@ #include "qed_sp.h" #define QED_MAX_NUM_OF_LL2_CONNECTIONS (4) +/* LL2 queues handles will be split as follows: + * first will be legacy queues, and then the ctx based queues. + */ +#define QED_MAX_NUM_OF_LL2_CONNS_PF (4) +#define QED_MAX_NUM_OF_LEGACY_LL2_CONNS_PF (3) + +#define QED_MAX_NUM_OF_CTX_LL2_CONNS_PF \ + (QED_MAX_NUM_OF_LL2_CONNS_PF - QED_MAX_NUM_OF_LEGACY_LL2_CONNS_PF) + +#define QED_LL2_LEGACY_CONN_BASE_PF 0 +#define QED_LL2_CTX_CONN_BASE_PF QED_MAX_NUM_OF_LEGACY_LL2_CONNS_PF + struct qed_ll2_rx_packet { struct list_head list_entry; @@ -79,6 +91,7 @@ struct qed_ll2_rx_queue { struct qed_chain rxq_chain; struct qed_chain rcq_chain; u8 rx_sb_index; + u8 ctx_based; bool b_cb_registered; __le16 *p_fw_cons; struct list_head active_descq; @@ -86,6 +99,7 @@ struct qed_ll2_rx_queue { struct list_head posting_descq; struct qed_ll2_rx_packet *descq_array; void __iomem *set_prod_addr; + struct core_pwm_prod_update_data db_data; }; struct qed_ll2_tx_queue { diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c index 36ddb89856a8..aeac359941de 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c +++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c @@ -3261,9 +3261,12 @@ static enum resource_id_enum qed_mcp_get_mfw_res_id(enum qed_resources res_id) case QED_ILT: mfw_res_id = RESOURCE_ILT_E; break; - case QED_LL2_QUEUE: + case QED_LL2_RAM_QUEUE: mfw_res_id = RESOURCE_LL2_QUEUE_E; break; + case QED_LL2_CTX_QUEUE: + mfw_res_id = RESOURCE_LL2_CQS_E; + break; case QED_RDMA_CNQ_RAM: case QED_CMDQS_CQS: /* CNQ/CMDQS are the same resource */ diff --git a/include/linux/qed/common_hsi.h b/include/linux/qed/common_hsi.h index 3f437e826a4c..a2b7826b36f0 100644 --- a/include/linux/qed/common_hsi.h +++ b/include/linux/qed/common_hsi.h @@ -105,8 +105,15 @@ #define CORE_SPQE_PAGE_SIZE_BYTES 4096 -#define MAX_NUM_LL2_RX_QUEUES 48 -#define MAX_NUM_LL2_TX_STATS_COUNTERS 48 +/* Number of LL2 RAM based queues */ +#define MAX_NUM_LL2_RX_RAM_QUEUES 32 + +/* Number of LL2 context based queues */ +#define MAX_NUM_LL2_RX_CTX_QUEUES 208 +#define MAX_NUM_LL2_RX_QUEUES \ + (MAX_NUM_LL2_RX_RAM_QUEUES + MAX_NUM_LL2_RX_CTX_QUEUES) + +#define MAX_NUM_LL2_TX_STATS_COUNTERS 48 #define FW_MAJOR_VERSION 8 #define FW_MINOR_VERSION 42 @@ -340,6 +347,10 @@ #define DQ_PWM_OFFSET_TCM_ROCE_RQ_PROD (DQ_PWM_OFFSET_TCM16_BASE + 1) #define DQ_PWM_OFFSET_TCM_IWARP_RQ_PROD (DQ_PWM_OFFSET_TCM16_BASE + 3) +/* DQ_DEMS_AGG_VAL_BASE */ +#define DQ_PWM_OFFSET_TCM_LL2_PROD_UPDATE \ + (DQ_PWM_OFFSET_TCM32_BASE + DQ_TCM_AGG_VAL_SEL_REG9 - 4) + #define 
DQ_REGION_SHIFT (12) /* DPM */ diff --git a/include/linux/qed/qed_if.h b/include/linux/qed/qed_if.h index b5db1ee96d78..9bcb2f419004 100644 --- a/include/linux/qed/qed_if.h +++ b/include/linux/qed/qed_if.h @@ -463,7 +463,7 @@ enum qed_db_rec_space { #define DIRECT_REG_RD(reg_addr) readl((void __iomem *)(reg_addr)) -#define DIRECT_REG_WR64(reg_addr, val) writeq((u32)val, \ +#define DIRECT_REG_WR64(reg_addr, val) writeq((u64)val, \ (void __iomem *)(reg_addr)) #define QED_COALESCE_MAX 0x1FF @@ -1177,6 +1177,8 @@ struct qed_common_ops { #define GET_FIELD(value, name) \ (((value) >> (name ## _SHIFT)) & name ## _MASK) +#define DB_ADDR_SHIFT(addr) ((addr) << DB_PWM_ADDR_OFFSET_SHIFT) + /* Debug print definitions */ #define DP_ERR(cdev, fmt, ...) \ do { \ diff --git a/include/linux/qed/qed_ll2_if.h b/include/linux/qed/qed_ll2_if.h index 5eb022953aca..1313c34d9a68 100644 --- a/include/linux/qed/qed_ll2_if.h +++ b/include/linux/qed/qed_ll2_if.h @@ -52,6 +52,12 @@ enum qed_ll2_conn_type { QED_LL2_TYPE_ROCE, QED_LL2_TYPE_IWARP, QED_LL2_TYPE_RESERVED3, + MAX_QED_LL2_CONN_TYPE +}; + +enum qed_ll2_rx_conn_type { + QED_LL2_RX_TYPE_LEGACY, + QED_LL2_RX_TYPE_CTX, MAX_QED_LL2_RX_CONN_TYPE }; @@ -165,6 +171,7 @@ struct qed_ll2_cbs { }; struct qed_ll2_acquire_data_inputs { + enum qed_ll2_rx_conn_type rx_conn_type; enum qed_ll2_conn_type conn_type; u16 mtu; u16 rx_num_desc; -- cgit v1.2.3-71-gd317 From 1392d19ff1d6ddd370cefa73b552a0262f9c35ea Mon Sep 17 00:00:00 2001 From: Michal Kalderon Date: Mon, 27 Jan 2020 15:26:13 +0200 Subject: qed: Add abstraction for different hsi values per chip The number of BTB blocks was modified to be different between the two chip flavors supported (BB/K2); as a result, this led to a rewrite of how the default hsi value is selected based on the chip. This patch creates a lookup table for hsi values per chip rather than asking again and again for every value. Signed-off-by: Ariel Elior Signed-off-by: Michal Kalderon Signed-off-by: David S. Miller --- drivers/net/ethernet/qlogic/qed/qed.h | 56 +++++++++++++++++++++++----- drivers/net/ethernet/qlogic/qed/qed_dev.c | 62 +++++++++++++++++++++---------- include/linux/qed/common_hsi.h | 4 +- 3 files changed, 90 insertions(+), 32 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h index cfa45fb9a5a4..cfbfe7441ecb 100644 --- a/drivers/net/ethernet/qlogic/qed/qed.h +++ b/drivers/net/ethernet/qlogic/qed/qed.h @@ -532,6 +532,23 @@ struct qed_nvm_image_info { bool valid; }; +enum qed_hsi_def_type { + QED_HSI_DEF_MAX_NUM_VFS, + QED_HSI_DEF_MAX_NUM_L2_QUEUES, + QED_HSI_DEF_MAX_NUM_PORTS, + QED_HSI_DEF_MAX_SB_PER_PATH, + QED_HSI_DEF_MAX_NUM_PFS, + QED_HSI_DEF_MAX_NUM_VPORTS, + QED_HSI_DEF_NUM_ETH_RSS_ENGINE, + QED_HSI_DEF_MAX_QM_TX_QUEUES, + QED_HSI_DEF_NUM_PXP_ILT_RECORDS, + QED_HSI_DEF_NUM_RDMA_STATISTIC_COUNTERS, + QED_HSI_DEF_MAX_QM_GLOBAL_RLS, + QED_HSI_DEF_MAX_PBF_CMD_LINES, + QED_HSI_DEF_MAX_BTB_BLOCKS, + QED_NUM_HSI_DEFS +}; + #define DRV_MODULE_VERSION \ __stringify(QED_MAJOR_VERSION) "." \ __stringify(QED_MINOR_VERSION) "." \ @@ -869,16 +886,35 @@ struct qed_dev { bool iwarp_cmt; }; -#define NUM_OF_VFS(dev) (QED_IS_BB(dev) ? MAX_NUM_VFS_BB \ - : MAX_NUM_VFS_K2) -#define NUM_OF_L2_QUEUES(dev) (QED_IS_BB(dev) ? MAX_NUM_L2_QUEUES_BB \ - : MAX_NUM_L2_QUEUES_K2) -#define NUM_OF_PORTS(dev) (QED_IS_BB(dev) ? MAX_NUM_PORTS_BB \ - : MAX_NUM_PORTS_K2) -#define NUM_OF_SBS(dev) (QED_IS_BB(dev) ? 
MAX_SB_PER_PATH_BB \ - : MAX_SB_PER_PATH_K2) -#define NUM_OF_ENG_PFS(dev) (QED_IS_BB(dev) ? MAX_NUM_PFS_BB \ - : MAX_NUM_PFS_K2) +u32 qed_get_hsi_def_val(struct qed_dev *cdev, enum qed_hsi_def_type type); + +#define NUM_OF_VFS(dev) \ + qed_get_hsi_def_val(dev, QED_HSI_DEF_MAX_NUM_VFS) +#define NUM_OF_L2_QUEUES(dev) \ + qed_get_hsi_def_val(dev, QED_HSI_DEF_MAX_NUM_L2_QUEUES) +#define NUM_OF_PORTS(dev) \ + qed_get_hsi_def_val(dev, QED_HSI_DEF_MAX_NUM_PORTS) +#define NUM_OF_SBS(dev) \ + qed_get_hsi_def_val(dev, QED_HSI_DEF_MAX_SB_PER_PATH) +#define NUM_OF_ENG_PFS(dev) \ + qed_get_hsi_def_val(dev, QED_HSI_DEF_MAX_NUM_PFS) +#define NUM_OF_VPORTS(dev) \ + qed_get_hsi_def_val(dev, QED_HSI_DEF_MAX_NUM_VPORTS) +#define NUM_OF_RSS_ENGINES(dev) \ + qed_get_hsi_def_val(dev, QED_HSI_DEF_NUM_ETH_RSS_ENGINE) +#define NUM_OF_QM_TX_QUEUES(dev) \ + qed_get_hsi_def_val(dev, QED_HSI_DEF_MAX_QM_TX_QUEUES) +#define NUM_OF_PXP_ILT_RECORDS(dev) \ + qed_get_hsi_def_val(dev, QED_HSI_DEF_NUM_PXP_ILT_RECORDS) +#define NUM_OF_RDMA_STATISTIC_COUNTERS(dev) \ + qed_get_hsi_def_val(dev, QED_HSI_DEF_NUM_RDMA_STATISTIC_COUNTERS) +#define NUM_OF_QM_GLOBAL_RLS(dev) \ + qed_get_hsi_def_val(dev, QED_HSI_DEF_MAX_QM_GLOBAL_RLS) +#define NUM_OF_PBF_CMD_LINES(dev) \ + qed_get_hsi_def_val(dev, QED_HSI_DEF_MAX_PBF_CMD_LINES) +#define NUM_OF_BTB_BLOCKS(dev) \ + qed_get_hsi_def_val(dev, QED_HSI_DEF_MAX_BTB_BLOCKS) + /** * @brief qed_concrete_to_sw_fid - get the sw function id from diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c index 898c1f8d1530..e3e0376c13d6 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_dev.c +++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c @@ -1579,6 +1579,7 @@ static void qed_init_qm_port_params(struct qed_hwfn *p_hwfn) { /* Initialize qm port parameters */ u8 i, active_phys_tcs, num_ports = p_hwfn->cdev->num_ports_in_engine; + struct qed_dev *cdev = p_hwfn->cdev; /* indicate how ooo and high pri traffic is dealt with */ active_phys_tcs = num_ports == MAX_NUM_PORTS_K2 ? @@ -1588,11 +1589,13 @@ static void qed_init_qm_port_params(struct qed_hwfn *p_hwfn) for (i = 0; i < num_ports; i++) { struct init_qm_port_params *p_qm_port = &p_hwfn->qm_info.qm_port_params[i]; + u16 pbf_max_cmd_lines; p_qm_port->active = 1; p_qm_port->active_phys_tcs = active_phys_tcs; - p_qm_port->num_pbf_cmd_lines = PBF_MAX_CMD_LINES / num_ports; - p_qm_port->num_btb_blocks = BTB_MAX_BLOCKS / num_ports; + pbf_max_cmd_lines = (u16)NUM_OF_PBF_CMD_LINES(cdev); + p_qm_port->num_pbf_cmd_lines = pbf_max_cmd_lines / num_ports; + p_qm_port->num_btb_blocks = NUM_OF_BTB_BLOCKS(cdev) / num_ports; } } @@ -3607,14 +3610,39 @@ __qed_hw_set_soft_resc_size(struct qed_hwfn *p_hwfn, return 0; } +static u32 qed_hsi_def_val[][MAX_CHIP_IDS] = { + {MAX_NUM_VFS_BB, MAX_NUM_VFS_K2}, + {MAX_NUM_L2_QUEUES_BB, MAX_NUM_L2_QUEUES_K2}, + {MAX_NUM_PORTS_BB, MAX_NUM_PORTS_K2}, + {MAX_SB_PER_PATH_BB, MAX_SB_PER_PATH_K2,}, + {MAX_NUM_PFS_BB, MAX_NUM_PFS_K2}, + {MAX_NUM_VPORTS_BB, MAX_NUM_VPORTS_K2}, + {ETH_RSS_ENGINE_NUM_BB, ETH_RSS_ENGINE_NUM_K2}, + {MAX_QM_TX_QUEUES_BB, MAX_QM_TX_QUEUES_K2}, + {PXP_NUM_ILT_RECORDS_BB, PXP_NUM_ILT_RECORDS_K2}, + {RDMA_NUM_STATISTIC_COUNTERS_BB, RDMA_NUM_STATISTIC_COUNTERS_K2}, + {MAX_QM_GLOBAL_RLS, MAX_QM_GLOBAL_RLS}, + {PBF_MAX_CMD_LINES, PBF_MAX_CMD_LINES}, + {BTB_MAX_BLOCKS_BB, BTB_MAX_BLOCKS_K2}, +}; + +u32 qed_get_hsi_def_val(struct qed_dev *cdev, enum qed_hsi_def_type type) +{ + enum chip_ids chip_id = QED_IS_BB(cdev) ? 
CHIP_BB : CHIP_K2; + + if (type >= QED_NUM_HSI_DEFS) { + DP_ERR(cdev, "Unexpected HSI definition type [%d]\n", type); + return 0; + } + + return qed_hsi_def_val[type][chip_id]; +} static int qed_hw_set_soft_resc_size(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) { - bool b_ah = QED_IS_AH(p_hwfn->cdev); u32 resc_max_val, mcp_resp; u8 res_id; int rc; - for (res_id = 0; res_id < QED_MAX_RESC; res_id++) { switch (res_id) { case QED_LL2_RAM_QUEUE: @@ -3630,8 +3658,8 @@ qed_hw_set_soft_resc_size(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) resc_max_val = NUM_OF_GLOBAL_QUEUES; break; case QED_RDMA_STATS_QUEUE: - resc_max_val = b_ah ? RDMA_NUM_STATISTIC_COUNTERS_K2 - : RDMA_NUM_STATISTIC_COUNTERS_BB; + resc_max_val = + NUM_OF_RDMA_STATISTIC_COUNTERS(p_hwfn->cdev); break; case QED_BDQ: resc_max_val = BDQ_NUM_RESOURCES; @@ -3664,28 +3692,24 @@ int qed_hw_get_dflt_resc(struct qed_hwfn *p_hwfn, u32 *p_resc_num, u32 *p_resc_start) { u8 num_funcs = p_hwfn->num_funcs_on_engine; - bool b_ah = QED_IS_AH(p_hwfn->cdev); + struct qed_dev *cdev = p_hwfn->cdev; switch (res_id) { case QED_L2_QUEUE: - *p_resc_num = (b_ah ? MAX_NUM_L2_QUEUES_K2 : - MAX_NUM_L2_QUEUES_BB) / num_funcs; + *p_resc_num = NUM_OF_L2_QUEUES(cdev) / num_funcs; break; case QED_VPORT: - *p_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 : - MAX_NUM_VPORTS_BB) / num_funcs; + *p_resc_num = NUM_OF_VPORTS(cdev) / num_funcs; break; case QED_RSS_ENG: - *p_resc_num = (b_ah ? ETH_RSS_ENGINE_NUM_K2 : - ETH_RSS_ENGINE_NUM_BB) / num_funcs; + *p_resc_num = NUM_OF_RSS_ENGINES(cdev) / num_funcs; break; case QED_PQ: - *p_resc_num = (b_ah ? MAX_QM_TX_QUEUES_K2 : - MAX_QM_TX_QUEUES_BB) / num_funcs; + *p_resc_num = NUM_OF_QM_TX_QUEUES(cdev) / num_funcs; *p_resc_num &= ~0x7; /* The granularity of the PQs is 8 */ break; case QED_RL: - *p_resc_num = MAX_QM_GLOBAL_RLS / num_funcs; + *p_resc_num = NUM_OF_QM_GLOBAL_RLS(cdev) / num_funcs; break; case QED_MAC: case QED_VLAN: @@ -3693,8 +3717,7 @@ int qed_hw_get_dflt_resc(struct qed_hwfn *p_hwfn, *p_resc_num = ETH_NUM_MAC_FILTERS / num_funcs; break; case QED_ILT: - *p_resc_num = (b_ah ? PXP_NUM_ILT_RECORDS_K2 : - PXP_NUM_ILT_RECORDS_BB) / num_funcs; + *p_resc_num = NUM_OF_PXP_ILT_RECORDS(cdev) / num_funcs; break; case QED_LL2_RAM_QUEUE: *p_resc_num = MAX_NUM_LL2_RX_RAM_QUEUES / num_funcs; @@ -3708,8 +3731,7 @@ int qed_hw_get_dflt_resc(struct qed_hwfn *p_hwfn, *p_resc_num = NUM_OF_GLOBAL_QUEUES / num_funcs; break; case QED_RDMA_STATS_QUEUE: - *p_resc_num = (b_ah ? RDMA_NUM_STATISTIC_COUNTERS_K2 : - RDMA_NUM_STATISTIC_COUNTERS_BB) / num_funcs; + *p_resc_num = NUM_OF_RDMA_STATISTIC_COUNTERS(cdev) / num_funcs; break; case QED_BDQ: if (p_hwfn->hw_info.personality != QED_PCI_ISCSI && diff --git a/include/linux/qed/common_hsi.h b/include/linux/qed/common_hsi.h index a2b7826b36f0..718ce72e5965 100644 --- a/include/linux/qed/common_hsi.h +++ b/include/linux/qed/common_hsi.h @@ -663,8 +663,8 @@ #define PBF_MAX_CMD_LINES 3328 /* Number of BTB blocks. Each block is 256B. 
*/ -#define BTB_MAX_BLOCKS 1440 - +#define BTB_MAX_BLOCKS_BB 1440 +#define BTB_MAX_BLOCKS_K2 1840 /*****************/ /* PRS CONSTANTS */ /*****************/ -- cgit v1.2.3-71-gd317 From 6459d93619b5bc21f775e7eb12bc4d051743d7aa Mon Sep 17 00:00:00 2001 From: Michal Kalderon Date: Mon, 27 Jan 2020 15:26:14 +0200 Subject: qed: FW 8.42.2.0 iscsi/fcoe changes - Remove struct iscsi_slow_path_hdr and field fw_cid from several structs - Remove struct iscsi_spe_func_dstry - Remove fields pbe_page_size_log and pbl_page_size_log from struct iscsi_conn_offload_param Signed-off-by: Manish Rangankar Signed-off-by: Saurav Kashyap Signed-off-by: Ariel Elior Signed-off-by: Michal Kalderon Signed-off-by: David S. Miller --- drivers/net/ethernet/qlogic/qed/qed_fcoe.c | 2 + drivers/net/ethernet/qlogic/qed/qed_hsi.h | 8 ++-- drivers/net/ethernet/qlogic/qed/qed_iscsi.c | 31 -------------- drivers/net/ethernet/qlogic/qed/qed_sp.h | 2 - include/linux/qed/iscsi_common.h | 64 +++++++++++------------------ include/linux/qed/storage_common.h | 3 +- 6 files changed, 32 insertions(+), 78 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/qlogic/qed/qed_fcoe.c b/drivers/net/ethernet/qlogic/qed/qed_fcoe.c index de31a382f58e..4c7fa391fd33 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_fcoe.c +++ b/drivers/net/ethernet/qlogic/qed/qed_fcoe.c @@ -167,6 +167,8 @@ qed_sp_fcoe_func_start(struct qed_hwfn *p_hwfn, goto err; } p_cxt = cxt_info.p_cxt; + memset(p_cxt, 0, sizeof(*p_cxt)); + SET_FIELD(p_cxt->tstorm_ag_context.flags3, E4_TSTORM_FCOE_CONN_AG_CTX_DUMMY_TIMER_CF_EN, 1); diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h index f0e5195cae01..0c3d714e6518 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h +++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h @@ -11496,8 +11496,8 @@ struct e4_tstorm_iscsi_conn_ag_ctx { u8 flags3; #define E4_TSTORM_ISCSI_CONN_AG_CTX_FLUSH_Q0_MASK 0x3 #define E4_TSTORM_ISCSI_CONN_AG_CTX_FLUSH_Q0_SHIFT 0 -#define E4_TSTORM_ISCSI_CONN_AG_CTX_CF10_MASK 0x3 -#define E4_TSTORM_ISCSI_CONN_AG_CTX_CF10_SHIFT 2 +#define E4_TSTORM_ISCSI_CONN_AG_CTX_FLUSH_OOO_ISLES_CF_MASK 0x3 +#define E4_TSTORM_ISCSI_CONN_AG_CTX_FLUSH_OOO_ISLES_CF_SHIFT 2 #define E4_TSTORM_ISCSI_CONN_AG_CTX_CF0EN_MASK 0x1 #define E4_TSTORM_ISCSI_CONN_AG_CTX_CF0EN_SHIFT 4 #define E4_TSTORM_ISCSI_CONN_AG_CTX_P2T_FLUSH_CF_EN_MASK 0x1 @@ -11519,8 +11519,8 @@ struct e4_tstorm_iscsi_conn_ag_ctx { #define E4_TSTORM_ISCSI_CONN_AG_CTX_CF8EN_SHIFT 4 #define E4_TSTORM_ISCSI_CONN_AG_CTX_FLUSH_Q0_EN_MASK 0x1 #define E4_TSTORM_ISCSI_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT 5 -#define E4_TSTORM_ISCSI_CONN_AG_CTX_CF10EN_MASK 0x1 -#define E4_TSTORM_ISCSI_CONN_AG_CTX_CF10EN_SHIFT 6 +#define E4_TSTORM_ISCSI_CONN_AG_CTX_FLUSH_OOO_ISLES_CF_EN_MASK 0x1 +#define E4_TSTORM_ISCSI_CONN_AG_CTX_FLUSH_OOO_ISLES_CF_EN_SHIFT 6 #define E4_TSTORM_ISCSI_CONN_AG_CTX_RULE0EN_MASK 0x1 #define E4_TSTORM_ISCSI_CONN_AG_CTX_RULE0EN_SHIFT 7 u8 flags5; diff --git a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c index 8ee668d66b4b..7245a615517a 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c +++ b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c @@ -204,10 +204,6 @@ qed_sp_iscsi_func_start(struct qed_hwfn *p_hwfn, return -EINVAL; } - SET_FIELD(p_init->hdr.flags, - ISCSI_SLOW_PATH_HDR_LAYER_CODE, ISCSI_SLOW_PATH_LAYER_CODE); - p_init->hdr.op_code = ISCSI_RAMROD_CMD_ID_INIT_FUNC; - val = p_params->half_way_close_timeout; p_init->half_way_close_timeout = cpu_to_le16(val); 
p_init->num_sq_pages_in_ring = p_params->num_sq_pages_in_ring; @@ -332,12 +328,7 @@ static int qed_sp_iscsi_conn_offload(struct qed_hwfn *p_hwfn, p_conn->physical_q1 = cpu_to_le16(physical_q); p_ramrod->iscsi.physical_q1 = cpu_to_le16(physical_q); - p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_OFFLOAD_CONN; - SET_FIELD(p_ramrod->hdr.flags, ISCSI_SLOW_PATH_HDR_LAYER_CODE, - p_conn->layer_code); - p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id); - p_ramrod->fw_cid = cpu_to_le32(p_conn->icid); DMA_REGPAIR_LE(p_ramrod->iscsi.sq_pbl_addr, p_conn->sq_pbl_addr); @@ -493,12 +484,8 @@ static int qed_sp_iscsi_conn_update(struct qed_hwfn *p_hwfn, return rc; p_ramrod = &p_ent->ramrod.iscsi_conn_update; - p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_UPDATE_CONN; - SET_FIELD(p_ramrod->hdr.flags, - ISCSI_SLOW_PATH_HDR_LAYER_CODE, p_conn->layer_code); p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id); - p_ramrod->fw_cid = cpu_to_le32(p_conn->icid); p_ramrod->flags = p_conn->update_flag; p_ramrod->max_seq_size = cpu_to_le32(p_conn->max_seq_size); dval = p_conn->max_recv_pdu_length; @@ -538,12 +525,8 @@ qed_sp_iscsi_mac_update(struct qed_hwfn *p_hwfn, return rc; p_ramrod = &p_ent->ramrod.iscsi_conn_mac_update; - p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_MAC_UPDATE; - SET_FIELD(p_ramrod->hdr.flags, - ISCSI_SLOW_PATH_HDR_LAYER_CODE, p_conn->layer_code); p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id); - p_ramrod->fw_cid = cpu_to_le32(p_conn->icid); ucval = p_conn->remote_mac[1]; ((u8 *)(&p_ramrod->remote_mac_addr_hi))[0] = ucval; ucval = p_conn->remote_mac[0]; @@ -584,12 +567,8 @@ static int qed_sp_iscsi_conn_terminate(struct qed_hwfn *p_hwfn, return rc; p_ramrod = &p_ent->ramrod.iscsi_conn_terminate; - p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_TERMINATION_CONN; - SET_FIELD(p_ramrod->hdr.flags, - ISCSI_SLOW_PATH_HDR_LAYER_CODE, p_conn->layer_code); p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id); - p_ramrod->fw_cid = cpu_to_le32(p_conn->icid); p_ramrod->abortive = p_conn->abortive_dsconnect; DMA_REGPAIR_LE(p_ramrod->query_params_addr, @@ -604,7 +583,6 @@ static int qed_sp_iscsi_conn_clear_sq(struct qed_hwfn *p_hwfn, enum spq_mode comp_mode, struct qed_spq_comp_cb *p_comp_addr) { - struct iscsi_slow_path_hdr *p_ramrod = NULL; struct qed_spq_entry *p_ent = NULL; struct qed_sp_init_data init_data; int rc = -EINVAL; @@ -622,11 +600,6 @@ static int qed_sp_iscsi_conn_clear_sq(struct qed_hwfn *p_hwfn, if (rc) return rc; - p_ramrod = &p_ent->ramrod.iscsi_empty; - p_ramrod->op_code = ISCSI_RAMROD_CMD_ID_CLEAR_SQ; - SET_FIELD(p_ramrod->flags, - ISCSI_SLOW_PATH_HDR_LAYER_CODE, p_conn->layer_code); - return qed_spq_post(p_hwfn, p_ent, NULL); } @@ -634,7 +607,6 @@ static int qed_sp_iscsi_func_stop(struct qed_hwfn *p_hwfn, enum spq_mode comp_mode, struct qed_spq_comp_cb *p_comp_addr) { - struct iscsi_spe_func_dstry *p_ramrod = NULL; struct qed_spq_entry *p_ent = NULL; struct qed_sp_init_data init_data; int rc = 0; @@ -652,9 +624,6 @@ static int qed_sp_iscsi_func_stop(struct qed_hwfn *p_hwfn, if (rc) return rc; - p_ramrod = &p_ent->ramrod.iscsi_destroy; - p_ramrod->hdr.op_code = ISCSI_RAMROD_CMD_ID_DESTROY_FUNC; - rc = qed_spq_post(p_hwfn, p_ent, NULL); qed_spq_unregister_async_cb(p_hwfn, PROTOCOLID_ISCSI); diff --git a/drivers/net/ethernet/qlogic/qed/qed_sp.h b/drivers/net/ethernet/qlogic/qed/qed_sp.h index 96ab77ae6af5..b7b4fbbbccfe 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_sp.h +++ b/drivers/net/ethernet/qlogic/qed/qed_sp.h @@ -120,9 +120,7 @@ union ramrod_data { struct fcoe_conn_terminate_ramrod_params 
fcoe_conn_terminate; struct fcoe_stat_ramrod_params fcoe_stat; - struct iscsi_slow_path_hdr iscsi_empty; struct iscsi_init_ramrod_params iscsi_init; - struct iscsi_spe_func_dstry iscsi_destroy; struct iscsi_spe_conn_offload iscsi_conn_offload; struct iscsi_conn_update_ramrod_params iscsi_conn_update; struct iscsi_spe_conn_mac_update iscsi_conn_mac_update; diff --git a/include/linux/qed/iscsi_common.h b/include/linux/qed/iscsi_common.h index 66aba505ec56..2f0a771a9176 100644 --- a/include/linux/qed/iscsi_common.h +++ b/include/linux/qed/iscsi_common.h @@ -999,7 +999,6 @@ struct iscsi_conn_offload_params { struct regpair r2tq_pbl_addr; struct regpair xhq_pbl_addr; struct regpair uhq_pbl_addr; - __le32 initial_ack; __le16 physical_q0; __le16 physical_q1; u8 flags; @@ -1011,10 +1010,10 @@ struct iscsi_conn_offload_params { #define ISCSI_CONN_OFFLOAD_PARAMS_RESTRICTED_MODE_SHIFT 2 #define ISCSI_CONN_OFFLOAD_PARAMS_RESERVED1_MASK 0x1F #define ISCSI_CONN_OFFLOAD_PARAMS_RESERVED1_SHIFT 3 - u8 pbl_page_size_log; - u8 pbe_page_size_log; u8 default_cq; + __le16 reserved0; __le32 stat_sn; + __le32 initial_ack; }; /* iSCSI connection statistics */ @@ -1029,25 +1028,14 @@ struct iscsi_conn_stats_params { __le32 reserved; }; -/* spe message header */ -struct iscsi_slow_path_hdr { - u8 op_code; - u8 flags; -#define ISCSI_SLOW_PATH_HDR_RESERVED0_MASK 0xF -#define ISCSI_SLOW_PATH_HDR_RESERVED0_SHIFT 0 -#define ISCSI_SLOW_PATH_HDR_LAYER_CODE_MASK 0x7 -#define ISCSI_SLOW_PATH_HDR_LAYER_CODE_SHIFT 4 -#define ISCSI_SLOW_PATH_HDR_RESERVED1_MASK 0x1 -#define ISCSI_SLOW_PATH_HDR_RESERVED1_SHIFT 7 -}; /* iSCSI connection update params passed by driver to FW in ISCSI update *ramrod. */ struct iscsi_conn_update_ramrod_params { - struct iscsi_slow_path_hdr hdr; + __le16 reserved0; __le16 conn_id; - __le32 fw_cid; + __le32 reserved1; u8 flags; #define ISCSI_CONN_UPDATE_RAMROD_PARAMS_HD_EN_MASK 0x1 #define ISCSI_CONN_UPDATE_RAMROD_PARAMS_HD_EN_SHIFT 0 @@ -1065,7 +1053,7 @@ struct iscsi_conn_update_ramrod_params { #define ISCSI_CONN_UPDATE_RAMROD_PARAMS_DIF_ON_IMM_EN_SHIFT 6 #define ISCSI_CONN_UPDATE_RAMROD_PARAMS_LUN_MAPPER_EN_MASK 0x1 #define ISCSI_CONN_UPDATE_RAMROD_PARAMS_LUN_MAPPER_EN_SHIFT 7 - u8 reserved0[3]; + u8 reserved3[3]; __le32 max_seq_size; __le32 max_send_pdu_length; __le32 max_recv_pdu_length; @@ -1251,22 +1239,22 @@ enum iscsi_ramrod_cmd_id { /* iSCSI connection termination request */ struct iscsi_spe_conn_mac_update { - struct iscsi_slow_path_hdr hdr; + __le16 reserved0; __le16 conn_id; - __le32 fw_cid; + __le32 reserved1; __le16 remote_mac_addr_lo; __le16 remote_mac_addr_mid; __le16 remote_mac_addr_hi; - u8 reserved0[2]; + u8 reserved2[2]; }; /* iSCSI and TCP connection (Option 1) offload params passed by driver to FW in * iSCSI offload ramrod. */ struct iscsi_spe_conn_offload { - struct iscsi_slow_path_hdr hdr; + __le16 reserved0; __le16 conn_id; - __le32 fw_cid; + __le32 reserved1; struct iscsi_conn_offload_params iscsi; struct tcp_offload_params tcp; }; @@ -1275,44 +1263,36 @@ struct iscsi_spe_conn_offload { * iSCSI offload ramrod. 
*/ struct iscsi_spe_conn_offload_option2 { - struct iscsi_slow_path_hdr hdr; + __le16 reserved0; __le16 conn_id; - __le32 fw_cid; + __le32 reserved1; struct iscsi_conn_offload_params iscsi; struct tcp_offload_params_opt2 tcp; }; /* iSCSI collect connection statistics request */ struct iscsi_spe_conn_statistics { - struct iscsi_slow_path_hdr hdr; + __le16 reserved0; __le16 conn_id; - __le32 fw_cid; + __le32 reserved1; u8 reset_stats; - u8 reserved0[7]; + u8 reserved2[7]; struct regpair stats_cnts_addr; }; /* iSCSI connection termination request */ struct iscsi_spe_conn_termination { - struct iscsi_slow_path_hdr hdr; + __le16 reserved0; __le16 conn_id; - __le32 fw_cid; + __le32 reserved1; u8 abortive; - u8 reserved0[7]; + u8 reserved2[7]; struct regpair queue_cnts_addr; struct regpair query_params_addr; }; -/* iSCSI firmware function destroy parameters */ -struct iscsi_spe_func_dstry { - struct iscsi_slow_path_hdr hdr; - __le16 reserved0; - __le32 reserved1; -}; - /* iSCSI firmware function init parameters */ struct iscsi_spe_func_init { - struct iscsi_slow_path_hdr hdr; __le16 half_way_close_timeout; u8 num_sq_pages_in_ring; u8 num_r2tq_pages_in_ring; @@ -1324,8 +1304,12 @@ struct iscsi_spe_func_init { #define ISCSI_SPE_FUNC_INIT_RESERVED0_MASK 0x7F #define ISCSI_SPE_FUNC_INIT_RESERVED0_SHIFT 1 struct iscsi_debug_modes debug_mode; - __le16 reserved1; - __le32 reserved2; + u8 params; +#define ISCSI_SPE_FUNC_INIT_MAX_SYN_RT_MASK 0xF +#define ISCSI_SPE_FUNC_INIT_MAX_SYN_RT_SHIFT 0 +#define ISCSI_SPE_FUNC_INIT_RESERVED1_MASK 0xF +#define ISCSI_SPE_FUNC_INIT_RESERVED1_SHIFT 4 + u8 reserved2[7]; struct scsi_init_func_params func_params; struct scsi_init_func_queues q_params; }; diff --git a/include/linux/qed/storage_common.h b/include/linux/qed/storage_common.h index 505c0b48a761..9a973ffbbff5 100644 --- a/include/linux/qed/storage_common.h +++ b/include/linux/qed/storage_common.h @@ -107,8 +107,9 @@ struct scsi_drv_cmdq { struct scsi_init_func_params { __le16 num_tasks; u8 log_page_size; + u8 log_page_size_conn; u8 debug_mode; - u8 reserved2[12]; + u8 reserved2[11]; }; /* SCSI RQ/CQ/CMDQ firmware function init parameters */ -- cgit v1.2.3-71-gd317 From 0500a70d6e071040ffdaadebb966986afa83c5e9 Mon Sep 17 00:00:00 2001 From: Michal Kalderon Date: Mon, 27 Jan 2020 15:26:15 +0200 Subject: qed: FW 8.42.2.0 HSI changes This patch contains several HSI changes. The changes are part of features like RDMA VF and OVS; the patch also contains a fix to how the init code determines whether the DMAE is ready to be used. Signed-off-by: Ariel Elior Signed-off-by: Michal Kalderon Signed-off-by: David S.
Miller --- drivers/net/ethernet/qlogic/qed/qed_hsi.h | 317 ++++++++++++++++--------- drivers/net/ethernet/qlogic/qed/qed_init_ops.c | 6 +- drivers/net/ethernet/qlogic/qed/qed_mcp.c | 2 + drivers/net/ethernet/qlogic/qed/qed_roce.c | 2 +- drivers/net/ethernet/qlogic/qede/qede_fp.c | 8 +- include/linux/qed/common_hsi.h | 21 +- include/linux/qed/eth_common.h | 78 ++++-- 7 files changed, 280 insertions(+), 154 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h index 0c3d714e6518..06b94508115a 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h +++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h @@ -250,7 +250,8 @@ struct core_rx_gsi_offload_cqe { __le16 src_mac_addrlo; __le16 qp_id; __le32 src_qp; - __le32 reserved[3]; + struct core_rx_cqe_opaque_data opaque_data; + __le32 reserved; }; /* Core RX CQE for Light L2 */ @@ -405,7 +406,7 @@ struct ystorm_core_conn_st_ctx { /* The core storm context for the Pstorm */ struct pstorm_core_conn_st_ctx { - __le32 reserved[4]; + __le32 reserved[20]; }; /* Core Slowpath Connection storm context of Xstorm */ @@ -856,12 +857,12 @@ struct e4_ustorm_core_conn_ag_ctx { /* The core storm context for the Mstorm */ struct mstorm_core_conn_st_ctx { - __le32 reserved[24]; + __le32 reserved[40]; }; /* The core storm context for the Ustorm */ struct ustorm_core_conn_st_ctx { - __le32 reserved[4]; + __le32 reserved[20]; }; /* The core storm context for the Tstorm */ @@ -915,12 +916,21 @@ struct eth_pstorm_per_pf_stat { struct regpair sent_gre_bytes; struct regpair sent_vxlan_bytes; struct regpair sent_geneve_bytes; + struct regpair sent_mpls_bytes; + struct regpair sent_gre_mpls_bytes; + struct regpair sent_udp_mpls_bytes; struct regpair sent_gre_pkts; struct regpair sent_vxlan_pkts; struct regpair sent_geneve_pkts; + struct regpair sent_mpls_pkts; + struct regpair sent_gre_mpls_pkts; + struct regpair sent_udp_mpls_pkts; struct regpair gre_drop_pkts; struct regpair vxlan_drop_pkts; struct regpair geneve_drop_pkts; + struct regpair mpls_drop_pkts; + struct regpair gre_mpls_drop_pkts; + struct regpair udp_mpls_drop_pkts; }; /* Ethernet TX Per Queue Stats */ @@ -1010,7 +1020,8 @@ union event_ring_data { struct event_ring_entry { u8 protocol_id; u8 opcode; - __le16 reserved0; + u8 reserved0; + u8 vf_id; __le16 echo; u8 fw_return_code; u8 flags; @@ -1088,7 +1099,20 @@ enum malicious_vf_error_id { ETH_CONTROL_PACKET_VIOLATION, ETH_ANTI_SPOOFING_ERR, ETH_PACKET_SIZE_TOO_LARGE, - MAX_MALICIOUS_VF_ERROR_ID + CORE_ILLEGAL_VLAN_MODE, + CORE_ILLEGAL_NBDS, + CORE_FIRST_BD_WO_SOP, + CORE_INSUFFICIENT_BDS, + CORE_PACKET_TOO_SMALL, + CORE_ILLEGAL_INBAND_TAGS, + CORE_VLAN_INSERT_AND_INBAND_VLAN, + CORE_MTU_VIOLATION, + CORE_CONTROL_PACKET_VIOLATION, + CORE_ANTI_SPOOFING_ERR, + CORE_PACKET_SIZE_TOO_LARGE, + CORE_ILLEGAL_BD_FLAGS, + CORE_GSI_PACKET_VIOLATION, + MAX_MALICIOUS_VF_ERROR_ID, }; /* Mstorm non-triggering VF zone */ @@ -1394,6 +1418,16 @@ enum vf_zone_size_mode { MAX_VF_ZONE_SIZE_MODE }; +/* Xstorm non-triggering VF zone */ +struct xstorm_non_trigger_vf_zone { + struct regpair non_edpm_ack_pkts; +}; + +/* Tstorm VF zone */ +struct xstorm_vf_zone { + struct xstorm_non_trigger_vf_zone non_trigger; +}; + /* Attentions status block */ struct atten_status_block { __le32 atten_bits; @@ -2748,8 +2782,8 @@ enum chip_ids { }; struct fw_asserts_ram_section { - u16 section_ram_line_offset; - u16 section_ram_line_size; + __le16 section_ram_line_offset; + __le16 section_ram_line_size; u8 list_dword_offset; u8 
list_element_dword_size; u8 list_num_elements; @@ -2799,6 +2833,7 @@ enum init_modes { MODE_PORTS_PER_ENG_4, MODE_100G, MODE_RESERVED6, + MODE_RESERVED7, MAX_INIT_MODES }; @@ -2833,6 +2868,7 @@ enum bin_init_buffer_type { BIN_BUF_INIT_VAL, BIN_BUF_INIT_MODE_TREE, BIN_BUF_INIT_IRO, + BIN_BUF_INIT_OVERLAYS, MAX_BIN_INIT_BUFFER_TYPE }; @@ -2929,10 +2965,8 @@ struct init_if_phase_op { u32 op_data; #define INIT_IF_PHASE_OP_OP_MASK 0xF #define INIT_IF_PHASE_OP_OP_SHIFT 0 -#define INIT_IF_PHASE_OP_DMAE_ENABLE_MASK 0x1 -#define INIT_IF_PHASE_OP_DMAE_ENABLE_SHIFT 4 -#define INIT_IF_PHASE_OP_RESERVED1_MASK 0x7FF -#define INIT_IF_PHASE_OP_RESERVED1_SHIFT 5 +#define INIT_IF_PHASE_OP_RESERVED1_MASK 0xFFF +#define INIT_IF_PHASE_OP_RESERVED1_SHIFT 4 #define INIT_IF_PHASE_OP_CMD_OFFSET_MASK 0xFFFF #define INIT_IF_PHASE_OP_CMD_OFFSET_SHIFT 16 u32 phase_data; @@ -4226,7 +4260,7 @@ void qed_gft_disable(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u16 pf_id); /** * @brief qed_gft_config - Enable and configure HW for GFT * - * @param p_hwfn + * @param p_hwfn - HW device data * @param p_ptt - ptt window used for writing the registers. * @param pf_id - pf on which to enable GFT. * @param tcp - set profile tcp packets. @@ -5673,9 +5707,9 @@ struct e4_eth_conn_context { struct pstorm_eth_conn_st_ctx pstorm_st_context; struct xstorm_eth_conn_st_ctx xstorm_st_context; struct e4_xstorm_eth_conn_ag_ctx xstorm_ag_context; + struct e4_tstorm_eth_conn_ag_ctx tstorm_ag_context; struct ystorm_eth_conn_st_ctx ystorm_st_context; struct e4_ystorm_eth_conn_ag_ctx ystorm_ag_context; - struct e4_tstorm_eth_conn_ag_ctx tstorm_ag_context; struct e4_ustorm_eth_conn_ag_ctx ustorm_ag_context; struct ustorm_eth_conn_st_ctx ustorm_st_context; struct mstorm_eth_conn_st_ctx mstorm_st_context; @@ -5705,6 +5739,16 @@ enum eth_error_code { ETH_FILTERS_VNI_ADD_FAIL_FULL, ETH_FILTERS_VNI_ADD_FAIL_DUP, ETH_FILTERS_GFT_UPDATE_FAIL, + ETH_RX_QUEUE_FAIL_LOAD_VF_DATA, + ETH_FILTERS_GFS_ADD_FILTER_FAIL_MAX_HOPS, + ETH_FILTERS_GFS_ADD_FILTER_FAIL_NO_FREE_ENRTY, + ETH_FILTERS_GFS_ADD_FILTER_FAIL_ALREADY_EXISTS, + ETH_FILTERS_GFS_ADD_FILTER_FAIL_PCI_ERROR, + ETH_FILTERS_GFS_ADD_FINLER_FAIL_MAGIC_NUM_ERROR, + ETH_FILTERS_GFS_DEL_FILTER_FAIL_MAX_HOPS, + ETH_FILTERS_GFS_DEL_FILTER_FAIL_NO_MATCH_ENRTY, + ETH_FILTERS_GFS_DEL_FILTER_FAIL_PCI_ERROR, + ETH_FILTERS_GFS_DEL_FILTER_FAIL_MAGIC_NUM_ERROR, MAX_ETH_ERROR_CODE }; @@ -5728,6 +5772,11 @@ enum eth_event_opcode { ETH_EVENT_RX_CREATE_GFT_ACTION, ETH_EVENT_RX_GFT_UPDATE_FILTER, ETH_EVENT_TX_QUEUE_UPDATE, + ETH_EVENT_RGFS_ADD_FILTER, + ETH_EVENT_RGFS_DEL_FILTER, + ETH_EVENT_TGFS_ADD_FILTER, + ETH_EVENT_TGFS_DEL_FILTER, + ETH_EVENT_GFS_COUNTERS_REPORT_REQUEST, MAX_ETH_EVENT_OPCODE }; @@ -5820,18 +5869,31 @@ enum eth_ramrod_cmd_id { ETH_RAMROD_RX_CREATE_GFT_ACTION, ETH_RAMROD_GFT_UPDATE_FILTER, ETH_RAMROD_TX_QUEUE_UPDATE, + ETH_RAMROD_RGFS_FILTER_ADD, + ETH_RAMROD_RGFS_FILTER_DEL, + ETH_RAMROD_TGFS_FILTER_ADD, + ETH_RAMROD_TGFS_FILTER_DEL, + ETH_RAMROD_GFS_COUNTERS_REPORT_REQUEST, MAX_ETH_RAMROD_CMD_ID }; /* Return code from eth sp ramrods */ struct eth_return_code { u8 value; -#define ETH_RETURN_CODE_ERR_CODE_MASK 0x1F -#define ETH_RETURN_CODE_ERR_CODE_SHIFT 0 -#define ETH_RETURN_CODE_RESERVED_MASK 0x3 -#define ETH_RETURN_CODE_RESERVED_SHIFT 5 -#define ETH_RETURN_CODE_RX_TX_MASK 0x1 -#define ETH_RETURN_CODE_RX_TX_SHIFT 7 +#define ETH_RETURN_CODE_ERR_CODE_MASK 0x3F +#define ETH_RETURN_CODE_ERR_CODE_SHIFT 0 +#define ETH_RETURN_CODE_RESERVED_MASK 0x1 +#define ETH_RETURN_CODE_RESERVED_SHIFT 6 +#define 
ETH_RETURN_CODE_RX_TX_MASK 0x1 +#define ETH_RETURN_CODE_RX_TX_SHIFT 7 +}; + +/* tx destination enum */ +enum eth_tx_dst_mode_config_enum { + ETH_TX_DST_MODE_CONFIG_DISABLE, + ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_BD, + ETH_TX_DST_MODE_CONFIG_FORWARD_DATA_IN_VPORT, + MAX_ETH_TX_DST_MODE_CONFIG_ENUM }; /* What to do in case an error occurs */ @@ -5858,8 +5920,10 @@ struct eth_tx_err_vals { #define ETH_TX_ERR_VALS_MTU_VIOLATION_SHIFT 5 #define ETH_TX_ERR_VALS_ILLEGAL_CONTROL_FRAME_MASK 0x1 #define ETH_TX_ERR_VALS_ILLEGAL_CONTROL_FRAME_SHIFT 6 -#define ETH_TX_ERR_VALS_RESERVED_MASK 0x1FF -#define ETH_TX_ERR_VALS_RESERVED_SHIFT 7 +#define ETH_TX_ERR_VALS_ILLEGAL_BD_FLAGS_MASK 0x1 +#define ETH_TX_ERR_VALS_ILLEGAL_BD_FLAGS_SHIFT 7 +#define ETH_TX_ERR_VALS_RESERVED_MASK 0xFF +#define ETH_TX_ERR_VALS_RESERVED_SHIFT 8 }; /* vport rss configuration data */ @@ -5889,7 +5953,6 @@ struct eth_vport_rss_config { u8 tbl_size; __le32 reserved2[2]; __le16 indirection_table[ETH_RSS_IND_TABLE_ENTRIES_NUM]; - __le32 rss_key[ETH_RSS_KEY_SIZE_REGS]; __le32 reserved3[2]; }; @@ -6091,7 +6154,7 @@ struct rx_update_gft_filter_data { u8 inner_vlan_removal_en; }; -/* Ramrod data for rx queue start ramrod */ +/* Ramrod data for tx queue start ramrod */ struct tx_queue_start_ramrod_data { __le16 sb_id; u8 sb_index; @@ -6104,16 +6167,14 @@ struct tx_queue_start_ramrod_data { #define TX_QUEUE_START_RAMROD_DATA_DISABLE_OPPORTUNISTIC_SHIFT 0 #define TX_QUEUE_START_RAMROD_DATA_TEST_MODE_PKT_DUP_MASK 0x1 #define TX_QUEUE_START_RAMROD_DATA_TEST_MODE_PKT_DUP_SHIFT 1 -#define TX_QUEUE_START_RAMROD_DATA_TEST_MODE_TX_DEST_MASK 0x1 -#define TX_QUEUE_START_RAMROD_DATA_TEST_MODE_TX_DEST_SHIFT 2 #define TX_QUEUE_START_RAMROD_DATA_PMD_MODE_MASK 0x1 -#define TX_QUEUE_START_RAMROD_DATA_PMD_MODE_SHIFT 3 +#define TX_QUEUE_START_RAMROD_DATA_PMD_MODE_SHIFT 2 #define TX_QUEUE_START_RAMROD_DATA_NOTIFY_EN_MASK 0x1 -#define TX_QUEUE_START_RAMROD_DATA_NOTIFY_EN_SHIFT 4 +#define TX_QUEUE_START_RAMROD_DATA_NOTIFY_EN_SHIFT 3 #define TX_QUEUE_START_RAMROD_DATA_PIN_CONTEXT_MASK 0x1 -#define TX_QUEUE_START_RAMROD_DATA_PIN_CONTEXT_SHIFT 5 -#define TX_QUEUE_START_RAMROD_DATA_RESERVED1_MASK 0x3 -#define TX_QUEUE_START_RAMROD_DATA_RESERVED1_SHIFT 6 +#define TX_QUEUE_START_RAMROD_DATA_PIN_CONTEXT_SHIFT 4 +#define TX_QUEUE_START_RAMROD_DATA_RESERVED1_MASK 0x7 +#define TX_QUEUE_START_RAMROD_DATA_RESERVED1_SHIFT 5 u8 pxp_st_hint; u8 pxp_tph_valid_bd; u8 pxp_tph_valid_pkt; @@ -6169,18 +6230,22 @@ struct vport_start_ramrod_data { __le16 default_vlan; u8 tx_switching_en; u8 anti_spoofing_en; - u8 default_vlan_en; - u8 handle_ptp_pkts; u8 silent_vlan_removal_en; u8 untagged; struct eth_tx_err_vals tx_err_behav; - u8 zero_placement_offset; u8 ctl_frame_mac_check_en; u8 ctl_frame_ethtype_check_en; + u8 reserved0; + u8 reserved1; + u8 tx_dst_port_mode_config; + u8 dst_vport_id; + u8 tx_dst_port_mode; + u8 dst_vport_id_valid; u8 wipe_inner_vlan_pri_en; + u8 reserved2[2]; struct eth_in_to_in_pri_map_cfg in_to_in_vlan_pri_map_cfg; }; @@ -6740,19 +6805,6 @@ struct e4_xstorm_eth_hw_conn_ag_ctx { __le16 conn_dpi; }; -/* GFT CAM line struct */ -struct gft_cam_line { - __le32 camline; -#define GFT_CAM_LINE_VALID_MASK 0x1 -#define GFT_CAM_LINE_VALID_SHIFT 0 -#define GFT_CAM_LINE_DATA_MASK 0x3FFF -#define GFT_CAM_LINE_DATA_SHIFT 1 -#define GFT_CAM_LINE_MASK_BITS_MASK 0x3FFF -#define GFT_CAM_LINE_MASK_BITS_SHIFT 15 -#define GFT_CAM_LINE_RESERVED1_MASK 0x7 -#define GFT_CAM_LINE_RESERVED1_SHIFT 29 -}; - /* GFT CAM line struct with fields breakout */ struct gft_cam_line_mapped { 
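/* (Editorial note, not part of the patch: with the raw gft_cam_line struct
 * removed above and gft_cam_line_union removed in the next hunk, this
 * mapped breakout is the only CAM-line representation the driver keeps;
 * its fields follow the same MASK/SHIFT convention sketched earlier.)
 */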
__le32 camline; @@ -6782,10 +6834,6 @@ struct gft_cam_line_mapped { #define GFT_CAM_LINE_MAPPED_RESERVED1_SHIFT 29 }; -union gft_cam_line_union { - struct gft_cam_line cam_line; - struct gft_cam_line_mapped cam_line_mapped; -}; /* Used in gft_profile_key: Indication for ip version */ enum gft_profile_ip_version { @@ -7064,6 +7112,11 @@ struct mstorm_rdma_task_st_ctx { struct regpair temp[4]; }; +/* The roce task context of Ustorm */ +struct ustorm_rdma_task_st_ctx { + struct regpair temp[6]; +}; + struct e4_ustorm_rdma_task_ag_ctx { u8 reserved; u8 state; @@ -7073,8 +7126,8 @@ struct e4_ustorm_rdma_task_ag_ctx { #define E4_USTORM_RDMA_TASK_AG_CTX_CONNECTION_TYPE_SHIFT 0 #define E4_USTORM_RDMA_TASK_AG_CTX_EXIST_IN_QM0_MASK 0x1 #define E4_USTORM_RDMA_TASK_AG_CTX_EXIST_IN_QM0_SHIFT 4 -#define E4_USTORM_RDMA_TASK_AG_CTX_DIF_RUNT_VALID_MASK 0x1 -#define E4_USTORM_RDMA_TASK_AG_CTX_DIF_RUNT_VALID_SHIFT 5 +#define E4_USTORM_RDMA_TASK_AG_CTX_BIT1_MASK 0x1 +#define E4_USTORM_RDMA_TASK_AG_CTX_BIT1_SHIFT 5 #define E4_USTORM_RDMA_TASK_AG_CTX_DIF_WRITE_RESULT_CF_MASK 0x3 #define E4_USTORM_RDMA_TASK_AG_CTX_DIF_WRITE_RESULT_CF_SHIFT 6 u8 flags1; @@ -7104,29 +7157,29 @@ struct e4_ustorm_rdma_task_ag_ctx { #define E4_USTORM_RDMA_TASK_AG_CTX_RULE2EN_MASK 0x1 #define E4_USTORM_RDMA_TASK_AG_CTX_RULE2EN_SHIFT 7 u8 flags3; -#define E4_USTORM_RDMA_TASK_AG_CTX_RULE3EN_MASK 0x1 -#define E4_USTORM_RDMA_TASK_AG_CTX_RULE3EN_SHIFT 0 -#define E4_USTORM_RDMA_TASK_AG_CTX_RULE4EN_MASK 0x1 -#define E4_USTORM_RDMA_TASK_AG_CTX_RULE4EN_SHIFT 1 -#define E4_USTORM_RDMA_TASK_AG_CTX_RULE5EN_MASK 0x1 -#define E4_USTORM_RDMA_TASK_AG_CTX_RULE5EN_SHIFT 2 -#define E4_USTORM_RDMA_TASK_AG_CTX_RULE6EN_MASK 0x1 -#define E4_USTORM_RDMA_TASK_AG_CTX_RULE6EN_SHIFT 3 -#define E4_USTORM_RDMA_TASK_AG_CTX_DIF_ERROR_TYPE_MASK 0xF -#define E4_USTORM_RDMA_TASK_AG_CTX_DIF_ERROR_TYPE_SHIFT 4 +#define E4_USTORM_RDMA_TASK_AG_CTX_DIF_RXMIT_PROD_CONS_EN_MASK 0x1 +#define E4_USTORM_RDMA_TASK_AG_CTX_DIF_RXMIT_PROD_CONS_EN_SHIFT 0 +#define E4_USTORM_RDMA_TASK_AG_CTX_RULE4EN_MASK 0x1 +#define E4_USTORM_RDMA_TASK_AG_CTX_RULE4EN_SHIFT 1 +#define E4_USTORM_RDMA_TASK_AG_CTX_DIF_WRITE_PROD_CONS_EN_MASK 0x1 +#define E4_USTORM_RDMA_TASK_AG_CTX_DIF_WRITE_PROD_CONS_EN_SHIFT 2 +#define E4_USTORM_RDMA_TASK_AG_CTX_RULE6EN_MASK 0x1 +#define E4_USTORM_RDMA_TASK_AG_CTX_RULE6EN_SHIFT 3 +#define E4_USTORM_RDMA_TASK_AG_CTX_DIF_ERROR_TYPE_MASK 0xF +#define E4_USTORM_RDMA_TASK_AG_CTX_DIF_ERROR_TYPE_SHIFT 4 __le32 dif_err_intervals; __le32 dif_error_1st_interval; - __le32 sq_cons; - __le32 dif_runt_value; + __le32 dif_rxmit_cons; + __le32 dif_rxmit_prod; __le32 sge_index; - __le32 reg5; + __le32 sq_cons; u8 byte2; u8 byte3; - __le16 word1; - __le16 word2; + __le16 dif_write_cons; + __le16 dif_write_prod; __le16 word3; - __le32 reg6; - __le32 reg7; + __le32 dif_error_buffer_address_lo; + __le32 dif_error_buffer_address_hi; }; /* RDMA task context */ @@ -7137,6 +7190,8 @@ struct e4_rdma_task_context { struct e4_mstorm_rdma_task_ag_ctx mstorm_ag_context; struct mstorm_rdma_task_st_ctx mstorm_st_context; struct rdif_task_context rdif_context; + struct ustorm_rdma_task_st_ctx ustorm_st_context; + struct regpair ustorm_st_padding[2]; struct e4_ustorm_rdma_task_ag_ctx ustorm_ag_context; }; @@ -7172,7 +7227,12 @@ struct rdma_create_cq_ramrod_data { u8 pbl_log_page_size; u8 toggle_bit; __le16 int_timeout; - __le16 reserved1; + u8 vf_id; + u8 flags; +#define RDMA_CREATE_CQ_RAMROD_DATA_VF_ID_VALID_MASK 0x1 +#define RDMA_CREATE_CQ_RAMROD_DATA_VF_ID_VALID_SHIFT 0 +#define 
RDMA_CREATE_CQ_RAMROD_DATA_RESERVED1_MASK 0x7F +#define RDMA_CREATE_CQ_RAMROD_DATA_RESERVED1_SHIFT 1 }; /* rdma deregister tid ramrod data */ @@ -7216,6 +7276,7 @@ enum rdma_fw_return_code { RDMA_RETURN_DEREGISTER_MR_BAD_STATE_ERR, RDMA_RETURN_RESIZE_CQ_ERR, RDMA_RETURN_NIG_DRAIN_REQ, + RDMA_RETURN_GENERAL_ERR, MAX_RDMA_FW_RETURN_CODE }; @@ -7229,7 +7290,10 @@ struct rdma_init_func_hdr { u8 relaxed_ordering; __le16 first_reg_srq_id; __le32 reg_srq_base_addr; - __le32 reserved; + u8 searcher_mode; + u8 pvrdma_mode; + u8 max_num_ns_log; + u8 reserved; }; /* rdma function init ramrod data */ @@ -7319,16 +7383,20 @@ struct rdma_resize_cq_ramrod_data { #define RDMA_RESIZE_CQ_RAMROD_DATA_TOGGLE_BIT_SHIFT 0 #define RDMA_RESIZE_CQ_RAMROD_DATA_IS_TWO_LEVEL_PBL_MASK 0x1 #define RDMA_RESIZE_CQ_RAMROD_DATA_IS_TWO_LEVEL_PBL_SHIFT 1 -#define RDMA_RESIZE_CQ_RAMROD_DATA_RESERVED_MASK 0x3F -#define RDMA_RESIZE_CQ_RAMROD_DATA_RESERVED_SHIFT 2 +#define RDMA_RESIZE_CQ_RAMROD_DATA_VF_ID_VALID_MASK 0x1 +#define RDMA_RESIZE_CQ_RAMROD_DATA_VF_ID_VALID_SHIFT 2 +#define RDMA_RESIZE_CQ_RAMROD_DATA_RESERVED_MASK 0x1F +#define RDMA_RESIZE_CQ_RAMROD_DATA_RESERVED_SHIFT 3 u8 pbl_log_page_size; __le16 pbl_num_pages; __le32 max_cqes; struct regpair pbl_addr; struct regpair output_params_addr; + u8 vf_id; + u8 reserved1[7]; }; -/* The rdma storm context of Mstorm */ +/* The rdma SRQ context */ struct rdma_srq_context { struct regpair temp[8]; }; @@ -7375,6 +7443,7 @@ enum rdma_tid_type { MAX_RDMA_TID_TYPE }; +/* The rdma XRC SRQ context */ struct rdma_xrc_srq_context { struct regpair temp[9]; }; @@ -7556,12 +7625,12 @@ struct e4_xstorm_roce_conn_ag_ctx { #define E4_XSTORM_ROCE_CONN_AG_CTX_BIT10_SHIFT 2 #define E4_XSTORM_ROCE_CONN_AG_CTX_BIT11_MASK 0x1 #define E4_XSTORM_ROCE_CONN_AG_CTX_BIT11_SHIFT 3 -#define E4_XSTORM_ROCE_CONN_AG_CTX_BIT12_MASK 0x1 -#define E4_XSTORM_ROCE_CONN_AG_CTX_BIT12_SHIFT 4 +#define E4_XSTORM_ROCE_CONN_AG_CTX_MSDM_FLUSH_MASK 0x1 +#define E4_XSTORM_ROCE_CONN_AG_CTX_MSDM_FLUSH_SHIFT 4 #define E4_XSTORM_ROCE_CONN_AG_CTX_MSEM_FLUSH_MASK 0x1 #define E4_XSTORM_ROCE_CONN_AG_CTX_MSEM_FLUSH_SHIFT 5 -#define E4_XSTORM_ROCE_CONN_AG_CTX_MSDM_FLUSH_MASK 0x1 -#define E4_XSTORM_ROCE_CONN_AG_CTX_MSDM_FLUSH_SHIFT 6 +#define E4_XSTORM_ROCE_CONN_AG_CTX_BIT14_MASK 0x1 +#define E4_XSTORM_ROCE_CONN_AG_CTX_BIT14_SHIFT 6 #define E4_XSTORM_ROCE_CONN_AG_CTX_YSTORM_FLUSH_MASK 0x1 #define E4_XSTORM_ROCE_CONN_AG_CTX_YSTORM_FLUSH_SHIFT 7 u8 flags2; @@ -7885,9 +7954,9 @@ struct mstorm_roce_conn_st_ctx { struct regpair temp[6]; }; -/* The roce storm context of Ystorm */ +/* The roce storm context of Ustorm */ struct ustorm_roce_conn_st_ctx { - struct regpair temp[12]; + struct regpair temp[14]; }; /* roce connection context */ @@ -7905,6 +7974,7 @@ struct e4_roce_conn_context { struct mstorm_roce_conn_st_ctx mstorm_st_context; struct regpair mstorm_st_padding[2]; struct ustorm_roce_conn_st_ctx ustorm_st_context; + struct regpair ustorm_st_padding[2]; }; /* roce cqes statistics */ @@ -7959,12 +8029,17 @@ struct roce_create_qp_req_ramrod_data { struct regpair qp_handle_for_cqe; struct regpair qp_handle_for_async; u8 stats_counter_id; - u8 reserved3[6]; + u8 vf_id; + u8 vport_id; u8 flags2; #define ROCE_CREATE_QP_REQ_RAMROD_DATA_EDPM_MODE_MASK 0x1 #define ROCE_CREATE_QP_REQ_RAMROD_DATA_EDPM_MODE_SHIFT 0 -#define ROCE_CREATE_QP_REQ_RAMROD_DATA_RESERVED_MASK 0x7F -#define ROCE_CREATE_QP_REQ_RAMROD_DATA_RESERVED_SHIFT 1 +#define ROCE_CREATE_QP_REQ_RAMROD_DATA_VF_ID_VALID_MASK 0x1 +#define 
ROCE_CREATE_QP_REQ_RAMROD_DATA_VF_ID_VALID_SHIFT 1 +#define ROCE_CREATE_QP_REQ_RAMROD_DATA_RESERVED_MASK 0x3F +#define ROCE_CREATE_QP_REQ_RAMROD_DATA_RESERVED_SHIFT 2 + u8 name_space; + u8 reserved3[3]; __le16 regular_latency_phy_queue; __le16 dpi; }; @@ -7992,8 +8067,10 @@ struct roce_create_qp_resp_ramrod_data { #define ROCE_CREATE_QP_RESP_RAMROD_DATA_MIN_RNR_NAK_TIMER_SHIFT 11 #define ROCE_CREATE_QP_RESP_RAMROD_DATA_XRC_FLAG_MASK 0x1 #define ROCE_CREATE_QP_RESP_RAMROD_DATA_XRC_FLAG_SHIFT 16 -#define ROCE_CREATE_QP_RESP_RAMROD_DATA_RESERVED_MASK 0x7FFF -#define ROCE_CREATE_QP_RESP_RAMROD_DATA_RESERVED_SHIFT 17 +#define ROCE_CREATE_QP_RESP_RAMROD_DATA_VF_ID_VALID_MASK 0x1 +#define ROCE_CREATE_QP_RESP_RAMROD_DATA_VF_ID_VALID_SHIFT 17 +#define ROCE_CREATE_QP_RESP_RAMROD_DATA_RESERVED_MASK 0x3FFF +#define ROCE_CREATE_QP_RESP_RAMROD_DATA_RESERVED_SHIFT 18 __le16 xrc_domain; u8 max_ird; u8 traffic_class; @@ -8020,10 +8097,14 @@ struct roce_create_qp_resp_ramrod_data { struct regpair qp_handle_for_cqe; struct regpair qp_handle_for_async; __le16 low_latency_phy_queue; - u8 reserved2[2]; + u8 vf_id; + u8 vport_id; __le32 cq_cid; __le16 regular_latency_phy_queue; __le16 dpi; + __le32 src_qp_id; + u8 name_space; + u8 reserved3[3]; }; /* roce DCQCN received statistics */ @@ -8057,6 +8138,8 @@ struct roce_destroy_qp_resp_output_params { /* RoCE destroy qp responder ramrod data */ struct roce_destroy_qp_resp_ramrod_data { struct regpair output_params_addr; + __le32 src_qp_id; + __le32 reserved; }; /* roce error statistics */ @@ -8140,8 +8223,8 @@ struct roce_modify_qp_req_ramrod_data { #define ROCE_MODIFY_QP_REQ_RAMROD_DATA_PRI_FLG_SHIFT 9 #define ROCE_MODIFY_QP_REQ_RAMROD_DATA_PRI_MASK 0x7 #define ROCE_MODIFY_QP_REQ_RAMROD_DATA_PRI_SHIFT 10 -#define ROCE_MODIFY_QP_REQ_RAMROD_DATA_PHYSICAL_QUEUES_FLG_MASK 0x1 -#define ROCE_MODIFY_QP_REQ_RAMROD_DATA_PHYSICAL_QUEUES_FLG_SHIFT 13 +#define ROCE_MODIFY_QP_REQ_RAMROD_DATA_PHYSICAL_QUEUE_FLG_MASK 0x1 +#define ROCE_MODIFY_QP_REQ_RAMROD_DATA_PHYSICAL_QUEUE_FLG_SHIFT 13 #define ROCE_MODIFY_QP_REQ_RAMROD_DATA_RESERVED1_MASK 0x3 #define ROCE_MODIFY_QP_REQ_RAMROD_DATA_RESERVED1_SHIFT 14 u8 fields; @@ -8187,8 +8270,8 @@ struct roce_modify_qp_resp_ramrod_data { #define ROCE_MODIFY_QP_RESP_RAMROD_DATA_MIN_RNR_NAK_TIMER_FLG_SHIFT 8 #define ROCE_MODIFY_QP_RESP_RAMROD_DATA_RDMA_OPS_EN_FLG_MASK 0x1 #define ROCE_MODIFY_QP_RESP_RAMROD_DATA_RDMA_OPS_EN_FLG_SHIFT 9 -#define ROCE_MODIFY_QP_RESP_RAMROD_DATA_PHYSICAL_QUEUES_FLG_MASK 0x1 -#define ROCE_MODIFY_QP_RESP_RAMROD_DATA_PHYSICAL_QUEUES_FLG_SHIFT 10 +#define ROCE_MODIFY_QP_RESP_RAMROD_DATA_PHYSICAL_QUEUE_FLG_MASK 0x1 +#define ROCE_MODIFY_QP_RESP_RAMROD_DATA_PHYSICAL_QUEUE_FLG_SHIFT 10 #define ROCE_MODIFY_QP_RESP_RAMROD_DATA_RESERVED1_MASK 0x1F #define ROCE_MODIFY_QP_RESP_RAMROD_DATA_RESERVED1_SHIFT 11 u8 fields; @@ -8229,7 +8312,7 @@ struct roce_query_qp_req_ramrod_data { /* RoCE query qp responder output params */ struct roce_query_qp_resp_output_params { __le32 psn; - __le32 err_flag; + __le32 flags; #define ROCE_QUERY_QP_RESP_OUTPUT_PARAMS_ERROR_FLG_MASK 0x1 #define ROCE_QUERY_QP_RESP_OUTPUT_PARAMS_ERROR_FLG_SHIFT 0 #define ROCE_QUERY_QP_RESP_OUTPUT_PARAMS_RESERVED0_MASK 0x7FFFFFFF @@ -8296,12 +8379,12 @@ struct e4_xstorm_roce_conn_ag_ctx_dq_ext_ld_part { #define E4XSTORMROCECONNAGCTXDQEXTLDPART_BIT10_SHIFT 2 #define E4XSTORMROCECONNAGCTXDQEXTLDPART_BIT11_MASK 0x1 #define E4XSTORMROCECONNAGCTXDQEXTLDPART_BIT11_SHIFT 3 -#define E4XSTORMROCECONNAGCTXDQEXTLDPART_BIT12_MASK 0x1 -#define 
E4XSTORMROCECONNAGCTXDQEXTLDPART_BIT12_SHIFT 4 -#define E4XSTORMROCECONNAGCTXDQEXTLDPART_MSEM_FLUSH_MASK 0x1 -#define E4XSTORMROCECONNAGCTXDQEXTLDPART_MSEM_FLUSH_SHIFT 5 -#define E4XSTORMROCECONNAGCTXDQEXTLDPART_MSDM_FLUSH_MASK 0x1 -#define E4XSTORMROCECONNAGCTXDQEXTLDPART_MSDM_FLUSH_SHIFT 6 +#define E4XSTORMROCECONNAGCTXDQEXTLDPART_MSDM_FLUSH_MASK 0x1 +#define E4XSTORMROCECONNAGCTXDQEXTLDPART_MSDM_FLUSH_SHIFT 4 +#define E4XSTORMROCECONNAGCTXDQEXTLDPART_MSEM_FLUSH_MASK 0x1 +#define E4XSTORMROCECONNAGCTXDQEXTLDPART_MSEM_FLUSH_SHIFT 5 +#define E4XSTORMROCECONNAGCTXDQEXTLDPART_BIT14_MASK 0x1 +#define E4XSTORMROCECONNAGCTXDQEXTLDPART_BIT14_SHIFT 6 #define E4XSTORMROCECONNAGCTXDQEXTLDPART_YSTORM_FLUSH_MASK 0x1 #define E4XSTORMROCECONNAGCTXDQEXTLDPART_YSTORM_FLUSH_SHIFT 7 u8 flags2; @@ -8674,8 +8757,8 @@ struct e4_tstorm_roce_req_conn_ag_ctx { u8 flags5; #define E4_TSTORM_ROCE_REQ_CONN_AG_CTX_RULE1EN_MASK 0x1 #define E4_TSTORM_ROCE_REQ_CONN_AG_CTX_RULE1EN_SHIFT 0 -#define E4_TSTORM_ROCE_REQ_CONN_AG_CTX_RULE2EN_MASK 0x1 -#define E4_TSTORM_ROCE_REQ_CONN_AG_CTX_RULE2EN_SHIFT 1 +#define E4_TSTORM_ROCE_REQ_CONN_AG_CTX_DIF_CNT_EN_MASK 0x1 +#define E4_TSTORM_ROCE_REQ_CONN_AG_CTX_DIF_CNT_EN_SHIFT 1 #define E4_TSTORM_ROCE_REQ_CONN_AG_CTX_RULE3EN_MASK 0x1 #define E4_TSTORM_ROCE_REQ_CONN_AG_CTX_RULE3EN_SHIFT 2 #define E4_TSTORM_ROCE_REQ_CONN_AG_CTX_RULE4EN_MASK 0x1 @@ -8688,13 +8771,13 @@ struct e4_tstorm_roce_req_conn_ag_ctx { #define E4_TSTORM_ROCE_REQ_CONN_AG_CTX_RULE7EN_SHIFT 6 #define E4_TSTORM_ROCE_REQ_CONN_AG_CTX_RULE8EN_MASK 0x1 #define E4_TSTORM_ROCE_REQ_CONN_AG_CTX_RULE8EN_SHIFT 7 - __le32 reg0; + __le32 dif_rxmit_cnt; __le32 snd_nxt_psn; __le32 snd_max_psn; __le32 orq_prod; __le32 reg4; - __le32 reg5; - __le32 reg6; + __le32 dif_acked_cnt; + __le32 dif_cnt; __le32 reg7; __le32 reg8; u8 tx_cqe_error_type; @@ -8705,7 +8788,7 @@ struct e4_tstorm_roce_req_conn_ag_ctx { __le16 snd_sq_cons; __le16 conn_dpi; __le16 force_comp_cons; - __le32 reg9; + __le32 dif_rxmit_acked_cnt; __le32 reg10; }; @@ -8980,10 +9063,10 @@ struct e4_xstorm_roce_req_conn_ag_ctx { #define E4_XSTORM_ROCE_REQ_CONN_AG_CTX_BIT10_SHIFT 2 #define E4_XSTORM_ROCE_REQ_CONN_AG_CTX_BIT11_MASK 0x1 #define E4_XSTORM_ROCE_REQ_CONN_AG_CTX_BIT11_SHIFT 3 -#define E4_XSTORM_ROCE_REQ_CONN_AG_CTX_BIT12_MASK 0x1 -#define E4_XSTORM_ROCE_REQ_CONN_AG_CTX_BIT12_SHIFT 4 -#define E4_XSTORM_ROCE_REQ_CONN_AG_CTX_BIT13_MASK 0x1 -#define E4_XSTORM_ROCE_REQ_CONN_AG_CTX_BIT13_SHIFT 5 +#define E4_XSTORM_ROCE_REQ_CONN_AG_CTX_MSDM_FLUSH_MASK 0x1 +#define E4_XSTORM_ROCE_REQ_CONN_AG_CTX_MSDM_FLUSH_SHIFT 4 +#define E4_XSTORM_ROCE_REQ_CONN_AG_CTX_MSEM_FLUSH_MASK 0x1 +#define E4_XSTORM_ROCE_REQ_CONN_AG_CTX_MSEM_FLUSH_SHIFT 5 #define E4_XSTORM_ROCE_REQ_CONN_AG_CTX_ERROR_STATE_MASK 0x1 #define E4_XSTORM_ROCE_REQ_CONN_AG_CTX_ERROR_STATE_SHIFT 6 #define E4_XSTORM_ROCE_REQ_CONN_AG_CTX_YSTORM_FLUSH_MASK 0x1 @@ -9209,10 +9292,10 @@ struct e4_xstorm_roce_resp_conn_ag_ctx { #define E4_XSTORM_ROCE_RESP_CONN_AG_CTX_BIT10_SHIFT 2 #define E4_XSTORM_ROCE_RESP_CONN_AG_CTX_BIT11_MASK 0x1 #define E4_XSTORM_ROCE_RESP_CONN_AG_CTX_BIT11_SHIFT 3 -#define E4_XSTORM_ROCE_RESP_CONN_AG_CTX_BIT12_MASK 0x1 -#define E4_XSTORM_ROCE_RESP_CONN_AG_CTX_BIT12_SHIFT 4 -#define E4_XSTORM_ROCE_RESP_CONN_AG_CTX_BIT13_MASK 0x1 -#define E4_XSTORM_ROCE_RESP_CONN_AG_CTX_BIT13_SHIFT 5 +#define E4_XSTORM_ROCE_RESP_CONN_AG_CTX_MSDM_FLUSH_MASK 0x1 +#define E4_XSTORM_ROCE_RESP_CONN_AG_CTX_MSDM_FLUSH_SHIFT 4 +#define E4_XSTORM_ROCE_RESP_CONN_AG_CTX_MSEM_FLUSH_MASK 0x1 +#define 
E4_XSTORM_ROCE_RESP_CONN_AG_CTX_MSEM_FLUSH_SHIFT 5 #define E4_XSTORM_ROCE_RESP_CONN_AG_CTX_ERROR_STATE_MASK 0x1 #define E4_XSTORM_ROCE_RESP_CONN_AG_CTX_ERROR_STATE_SHIFT 6 #define E4_XSTORM_ROCE_RESP_CONN_AG_CTX_YSTORM_FLUSH_MASK 0x1 @@ -9939,7 +10022,7 @@ struct mstorm_iwarp_conn_st_ctx { /* The iwarp storm context of Ustorm */ struct ustorm_iwarp_conn_st_ctx { - __le32 reserved[24]; + struct regpair reserved[14]; }; /* iwarp connection context */ @@ -9957,6 +10040,7 @@ struct e4_iwarp_conn_context { struct regpair tstorm_st_padding[2]; struct mstorm_iwarp_conn_st_ctx mstorm_st_context; struct ustorm_iwarp_conn_st_ctx ustorm_st_context; + struct regpair ustorm_st_padding[2]; }; /* iWARP create QP params passed by driver to FW in CreateQP Request Ramrod */ @@ -10009,7 +10093,8 @@ enum iwarp_eqe_async_opcode { struct iwarp_eqe_data_mpa_async_completion { __le16 ulp_data_len; - u8 reserved[6]; + u8 rtr_type_sent; + u8 reserved[5]; }; struct iwarp_eqe_data_tcp_async_completion { @@ -10034,7 +10119,7 @@ enum iwarp_eqe_sync_opcode { /* iWARP EQE completion status */ enum iwarp_fw_return_code { - IWARP_CONN_ERROR_TCP_CONNECT_INVALID_PACKET = 5, + IWARP_CONN_ERROR_TCP_CONNECT_INVALID_PACKET = 6, IWARP_CONN_ERROR_TCP_CONNECTION_RST, IWARP_CONN_ERROR_TCP_CONNECT_TIMEOUT, IWARP_CONN_ERROR_MPA_ERROR_REJECT, @@ -10203,8 +10288,8 @@ struct iwarp_rxmit_stats_drv { * offload ramrod. */ struct iwarp_tcp_offload_ramrod_data { - struct iwarp_offload_params iwarp; struct tcp_offload_params_opt2 tcp; + struct iwarp_offload_params iwarp; }; /* iWARP MPA negotiation types */ diff --git a/drivers/net/ethernet/qlogic/qed/qed_init_ops.c b/drivers/net/ethernet/qlogic/qed/qed_init_ops.c index 31222b3e3936..5ca117a8734a 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_init_ops.c +++ b/drivers/net/ethernet/qlogic/qed/qed_init_ops.c @@ -490,10 +490,10 @@ static u32 qed_init_cmd_phase(struct qed_hwfn *p_hwfn, int qed_init_run(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, int phase, int phase_id, int modes) { + bool b_dmae = (phase != PHASE_ENGINE); struct qed_dev *cdev = p_hwfn->cdev; u32 cmd_num, num_init_ops; union init_op *init_ops; - bool b_dmae = false; int rc = 0; num_init_ops = cdev->fw_data->init_ops_size; @@ -522,7 +522,6 @@ int qed_init_run(struct qed_hwfn *p_hwfn, case INIT_OP_IF_PHASE: cmd_num += qed_init_cmd_phase(p_hwfn, &cmd->if_phase, phase, phase_id); - b_dmae = GET_FIELD(data, INIT_IF_PHASE_OP_DMAE_ENABLE); break; case INIT_OP_DELAY: /* qed_init_run is always invoked from @@ -533,6 +532,9 @@ int qed_init_run(struct qed_hwfn *p_hwfn, case INIT_OP_CALLBACK: rc = qed_init_cmd_cb(p_hwfn, p_ptt, &cmd->callback); + if (phase == PHASE_ENGINE && + cmd->callback.callback_id == DMAE_READY_CB) + b_dmae = true; break; } diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c index aeac359941de..af981ff5595e 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c +++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c @@ -48,6 +48,8 @@ #include "qed_reg_addr.h" #include "qed_sriov.h" +#define GRCBASE_MCP 0xe00000 + #define QED_MCP_RESP_ITER_US 10 #define QED_DRV_MB_MAX_RETRIES (500 * 1000) /* Account for 5 sec */ diff --git a/drivers/net/ethernet/qlogic/qed/qed_roce.c b/drivers/net/ethernet/qlogic/qed/qed_roce.c index e49fada85410..37e70562a964 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_roce.c +++ b/drivers/net/ethernet/qlogic/qed/qed_roce.c @@ -900,7 +900,7 @@ int qed_roce_query_qp(struct qed_hwfn *p_hwfn, goto err_resp; out_params->rq_psn = 
le32_to_cpu(p_resp_ramrod_res->psn); - rq_err_state = GET_FIELD(le32_to_cpu(p_resp_ramrod_res->err_flag), + rq_err_state = GET_FIELD(le32_to_cpu(p_resp_ramrod_res->flags), ROCE_QUERY_QP_RESP_OUTPUT_PARAMS_ERROR_FLG); dma_free_coherent(&p_hwfn->cdev->pdev->dev, sizeof(*p_resp_ramrod_res), diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c index 004c0bfec41d..c6c20776b474 100644 --- a/drivers/net/ethernet/qlogic/qede/qede_fp.c +++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c @@ -848,13 +848,13 @@ static void qede_tpa_start(struct qede_dev *edev, qede_set_gro_params(edev, tpa_info->skb, cqe); cons_buf: /* We still need to handle bd_len_list to consume buffers */ - if (likely(cqe->ext_bd_len_list[0])) + if (likely(cqe->bw_ext_bd_len_list[0])) qede_fill_frag_skb(edev, rxq, cqe->tpa_agg_index, - le16_to_cpu(cqe->ext_bd_len_list[0])); + le16_to_cpu(cqe->bw_ext_bd_len_list[0])); - if (unlikely(cqe->ext_bd_len_list[1])) { + if (unlikely(cqe->bw_ext_bd_len_list[1])) { DP_ERR(edev, - "Unlikely - got a TPA aggregation with more than one ext_bd_len_list entry in the TPA start\n"); + "Unlikely - got a TPA aggregation with more than one bw_ext_bd_len_list entry in the TPA start\n"); tpa_info->state = QEDE_AGG_STATE_ERROR; } } diff --git a/include/linux/qed/common_hsi.h b/include/linux/qed/common_hsi.h index 718ce72e5965..2c4737e6694a 100644 --- a/include/linux/qed/common_hsi.h +++ b/include/linux/qed/common_hsi.h @@ -76,7 +76,6 @@ #define FW_ASSERT_GENERAL_ATTN_IDX 32 -#define MAX_PINNED_CCFC 32 /* Queue Zone sizes in bytes */ #define TSTORM_QZONE_SIZE 8 @@ -139,10 +138,10 @@ #define MAX_NUM_VFS (MAX_NUM_VFS_K2) #define MAX_NUM_FUNCTIONS_BB (MAX_NUM_PFS_BB + MAX_NUM_VFS_BB) -#define MAX_NUM_FUNCTIONS (MAX_NUM_PFS + MAX_NUM_VFS) #define MAX_FUNCTION_NUMBER_BB (MAX_NUM_PFS + MAX_NUM_VFS_BB) -#define MAX_FUNCTION_NUMBER (MAX_NUM_PFS + MAX_NUM_VFS) +#define MAX_FUNCTION_NUMBER_K2 (MAX_NUM_PFS + MAX_NUM_VFS_K2) +#define MAX_NUM_FUNCTIONS (MAX_FUNCTION_NUMBER_K2) #define MAX_NUM_VPORTS_K2 (208) #define MAX_NUM_VPORTS_BB (160) @@ -229,6 +228,7 @@ #define DQ_XCM_TOE_TX_BD_PROD_CMD DQ_XCM_AGG_VAL_SEL_WORD4 #define DQ_XCM_TOE_MORE_TO_SEND_SEQ_CMD DQ_XCM_AGG_VAL_SEL_REG3 #define DQ_XCM_TOE_LOCAL_ADV_WND_SEQ_CMD DQ_XCM_AGG_VAL_SEL_REG4 +#define DQ_XCM_ROCE_ACK_EDPM_DORQ_SEQ_CMD DQ_XCM_AGG_VAL_SEL_WORD5 /* UCM agg val selection (HW) */ #define DQ_UCM_AGG_VAL_SEL_WORD0 0 @@ -406,6 +406,7 @@ /* Number of Protocol Indices per Status Block */ #define PIS_PER_SB_E4 12 +#define MAX_PIS_PER_SB PIS_PER_SB #define CAU_HC_STOPPED_STATE 3 #define CAU_HC_DISABLE_STATE 4 @@ -436,8 +437,6 @@ #define IGU_MEM_PBA_MSIX_RESERVED_UPPER 0x03ff #define IGU_CMD_INT_ACK_BASE 0x0400 -#define IGU_CMD_INT_ACK_UPPER (IGU_CMD_INT_ACK_BASE + \ - MAX_TOT_SB_PER_PATH - 1) #define IGU_CMD_INT_ACK_RESERVED_UPPER 0x05ff #define IGU_CMD_ATTN_BIT_UPD_UPPER 0x05f0 @@ -450,8 +449,6 @@ #define IGU_REG_SISR_MDPC_WOMASK_UPPER 0x05f6 #define IGU_CMD_PROD_UPD_BASE 0x0600 -#define IGU_CMD_PROD_UPD_UPPER (IGU_CMD_PROD_UPD_BASE +\ - MAX_TOT_SB_PER_PATH - 1) #define IGU_CMD_PROD_UPD_RESERVED_UPPER 0x07ff /*****************/ @@ -741,6 +738,8 @@ enum protocol_type { PROTOCOLID_PREROCE, PROTOCOLID_COMMON, PROTOCOLID_RESERVED1, + PROTOCOLID_RDMA, + PROTOCOLID_SCSI, MAX_PROTOCOL_TYPE }; @@ -761,6 +760,10 @@ union rdma_eqe_data { struct rdma_eqe_destroy_qp rdma_destroy_qp_data; }; +struct tstorm_queue_zone { + __le32 reserved[2]; +}; + /* Ustorm Queue Zone */ struct ustorm_eth_queue_zone { struct coalescing_timeset 
int_coalescing_timeset; @@ -883,8 +886,8 @@ struct db_l2_dpm_data { #define DB_L2_DPM_DATA_RESERVED0_SHIFT 27 #define DB_L2_DPM_DATA_SGE_NUM_MASK 0x7 #define DB_L2_DPM_DATA_SGE_NUM_SHIFT 28 -#define DB_L2_DPM_DATA_GFS_SRC_EN_MASK 0x1 -#define DB_L2_DPM_DATA_GFS_SRC_EN_SHIFT 31 +#define DB_L2_DPM_DATA_TGFS_SRC_EN_MASK 0x1 +#define DB_L2_DPM_DATA_TGFS_SRC_EN_SHIFT 31 }; /* Structure for SGE in a DPM doorbell of type DPM_L2_BD */ diff --git a/include/linux/qed/eth_common.h b/include/linux/qed/eth_common.h index d9416ad5ef59..95f5fd615852 100644 --- a/include/linux/qed/eth_common.h +++ b/include/linux/qed/eth_common.h @@ -38,9 +38,11 @@ /********************/ #define ETH_HSI_VER_MAJOR 3 -#define ETH_HSI_VER_MINOR 10 +#define ETH_HSI_VER_MINOR 11 -#define ETH_HSI_VER_NO_PKT_LEN_TUNN 5 +#define ETH_HSI_VER_NO_PKT_LEN_TUNN 5 +/* Maximum number of pinned L2 connections (CIDs) */ +#define ETH_PINNED_CONN_MAX_NUM 32 #define ETH_CACHE_LINE_SIZE 64 #define ETH_RX_CQE_GAP 32 @@ -61,6 +63,7 @@ #define ETH_TX_MIN_BDS_PER_TUNN_IPV6_WITH_EXT_PKT 3 #define ETH_TX_MIN_BDS_PER_IPV6_WITH_EXT_PKT 2 #define ETH_TX_MIN_BDS_PER_PKT_W_LOOPBACK_MODE 2 +#define ETH_TX_MIN_BDS_PER_PKT_W_VPORT_FORWARDING 4 #define ETH_TX_MAX_NON_LSO_PKT_LEN (9700 - (4 + 4 + 12 + 8)) #define ETH_TX_MAX_LSO_HDR_BYTES 510 #define ETH_TX_LSO_WINDOW_BDS_NUM (18 - 1) @@ -75,9 +78,8 @@ #define ETH_NUM_STATISTIC_COUNTERS_QUAD_VF_ZONE \ (ETH_NUM_STATISTIC_COUNTERS - 3 * MAX_NUM_VFS / 4) -/* Maximum number of buffers, used for RX packet placement */ #define ETH_RX_MAX_BUFF_PER_PKT 5 -#define ETH_RX_BD_THRESHOLD 12 +#define ETH_RX_BD_THRESHOLD 16 /* Num of MAC/VLAN filters */ #define ETH_NUM_MAC_FILTERS 512 @@ -96,24 +98,24 @@ #define ETH_RSS_ENGINE_NUM_BB 127 /* TPA constants */ -#define ETH_TPA_MAX_AGGS_NUM 64 -#define ETH_TPA_CQE_START_LEN_LIST_SIZE ETH_RX_MAX_BUFF_PER_PKT -#define ETH_TPA_CQE_CONT_LEN_LIST_SIZE 6 -#define ETH_TPA_CQE_END_LEN_LIST_SIZE 4 +#define ETH_TPA_MAX_AGGS_NUM 64 +#define ETH_TPA_CQE_START_BW_LEN_LIST_SIZE 2 +#define ETH_TPA_CQE_CONT_LEN_LIST_SIZE 6 +#define ETH_TPA_CQE_END_LEN_LIST_SIZE 4 /* Control frame check constants */ -#define ETH_CTL_FRAME_ETH_TYPE_NUM 4 +#define ETH_CTL_FRAME_ETH_TYPE_NUM 4 /* GFS constants */ #define ETH_GFT_TRASHCAN_VPORT 0x1FF /* GFT drop flow vport number */ /* Destination port mode */ -enum dest_port_mode { - DEST_PORT_PHY, - DEST_PORT_LOOPBACK, - DEST_PORT_PHY_LOOPBACK, - DEST_PORT_DROP, - MAX_DEST_PORT_MODE +enum dst_port_mode { + DST_PORT_PHY, + DST_PORT_LOOPBACK, + DST_PORT_PHY_LOOPBACK, + DST_PORT_DROP, + MAX_DST_PORT_MODE }; /* Ethernet address type */ @@ -167,8 +169,8 @@ struct eth_tx_data_2nd_bd { #define ETH_TX_DATA_2ND_BD_TUNN_INNER_L2_HDR_SIZE_W_SHIFT 0 #define ETH_TX_DATA_2ND_BD_TUNN_INNER_ETH_TYPE_MASK 0x3 #define ETH_TX_DATA_2ND_BD_TUNN_INNER_ETH_TYPE_SHIFT 4 -#define ETH_TX_DATA_2ND_BD_DEST_PORT_MODE_MASK 0x3 -#define ETH_TX_DATA_2ND_BD_DEST_PORT_MODE_SHIFT 6 +#define ETH_TX_DATA_2ND_BD_DST_PORT_MODE_MASK 0x3 +#define ETH_TX_DATA_2ND_BD_DST_PORT_MODE_SHIFT 6 #define ETH_TX_DATA_2ND_BD_START_BD_MASK 0x1 #define ETH_TX_DATA_2ND_BD_START_BD_SHIFT 8 #define ETH_TX_DATA_2ND_BD_TUNN_TYPE_MASK 0x3 @@ -244,8 +246,9 @@ struct eth_fast_path_rx_reg_cqe { struct eth_tunnel_parsing_flags tunnel_pars_flags; u8 bd_num; u8 reserved; - __le16 flow_id; - u8 reserved1[11]; + __le16 reserved2; + __le32 flow_id_or_resource_id; + u8 reserved1[7]; struct eth_pmd_flow_flags pmd_flags; }; @@ -296,9 +299,10 @@ struct eth_fast_path_rx_tpa_start_cqe { struct eth_tunnel_parsing_flags tunnel_pars_flags; 
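/* (Editorial note, not part of the patch: the lines below give the
 * TPA-start CQE the same treatment the regular-path CQE received above --
 * the 16-bit flow_id widens to a 32-bit flow_id_or_resource_id, and the
 * five-entry ext_bd_len_list shrinks to the two-entry bw_ext_bd_len_list
 * that qede_tpa_start() reads via le16_to_cpu() in the qede_fp.c hunk
 * earlier in this patch.)
 */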
u8 tpa_agg_index; u8 header_len; - __le16 ext_bd_len_list[ETH_TPA_CQE_START_LEN_LIST_SIZE]; - __le16 flow_id; - u8 reserved; + __le16 bw_ext_bd_len_list[ETH_TPA_CQE_START_BW_LEN_LIST_SIZE]; + __le16 reserved2; + __le32 flow_id_or_resource_id; + u8 reserved[3]; struct eth_pmd_flow_flags pmd_flags; }; @@ -407,6 +411,29 @@ struct eth_tx_3rd_bd { struct eth_tx_data_3rd_bd data; }; +/* The parsing information data for the forth tx bd of a given packet. */ +struct eth_tx_data_4th_bd { + u8 dst_vport_id; + u8 reserved4; + __le16 bitfields; +#define ETH_TX_DATA_4TH_BD_DST_VPORT_ID_VALID_MASK 0x1 +#define ETH_TX_DATA_4TH_BD_DST_VPORT_ID_VALID_SHIFT 0 +#define ETH_TX_DATA_4TH_BD_RESERVED1_MASK 0x7F +#define ETH_TX_DATA_4TH_BD_RESERVED1_SHIFT 1 +#define ETH_TX_DATA_4TH_BD_START_BD_MASK 0x1 +#define ETH_TX_DATA_4TH_BD_START_BD_SHIFT 8 +#define ETH_TX_DATA_4TH_BD_RESERVED2_MASK 0x7F +#define ETH_TX_DATA_4TH_BD_RESERVED2_SHIFT 9 + __le16 reserved3; +}; + +/* The forth tx bd of a given packet */ +struct eth_tx_4th_bd { + struct regpair addr; /* Single continuous buffer */ + __le16 nbytes; /* Number of bytes in this BD */ + struct eth_tx_data_4th_bd data; /* Parsing information data */ +}; + /* Complementary information for the regular tx bd of a given packet */ struct eth_tx_data_bd { __le16 reserved0; @@ -431,6 +458,7 @@ union eth_tx_bd_types { struct eth_tx_1st_bd first_bd; struct eth_tx_2nd_bd second_bd; struct eth_tx_3rd_bd third_bd; + struct eth_tx_4th_bd fourth_bd; struct eth_tx_bd reg_bd; }; @@ -443,6 +471,12 @@ enum eth_tx_tunn_type { MAX_ETH_TX_TUNN_TYPE }; +/* Mstorm Queue Zone */ +struct mstorm_eth_queue_zone { + struct eth_rx_prod_data rx_producers; + __le32 reserved[3]; +}; + /* Ystorm Queue Zone */ struct xstorm_eth_queue_zone { struct coalescing_timeset int_coalescing_timeset; -- cgit v1.2.3-71-gd317 From 8a52bbab39c9791480cbae86c69ad0d47f62972e Mon Sep 17 00:00:00 2001 From: Michal Kalderon Date: Mon, 27 Jan 2020 15:26:17 +0200 Subject: qed: Debug feature: ilt and mdump Part of the FW drop includes new debug capabilities implemented in the qed_debug file. This patch dumps additional information during ethtool -d for better debugging. The data dumped is the ILT (internal logical table) and information gathered by the management firmware in case there was a crash and the driver was not able to extract the information (mdump). Signed-off-by: Ariel Elior Signed-off-by: Michal Kalderon Signed-off-by: David S.
Miller --- drivers/net/ethernet/qlogic/qed/qed.h | 1 + drivers/net/ethernet/qlogic/qed/qed_cxt.c | 357 +++++++++---------- drivers/net/ethernet/qlogic/qed/qed_cxt.h | 130 +++++++ drivers/net/ethernet/qlogic/qed/qed_debug.c | 521 +++++++++++++++++++++++++++- drivers/net/ethernet/qlogic/qed/qed_debug.h | 4 + drivers/net/ethernet/qlogic/qed/qed_hsi.h | 18 + drivers/net/ethernet/qlogic/qed/qed_mcp.c | 3 + include/linux/qed/qed_if.h | 1 + 8 files changed, 821 insertions(+), 214 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h index fedb1ab0b223..a6dc40681a32 100644 --- a/drivers/net/ethernet/qlogic/qed/qed.h +++ b/drivers/net/ethernet/qlogic/qed/qed.h @@ -877,6 +877,7 @@ struct qed_dev { struct qed_cb_ll2_info *ll2; u8 ll2_mac_address[ETH_ALEN]; #endif + bool disable_ilt_dump; DECLARE_HASHTABLE(connections, 10); const struct firmware *firmware; diff --git a/drivers/net/ethernet/qlogic/qed/qed_cxt.c b/drivers/net/ethernet/qlogic/qed/qed_cxt.c index 5ebafbc379db..fbfff2b1dc93 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_cxt.c +++ b/drivers/net/ethernet/qlogic/qed/qed_cxt.c @@ -50,12 +50,6 @@ #include "qed_reg_addr.h" #include "qed_sriov.h" -/* Max number of connection types in HW (DQ/CDU etc.) */ -#define MAX_CONN_TYPES PROTOCOLID_COMMON -#define NUM_TASK_TYPES 2 -#define NUM_TASK_PF_SEGMENTS 4 -#define NUM_TASK_VF_SEGMENTS 1 - /* QM constants */ #define QM_PQ_ELEMENT_SIZE 4 /* in bytes */ @@ -123,126 +117,6 @@ struct src_ent { /* Alignment is inherent to the type1_task_context structure */ #define TYPE1_TASK_CXT_SIZE(p_hwfn) sizeof(union type1_task_context) -/* PF per protocl configuration object */ -#define TASK_SEGMENTS (NUM_TASK_PF_SEGMENTS + NUM_TASK_VF_SEGMENTS) -#define TASK_SEGMENT_VF (NUM_TASK_PF_SEGMENTS) - -struct qed_tid_seg { - u32 count; - u8 type; - bool has_fl_mem; -}; - -struct qed_conn_type_cfg { - u32 cid_count; - u32 cids_per_vf; - struct qed_tid_seg tid_seg[TASK_SEGMENTS]; -}; - -/* ILT Client configuration, Per connection type (protocol) resources. 
*/ -#define ILT_CLI_PF_BLOCKS (1 + NUM_TASK_PF_SEGMENTS * 2) -#define ILT_CLI_VF_BLOCKS (1 + NUM_TASK_VF_SEGMENTS * 2) -#define CDUC_BLK (0) -#define SRQ_BLK (0) -#define CDUT_SEG_BLK(n) (1 + (u8)(n)) -#define CDUT_FL_SEG_BLK(n, X) (1 + (n) + NUM_TASK_ ## X ## _SEGMENTS) - -enum ilt_clients { - ILT_CLI_CDUC, - ILT_CLI_CDUT, - ILT_CLI_QM, - ILT_CLI_TM, - ILT_CLI_SRC, - ILT_CLI_TSDM, - ILT_CLI_MAX -}; - -struct ilt_cfg_pair { - u32 reg; - u32 val; -}; - -struct qed_ilt_cli_blk { - u32 total_size; /* 0 means not active */ - u32 real_size_in_page; - u32 start_line; - u32 dynamic_line_cnt; -}; - -struct qed_ilt_client_cfg { - bool active; - - /* ILT boundaries */ - struct ilt_cfg_pair first; - struct ilt_cfg_pair last; - struct ilt_cfg_pair p_size; - - /* ILT client blocks for PF */ - struct qed_ilt_cli_blk pf_blks[ILT_CLI_PF_BLOCKS]; - u32 pf_total_lines; - - /* ILT client blocks for VFs */ - struct qed_ilt_cli_blk vf_blks[ILT_CLI_VF_BLOCKS]; - u32 vf_total_lines; -}; - -/* Per Path - - * ILT shadow table - * Protocol acquired CID lists - * PF start line in ILT - */ -struct qed_dma_mem { - dma_addr_t p_phys; - void *p_virt; - size_t size; -}; - -struct qed_cid_acquired_map { - u32 start_cid; - u32 max_count; - unsigned long *cid_map; -}; - -struct qed_cxt_mngr { - /* Per protocl configuration */ - struct qed_conn_type_cfg conn_cfg[MAX_CONN_TYPES]; - - /* computed ILT structure */ - struct qed_ilt_client_cfg clients[ILT_CLI_MAX]; - - /* Task type sizes */ - u32 task_type_size[NUM_TASK_TYPES]; - - /* total number of VFs for this hwfn - - * ALL VFs are symmetric in terms of HW resources - */ - u32 vf_count; - - /* Acquired CIDs */ - struct qed_cid_acquired_map acquired[MAX_CONN_TYPES]; - - struct qed_cid_acquired_map - acquired_vf[MAX_CONN_TYPES][MAX_NUM_VFS]; - - /* ILT shadow table */ - struct qed_dma_mem *ilt_shadow; - u32 pf_start_line; - - /* Mutex for a dynamic ILT allocation */ - struct mutex mutex; - - /* SRC T2 */ - struct qed_dma_mem *t2; - u32 t2_num_pages; - u64 first_free; - u64 last_free; - - /* total number of SRQ's for this hwfn */ - u32 srq_count; - - /* Maximal number of L2 steering filters */ - u32 arfs_count; -}; static bool src_proto(enum protocol_type type) { return type == PROTOCOLID_ISCSI || @@ -880,30 +754,60 @@ u32 qed_cxt_cfg_ilt_compute_excess(struct qed_hwfn *p_hwfn, u32 used_lines) static void qed_cxt_src_t2_free(struct qed_hwfn *p_hwfn) { - struct qed_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; + struct qed_src_t2 *p_t2 = &p_hwfn->p_cxt_mngr->src_t2; u32 i; - if (!p_mngr->t2) + if (!p_t2 || !p_t2->dma_mem) return; - for (i = 0; i < p_mngr->t2_num_pages; i++) - if (p_mngr->t2[i].p_virt) + for (i = 0; i < p_t2->num_pages; i++) + if (p_t2->dma_mem[i].virt_addr) dma_free_coherent(&p_hwfn->cdev->pdev->dev, - p_mngr->t2[i].size, - p_mngr->t2[i].p_virt, - p_mngr->t2[i].p_phys); + p_t2->dma_mem[i].size, + p_t2->dma_mem[i].virt_addr, + p_t2->dma_mem[i].phys_addr); + + kfree(p_t2->dma_mem); + p_t2->dma_mem = NULL; +} + +static int +qed_cxt_t2_alloc_pages(struct qed_hwfn *p_hwfn, + struct qed_src_t2 *p_t2, u32 total_size, u32 page_size) +{ + void **p_virt; + u32 size, i; + + if (!p_t2 || !p_t2->dma_mem) + return -EINVAL; - kfree(p_mngr->t2); - p_mngr->t2 = NULL; + for (i = 0; i < p_t2->num_pages; i++) { + size = min_t(u32, total_size, page_size); + p_virt = &p_t2->dma_mem[i].virt_addr; + + *p_virt = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev, + size, + &p_t2->dma_mem[i].phys_addr, + GFP_KERNEL); + if (!p_t2->dma_mem[i].virt_addr) + return -ENOMEM; + + memset(*p_virt, 0, size); + 
p_t2->dma_mem[i].size = size; + total_size -= size; + } + + return 0; } static int qed_cxt_src_t2_alloc(struct qed_hwfn *p_hwfn) { struct qed_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; u32 conn_num, total_size, ent_per_page, psz, i; + struct phys_mem_desc *p_t2_last_page; struct qed_ilt_client_cfg *p_src; struct qed_src_iids src_iids; - struct qed_dma_mem *p_t2; + struct qed_src_t2 *p_t2; int rc; memset(&src_iids, 0, sizeof(src_iids)); @@ -921,49 +825,39 @@ static int qed_cxt_src_t2_alloc(struct qed_hwfn *p_hwfn) /* use the same page size as the SRC ILT client */ psz = ILT_PAGE_IN_BYTES(p_src->p_size.val); - p_mngr->t2_num_pages = DIV_ROUND_UP(total_size, psz); + p_t2 = &p_mngr->src_t2; + p_t2->num_pages = DIV_ROUND_UP(total_size, psz); /* allocate t2 */ - p_mngr->t2 = kcalloc(p_mngr->t2_num_pages, sizeof(struct qed_dma_mem), - GFP_KERNEL); - if (!p_mngr->t2) { + p_t2->dma_mem = kcalloc(p_t2->num_pages, sizeof(struct phys_mem_desc), + GFP_KERNEL); + if (!p_t2->dma_mem) { + DP_NOTICE(p_hwfn, "Failed to allocate t2 table\n"); rc = -ENOMEM; goto t2_fail; } - /* allocate t2 pages */ - for (i = 0; i < p_mngr->t2_num_pages; i++) { - u32 size = min_t(u32, total_size, psz); - void **p_virt = &p_mngr->t2[i].p_virt; - - *p_virt = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev, size, - &p_mngr->t2[i].p_phys, - GFP_KERNEL); - if (!p_mngr->t2[i].p_virt) { - rc = -ENOMEM; - goto t2_fail; - } - p_mngr->t2[i].size = size; - total_size -= size; - } + rc = qed_cxt_t2_alloc_pages(p_hwfn, p_t2, total_size, psz); + if (rc) + goto t2_fail; /* Set the t2 pointers */ /* entries per page - must be a power of two */ ent_per_page = psz / sizeof(struct src_ent); - p_mngr->first_free = (u64) p_mngr->t2[0].p_phys; + p_t2->first_free = (u64)p_t2->dma_mem[0].phys_addr; - p_t2 = &p_mngr->t2[(conn_num - 1) / ent_per_page]; - p_mngr->last_free = (u64) p_t2->p_phys + + p_t2_last_page = &p_t2->dma_mem[(conn_num - 1) / ent_per_page]; + p_t2->last_free = (u64)p_t2_last_page->phys_addr + ((conn_num - 1) & (ent_per_page - 1)) * sizeof(struct src_ent); - for (i = 0; i < p_mngr->t2_num_pages; i++) { + for (i = 0; i < p_t2->num_pages; i++) { u32 ent_num = min_t(u32, ent_per_page, conn_num); - struct src_ent *entries = p_mngr->t2[i].p_virt; - u64 p_ent_phys = (u64) p_mngr->t2[i].p_phys, val; + struct src_ent *entries = p_t2->dma_mem[i].virt_addr; + u64 p_ent_phys = (u64)p_t2->dma_mem[i].phys_addr, val; u32 j; for (j = 0; j < ent_num - 1; j++) { @@ -971,8 +865,8 @@ static int qed_cxt_src_t2_alloc(struct qed_hwfn *p_hwfn) entries[j].next = cpu_to_be64(val); } - if (i < p_mngr->t2_num_pages - 1) - val = (u64) p_mngr->t2[i + 1].p_phys; + if (i < p_t2->num_pages - 1) + val = (u64)p_t2->dma_mem[i + 1].phys_addr; else val = 0; entries[j].next = cpu_to_be64(val); @@ -988,7 +882,7 @@ t2_fail: } #define for_each_ilt_valid_client(pos, clients) \ - for (pos = 0; pos < ILT_CLI_MAX; pos++) \ + for (pos = 0; pos < MAX_ILT_CLIENTS; pos++) \ if (!clients[pos].active) { \ continue; \ } else \ @@ -1014,13 +908,13 @@ static void qed_ilt_shadow_free(struct qed_hwfn *p_hwfn) ilt_size = qed_cxt_ilt_shadow_size(p_cli); for (i = 0; p_mngr->ilt_shadow && i < ilt_size; i++) { - struct qed_dma_mem *p_dma = &p_mngr->ilt_shadow[i]; + struct phys_mem_desc *p_dma = &p_mngr->ilt_shadow[i]; - if (p_dma->p_virt) + if (p_dma->virt_addr) dma_free_coherent(&p_hwfn->cdev->pdev->dev, - p_dma->size, p_dma->p_virt, - p_dma->p_phys); - p_dma->p_virt = NULL; + p_dma->size, p_dma->virt_addr, + p_dma->phys_addr); + p_dma->virt_addr = NULL; } kfree(p_mngr->ilt_shadow); } @@ -1030,7 
+924,7 @@ static int qed_ilt_blk_alloc(struct qed_hwfn *p_hwfn, enum ilt_clients ilt_client, u32 start_line_offset) { - struct qed_dma_mem *ilt_shadow = p_hwfn->p_cxt_mngr->ilt_shadow; + struct phys_mem_desc *ilt_shadow = p_hwfn->p_cxt_mngr->ilt_shadow; u32 lines, line, sz_left, lines_to_skip = 0; /* Special handling for RoCE that supports dynamic allocation */ @@ -1059,8 +953,8 @@ static int qed_ilt_blk_alloc(struct qed_hwfn *p_hwfn, if (!p_virt) return -ENOMEM; - ilt_shadow[line].p_phys = p_phys; - ilt_shadow[line].p_virt = p_virt; + ilt_shadow[line].phys_addr = p_phys; + ilt_shadow[line].virt_addr = p_virt; ilt_shadow[line].size = size; DP_VERBOSE(p_hwfn, QED_MSG_ILT, @@ -1083,7 +977,7 @@ static int qed_ilt_shadow_alloc(struct qed_hwfn *p_hwfn) int rc; size = qed_cxt_ilt_shadow_size(clients); - p_mngr->ilt_shadow = kcalloc(size, sizeof(struct qed_dma_mem), + p_mngr->ilt_shadow = kcalloc(size, sizeof(struct phys_mem_desc), GFP_KERNEL); if (!p_mngr->ilt_shadow) { rc = -ENOMEM; @@ -1092,7 +986,7 @@ static int qed_ilt_shadow_alloc(struct qed_hwfn *p_hwfn) DP_VERBOSE(p_hwfn, QED_MSG_ILT, "Allocated 0x%x bytes for ilt shadow\n", - (u32)(size * sizeof(struct qed_dma_mem))); + (u32)(size * sizeof(struct phys_mem_desc))); for_each_ilt_valid_client(i, clients) { for (j = 0; j < ILT_CLI_PF_BLOCKS; j++) { @@ -1238,15 +1132,20 @@ int qed_cxt_mngr_alloc(struct qed_hwfn *p_hwfn) clients[ILT_CLI_TSDM].last.reg = ILT_CFG_REG(TSDM, LAST_ILT); clients[ILT_CLI_TSDM].p_size.reg = ILT_CFG_REG(TSDM, P_SIZE); /* default ILT page size for all clients is 64K */ - for (i = 0; i < ILT_CLI_MAX; i++) + for (i = 0; i < MAX_ILT_CLIENTS; i++) p_mngr->clients[i].p_size.val = ILT_DEFAULT_HW_P_SIZE; + p_mngr->conn_ctx_size = CONN_CXT_SIZE(p_hwfn); + /* Initialize task sizes */ p_mngr->task_type_size[0] = TYPE0_TASK_CXT_SIZE(p_hwfn); p_mngr->task_type_size[1] = TYPE1_TASK_CXT_SIZE(p_hwfn); - if (p_hwfn->cdev->p_iov_info) + if (p_hwfn->cdev->p_iov_info) { p_mngr->vf_count = p_hwfn->cdev->p_iov_info->total_vfs; + p_mngr->first_vf_in_pf = + p_hwfn->cdev->p_iov_info->first_vf_in_pf; + } /* Initialize the dynamic ILT allocation mutex */ mutex_init(&p_mngr->mutex); @@ -1673,7 +1572,7 @@ static void qed_ilt_init_pf(struct qed_hwfn *p_hwfn) { struct qed_ilt_client_cfg *clients; struct qed_cxt_mngr *p_mngr; - struct qed_dma_mem *p_shdw; + struct phys_mem_desc *p_shdw; u32 line, rt_offst, i; qed_ilt_bounds_init(p_hwfn); @@ -1698,15 +1597,15 @@ static void qed_ilt_init_pf(struct qed_hwfn *p_hwfn) /** p_virt could be NULL incase of dynamic * allocation */ - if (p_shdw[line].p_virt) { + if (p_shdw[line].virt_addr) { SET_FIELD(ilt_hw_entry, ILT_ENTRY_VALID, 1ULL); SET_FIELD(ilt_hw_entry, ILT_ENTRY_PHY_ADDR, - (p_shdw[line].p_phys >> 12)); + (p_shdw[line].phys_addr >> 12)); DP_VERBOSE(p_hwfn, QED_MSG_ILT, "Setting RT[0x%08x] from ILT[0x%08x] [Client is %d] to Physical addr: 0x%llx\n", rt_offst, line, i, - (u64)(p_shdw[line].p_phys >> 12)); + (u64)(p_shdw[line].phys_addr >> 12)); } STORE_RT_REG_AGG(p_hwfn, rt_offst, ilt_hw_entry); @@ -2049,10 +1948,10 @@ int qed_cxt_get_cid_info(struct qed_hwfn *p_hwfn, struct qed_cxt_info *p_info) line = p_info->iid / cxts_per_p; /* Make sure context is allocated (dynamic allocation) */ - if (!p_mngr->ilt_shadow[line].p_virt) + if (!p_mngr->ilt_shadow[line].virt_addr) return -EINVAL; - p_info->p_cxt = p_mngr->ilt_shadow[line].p_virt + + p_info->p_cxt = p_mngr->ilt_shadow[line].virt_addr + p_info->iid % cxts_per_p * conn_cxt_size; DP_VERBOSE(p_hwfn, (QED_MSG_ILT | QED_MSG_CXT), @@ -2233,7 +2132,7 @@ int 
qed_cxt_get_tid_mem_info(struct qed_hwfn *p_hwfn, for (i = 0; i < total_lines; i++) { shadow_line = i + p_fl_seg->start_line - p_hwfn->p_cxt_mngr->pf_start_line; - p_info->blocks[i] = p_mngr->ilt_shadow[shadow_line].p_virt; + p_info->blocks[i] = p_mngr->ilt_shadow[shadow_line].virt_addr; } p_info->waste = ILT_PAGE_IN_BYTES(p_cli->p_size.val) - p_fl_seg->real_size_in_page; @@ -2295,7 +2194,7 @@ qed_cxt_dynamic_ilt_alloc(struct qed_hwfn *p_hwfn, mutex_lock(&p_hwfn->p_cxt_mngr->mutex); - if (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].p_virt) + if (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].virt_addr) goto out0; p_ptt = qed_ptt_acquire(p_hwfn); @@ -2333,8 +2232,8 @@ qed_cxt_dynamic_ilt_alloc(struct qed_hwfn *p_hwfn, } } - p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].p_virt = p_virt; - p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].p_phys = p_phys; + p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].virt_addr = p_virt; + p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].phys_addr = p_phys; p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].size = p_blk->real_size_in_page; @@ -2344,9 +2243,9 @@ qed_cxt_dynamic_ilt_alloc(struct qed_hwfn *p_hwfn, ilt_hw_entry = 0; SET_FIELD(ilt_hw_entry, ILT_ENTRY_VALID, 1ULL); - SET_FIELD(ilt_hw_entry, - ILT_ENTRY_PHY_ADDR, - (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].p_phys >> 12)); + SET_FIELD(ilt_hw_entry, ILT_ENTRY_PHY_ADDR, + (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].phys_addr + >> 12)); /* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is a wide-bus */ qed_dmae_host2grc(p_hwfn, p_ptt, (u64) (uintptr_t)&ilt_hw_entry, @@ -2433,16 +2332,16 @@ qed_cxt_free_ilt_range(struct qed_hwfn *p_hwfn, } for (i = shadow_start_line; i < shadow_end_line; i++) { - if (!p_hwfn->p_cxt_mngr->ilt_shadow[i].p_virt) + if (!p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr) continue; dma_free_coherent(&p_hwfn->cdev->pdev->dev, p_hwfn->p_cxt_mngr->ilt_shadow[i].size, - p_hwfn->p_cxt_mngr->ilt_shadow[i].p_virt, - p_hwfn->p_cxt_mngr->ilt_shadow[i].p_phys); + p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr, + p_hwfn->p_cxt_mngr->ilt_shadow[i].phys_addr); - p_hwfn->p_cxt_mngr->ilt_shadow[i].p_virt = NULL; - p_hwfn->p_cxt_mngr->ilt_shadow[i].p_phys = 0; + p_hwfn->p_cxt_mngr->ilt_shadow[i].virt_addr = NULL; + p_hwfn->p_cxt_mngr->ilt_shadow[i].phys_addr = 0; p_hwfn->p_cxt_mngr->ilt_shadow[i].size = 0; /* compute absolute offset */ @@ -2546,8 +2445,76 @@ int qed_cxt_get_task_ctx(struct qed_hwfn *p_hwfn, ilt_idx = tid / num_tids_per_block + p_seg->start_line - p_mngr->pf_start_line; - *pp_task_ctx = (u8 *)p_mngr->ilt_shadow[ilt_idx].p_virt + + *pp_task_ctx = (u8 *)p_mngr->ilt_shadow[ilt_idx].virt_addr + (tid % num_tids_per_block) * tid_size; return 0; } + +static u16 qed_blk_calculate_pages(struct qed_ilt_cli_blk *p_blk) +{ + if (p_blk->real_size_in_page == 0) + return 0; + + return DIV_ROUND_UP(p_blk->total_size, p_blk->real_size_in_page); +} + +u16 qed_get_cdut_num_pf_init_pages(struct qed_hwfn *p_hwfn) +{ + struct qed_ilt_client_cfg *p_cli; + struct qed_ilt_cli_blk *p_blk; + u16 i, pages = 0; + + p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT]; + for (i = 0; i < NUM_TASK_PF_SEGMENTS; i++) { + p_blk = &p_cli->pf_blks[CDUT_FL_SEG_BLK(i, PF)]; + pages += qed_blk_calculate_pages(p_blk); + } + + return pages; +} + +u16 qed_get_cdut_num_vf_init_pages(struct qed_hwfn *p_hwfn) +{ + struct qed_ilt_client_cfg *p_cli; + struct qed_ilt_cli_blk *p_blk; + u16 i, pages = 0; + + p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT]; + for (i = 0; i < NUM_TASK_VF_SEGMENTS; i++) { + p_blk = &p_cli->vf_blks[CDUT_FL_SEG_BLK(i, 
VF)]; + pages += qed_blk_calculate_pages(p_blk); + } + + return pages; +} + +u16 qed_get_cdut_num_pf_work_pages(struct qed_hwfn *p_hwfn) +{ + struct qed_ilt_client_cfg *p_cli; + struct qed_ilt_cli_blk *p_blk; + u16 i, pages = 0; + + p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT]; + for (i = 0; i < NUM_TASK_PF_SEGMENTS; i++) { + p_blk = &p_cli->pf_blks[CDUT_SEG_BLK(i)]; + pages += qed_blk_calculate_pages(p_blk); + } + + return pages; +} + +u16 qed_get_cdut_num_vf_work_pages(struct qed_hwfn *p_hwfn) +{ + struct qed_ilt_client_cfg *p_cli; + struct qed_ilt_cli_blk *p_blk; + u16 pages = 0, i; + + p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT]; + for (i = 0; i < NUM_TASK_VF_SEGMENTS; i++) { + p_blk = &p_cli->vf_blks[CDUT_SEG_BLK(i)]; + pages += qed_blk_calculate_pages(p_blk); + } + + return pages; +} diff --git a/drivers/net/ethernet/qlogic/qed/qed_cxt.h b/drivers/net/ethernet/qlogic/qed/qed_cxt.h index 758a8b4c0de8..c4e815f6cabd 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_cxt.h +++ b/drivers/net/ethernet/qlogic/qed/qed_cxt.h @@ -242,4 +242,134 @@ int qed_cxt_free_proto_ilt(struct qed_hwfn *p_hwfn, enum protocol_type proto); #define QED_CTX_FL_MEM 1 int qed_cxt_get_task_ctx(struct qed_hwfn *p_hwfn, u32 tid, u8 ctx_type, void **task_ctx); + +/* Max number of connection types in HW (DQ/CDU etc.) */ +#define MAX_CONN_TYPES PROTOCOLID_COMMON +#define NUM_TASK_TYPES 2 +#define NUM_TASK_PF_SEGMENTS 4 +#define NUM_TASK_VF_SEGMENTS 1 + +/* PF per protocol configuration object */ +#define TASK_SEGMENTS (NUM_TASK_PF_SEGMENTS + NUM_TASK_VF_SEGMENTS) +#define TASK_SEGMENT_VF (NUM_TASK_PF_SEGMENTS) + +struct qed_tid_seg { + u32 count; + u8 type; + bool has_fl_mem; +}; + +struct qed_conn_type_cfg { + u32 cid_count; + u32 cids_per_vf; + struct qed_tid_seg tid_seg[TASK_SEGMENTS]; +}; + +/* ILT Client configuration, + * Per connection type (protocol) resources (cids, tis, vf cids etc.) + * 1 - for connection context (CDUC) and for each task context we need two + * values, for regular task context and for force load memory + */
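For illustration, with the segment counts defined above (NUM_TASK_PF_SEGMENTS = 4, NUM_TASK_VF_SEGMENTS = 1), the block-indexing macros that follow resolve to the layout sketched in this worked example:

	/* Worked example of the ILT client block indexing:
	 *
	 *   PF: ILT_CLI_PF_BLOCKS = 1 + 4 * 2 = 9 blocks
	 *       CDUC_BLK                  -> index 0
	 *       CDUT_SEG_BLK(0..3)        -> indices 1..4 (regular task contexts)
	 *       CDUT_FL_SEG_BLK(0..3, PF) -> indices 5..8 (force-load memory)
	 *
	 *   VF: ILT_CLI_VF_BLOCKS = 1 + 1 * 2 = 3 blocks
	 *       CDUC_BLK               -> index 0
	 *       CDUT_SEG_BLK(0)        -> index 1
	 *       CDUT_FL_SEG_BLK(0, VF) -> index 2
	 */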
+#define ILT_CLI_PF_BLOCKS (1 + NUM_TASK_PF_SEGMENTS * 2) +#define ILT_CLI_VF_BLOCKS (1 + NUM_TASK_VF_SEGMENTS * 2) +#define CDUC_BLK (0) +#define SRQ_BLK (0) +#define CDUT_SEG_BLK(n) (1 + (u8)(n)) +#define CDUT_FL_SEG_BLK(n, X) (1 + (n) + NUM_TASK_ ## X ## _SEGMENTS) + +struct ilt_cfg_pair { + u32 reg; + u32 val; +}; + +struct qed_ilt_cli_blk { + u32 total_size; /* 0 means not active */ + u32 real_size_in_page; + u32 start_line; + u32 dynamic_line_offset; + u32 dynamic_line_cnt; +}; + +struct qed_ilt_client_cfg { + bool active; + + /* ILT boundaries */ + struct ilt_cfg_pair first; + struct ilt_cfg_pair last; + struct ilt_cfg_pair p_size; + + /* ILT client blocks for PF */ + struct qed_ilt_cli_blk pf_blks[ILT_CLI_PF_BLOCKS]; + u32 pf_total_lines; + + /* ILT client blocks for VFs */ + struct qed_ilt_cli_blk vf_blks[ILT_CLI_VF_BLOCKS]; + u32 vf_total_lines; +}; + +struct qed_cid_acquired_map { + u32 start_cid; + u32 max_count; + unsigned long *cid_map; +}; + +struct qed_src_t2 { + struct phys_mem_desc *dma_mem; + u32 num_pages; + u64 first_free; + u64 last_free; +}; + +struct qed_cxt_mngr { + /* Per protocol configuration */ + struct qed_conn_type_cfg conn_cfg[MAX_CONN_TYPES]; + + /* computed ILT structure */ + struct qed_ilt_client_cfg clients[MAX_ILT_CLIENTS]; + + /* Task type sizes */ + u32 task_type_size[NUM_TASK_TYPES]; + + /* total number of VFs for this hwfn - + * ALL VFs are symmetric in terms of HW resources + */ + u32 vf_count; + u32 first_vf_in_pf; + + /* Acquired CIDs */ + struct qed_cid_acquired_map acquired[MAX_CONN_TYPES]; + + struct qed_cid_acquired_map + acquired_vf[MAX_CONN_TYPES][MAX_NUM_VFS]; + + /* ILT shadow table */ + struct phys_mem_desc *ilt_shadow; + u32 ilt_shadow_size; + u32 pf_start_line; + + /* Mutex for a dynamic ILT allocation */ + struct mutex mutex; + + /* SRC T2 */ + struct qed_src_t2 src_t2; + u32 t2_num_pages; + u64 first_free; + u64 last_free; + + /* total number of SRQ's for this hwfn */ + u32 srq_count; + + /* Maximal number of L2 steering filters */ + u32 arfs_count; + + u8 task_type_id; + u16 task_ctx_size; + u16 conn_ctx_size; +}; + +u16 qed_get_cdut_num_pf_init_pages(struct qed_hwfn *p_hwfn); +u16 qed_get_cdut_num_vf_init_pages(struct qed_hwfn *p_hwfn); +u16 qed_get_cdut_num_pf_work_pages(struct qed_hwfn *p_hwfn); +u16 qed_get_cdut_num_vf_work_pages(struct qed_hwfn *p_hwfn); + #endif diff --git a/drivers/net/ethernet/qlogic/qed/qed_debug.c b/drivers/net/ethernet/qlogic/qed/qed_debug.c index 859caa6c1a1f..17a9b30e6c62 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_debug.c +++ b/drivers/net/ethernet/qlogic/qed/qed_debug.c @@ -7,6 +7,7 @@ #include #include #include "qed.h" +#include "qed_cxt.h" #include "qed_hsi.h" #include "qed_hw.h" #include "qed_mcp.h" @@ -183,6 +184,7 @@ enum platform_ids { /* Chip constant definitions */ struct chip_defs { const char *name; + u32 num_ilt_pages; }; /* Platform constant definitions */ @@ -380,6 +382,9 @@ struct split_type_defs { #define IDLE_CHK_RESULT_REG_HDR_DWORDS \ BYTES_TO_DWORDS(sizeof(struct dbg_idle_chk_result_reg_hdr)) +#define PAGE_MEM_DESC_SIZE_DWORDS \ + BYTES_TO_DWORDS(sizeof(struct phys_mem_desc)) + #define IDLE_CHK_MAX_ENTRIES_SIZE 32 /* The sizes and offsets below are specified in bits */ @@ -464,9 +469,8 @@ static struct dbg_array s_dbg_arrays[MAX_BIN_DBG_BUFFER_TYPE] = { {NULL} }; /* Chip constant definitions array */ static struct chip_defs
s_chip_defs[MAX_CHIP_IDS] = { - {"bb"}, - {"ah"}, - {"reserved"}, + {"bb", PSWRQ2_REG_ILT_MEMORY_SIZE_BB / 2}, + {"ah", PSWRQ2_REG_ILT_MEMORY_SIZE_K2 / 2} }; /* Storm constant definitions array */ @@ -1627,7 +1631,22 @@ static struct grc_param_defs s_grc_param_defs[] = { {{0, 0, 0}, 0, 1, false, false, 0, 0}, /* DBG_GRC_PARAM_NO_FW_VER */ - {{0, 0, 0}, 0, 1, false, false, 0, 0} + {{0, 0, 0}, 0, 1, false, false, 0, 0}, + + /* DBG_GRC_PARAM_RESERVED3 */ + {{0, 0, 0}, 0, 1, false, false, 0, 0}, + + /* DBG_GRC_PARAM_DUMP_MCP_HW_DUMP */ + {{0, 1, 1}, 0, 1, false, false, 0, 0}, + + /* DBG_GRC_PARAM_DUMP_ILT_CDUC */ + {{1, 1, 1}, 0, 1, false, false, 0, 0}, + + /* DBG_GRC_PARAM_DUMP_ILT_CDUT */ + {{1, 1, 1}, 0, 1, false, false, 0, 0}, + + /* DBG_GRC_PARAM_DUMP_CAU_EXT */ + {{0, 0, 0}, 0, 1, false, false, 0, 1} }; static struct rss_mem_defs s_rss_mem_defs[] = { @@ -4992,16 +5011,18 @@ static enum dbg_status qed_protection_override_dump(struct qed_hwfn *p_hwfn, override_window_dwords = qed_rd(p_hwfn, p_ptt, GRC_REG_NUMBER_VALID_OVERRIDE_WINDOW) * PROTECTION_OVERRIDE_ELEMENT_DWORDS; - addr = BYTES_TO_DWORDS(GRC_REG_PROTECTION_OVERRIDE_WINDOW); - offset += qed_grc_dump_addr_range(p_hwfn, - p_ptt, - dump_buf + offset, - true, - addr, - override_window_dwords, - true, SPLIT_TYPE_NONE, 0); - qed_dump_num_param(dump_buf + size_param_offset, dump, "size", - override_window_dwords); + if (override_window_dwords) { + addr = BYTES_TO_DWORDS(GRC_REG_PROTECTION_OVERRIDE_WINDOW); + offset += qed_grc_dump_addr_range(p_hwfn, + p_ptt, + dump_buf + offset, + true, + addr, + override_window_dwords, + true, SPLIT_TYPE_NONE, 0); + qed_dump_num_param(dump_buf + size_param_offset, dump, "size", + override_window_dwords); + } out: /* Dump last section */ offset += qed_dump_last_section(dump_buf, offset, dump); @@ -5088,6 +5109,348 @@ static u32 qed_fw_asserts_dump(struct qed_hwfn *p_hwfn, return offset; } +/* Dumps the specified ILT pages to the specified buffer. + * Returns the dumped size in dwords. + */ +static u32 qed_ilt_dump_pages_range(u32 *dump_buf, + bool dump, + u32 start_page_id, + u32 num_pages, + struct phys_mem_desc *ilt_pages, + bool dump_page_ids) +{ + u32 page_id, end_page_id, offset = 0; + + if (num_pages == 0) + return offset; + + end_page_id = start_page_id + num_pages - 1; + + for (page_id = start_page_id; page_id <= end_page_id; page_id++) { + struct phys_mem_desc *mem_desc = &ilt_pages[page_id]; + + /** + * + * if (page_id >= ->p_cxt_mngr->ilt_shadow_size) + * break; + */ + + if (!ilt_pages[page_id].virt_addr) + continue; + + if (dump_page_ids) { + /* Copy page ID to dump buffer */ + if (dump) + *(dump_buf + offset) = page_id; + offset++; + } else { + /* Copy page memory to dump buffer */ + if (dump) + memcpy(dump_buf + offset, + mem_desc->virt_addr, mem_desc->size); + offset += BYTES_TO_DWORDS(mem_desc->size); + } + } + + return offset; +} + +/* Dumps a section containing the dumped ILT pages. + * Returns the dumped size in dwords. + */ +static u32 qed_ilt_dump_pages_section(struct qed_hwfn *p_hwfn, + u32 *dump_buf, + bool dump, + u32 valid_conn_pf_pages, + u32 valid_conn_vf_pages, + struct phys_mem_desc *ilt_pages, + bool dump_page_ids) +{ + struct qed_ilt_client_cfg *clients = p_hwfn->p_cxt_mngr->clients; + u32 pf_start_line, start_page_id, offset = 0; + u32 cdut_pf_init_pages, cdut_vf_init_pages; + u32 cdut_pf_work_pages, cdut_vf_work_pages; + u32 base_data_offset, size_param_offset; + u32 cdut_pf_pages, cdut_vf_pages; + const char *section_name; + u8 i; + + section_name = dump_page_ids ? 
"ilt_page_ids" : "ilt_page_mem"; + cdut_pf_init_pages = qed_get_cdut_num_pf_init_pages(p_hwfn); + cdut_vf_init_pages = qed_get_cdut_num_vf_init_pages(p_hwfn); + cdut_pf_work_pages = qed_get_cdut_num_pf_work_pages(p_hwfn); + cdut_vf_work_pages = qed_get_cdut_num_vf_work_pages(p_hwfn); + cdut_pf_pages = cdut_pf_init_pages + cdut_pf_work_pages; + cdut_vf_pages = cdut_vf_init_pages + cdut_vf_work_pages; + pf_start_line = p_hwfn->p_cxt_mngr->pf_start_line; + + offset += + qed_dump_section_hdr(dump_buf + offset, dump, section_name, 1); + + /* Dump size parameter (0 for now, overwritten with real size later) */ + size_param_offset = offset; + offset += qed_dump_num_param(dump_buf + offset, dump, "size", 0); + base_data_offset = offset; + + /* CDUC pages are ordered as follows: + * - PF pages - valid section (included in PF connection type mapping) + * - PF pages - invalid section (not dumped) + * - For each VF in the PF: + * - VF pages - valid section (included in VF connection type mapping) + * - VF pages - invalid section (not dumped) + */ + if (qed_grc_get_param(p_hwfn, DBG_GRC_PARAM_DUMP_ILT_CDUC)) { + /* Dump connection PF pages */ + start_page_id = clients[ILT_CLI_CDUC].first.val - pf_start_line; + offset += qed_ilt_dump_pages_range(dump_buf + offset, + dump, + start_page_id, + valid_conn_pf_pages, + ilt_pages, dump_page_ids); + + /* Dump connection VF pages */ + start_page_id += clients[ILT_CLI_CDUC].pf_total_lines; + for (i = 0; i < p_hwfn->p_cxt_mngr->vf_count; + i++, start_page_id += clients[ILT_CLI_CDUC].vf_total_lines) + offset += qed_ilt_dump_pages_range(dump_buf + offset, + dump, + start_page_id, + valid_conn_vf_pages, + ilt_pages, + dump_page_ids); + } + + /* CDUT pages are ordered as follows: + * - PF init pages (not dumped) + * - PF work pages + * - For each VF in the PF: + * - VF init pages (not dumped) + * - VF work pages + */ + if (qed_grc_get_param(p_hwfn, DBG_GRC_PARAM_DUMP_ILT_CDUT)) { + /* Dump task PF pages */ + start_page_id = clients[ILT_CLI_CDUT].first.val + + cdut_pf_init_pages - pf_start_line; + offset += qed_ilt_dump_pages_range(dump_buf + offset, + dump, + start_page_id, + cdut_pf_work_pages, + ilt_pages, dump_page_ids); + + /* Dump task VF pages */ + start_page_id = clients[ILT_CLI_CDUT].first.val + + cdut_pf_pages + cdut_vf_init_pages - pf_start_line; + for (i = 0; i < p_hwfn->p_cxt_mngr->vf_count; + i++, start_page_id += cdut_vf_pages) + offset += qed_ilt_dump_pages_range(dump_buf + offset, + dump, + start_page_id, + cdut_vf_work_pages, + ilt_pages, + dump_page_ids); + } + + /* Overwrite size param */ + if (dump) + qed_dump_num_param(dump_buf + size_param_offset, + dump, "size", offset - base_data_offset); + + return offset; +} + +/* Performs ILT Dump to the specified buffer. + * Returns the dumped size in dwords. 
+ */ +static u32 qed_ilt_dump(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt, u32 *dump_buf, bool dump) +{ + struct qed_ilt_client_cfg *clients = p_hwfn->p_cxt_mngr->clients; + u32 valid_conn_vf_cids, valid_conn_vf_pages, offset = 0; + u32 valid_conn_pf_cids, valid_conn_pf_pages, num_pages; + u32 num_cids_per_page, conn_ctx_size; + u32 cduc_page_size, cdut_page_size; + struct phys_mem_desc *ilt_pages; + u8 conn_type; + + cduc_page_size = 1 << + (clients[ILT_CLI_CDUC].p_size.val + PXP_ILT_PAGE_SIZE_NUM_BITS_MIN); + cdut_page_size = 1 << + (clients[ILT_CLI_CDUT].p_size.val + PXP_ILT_PAGE_SIZE_NUM_BITS_MIN); + conn_ctx_size = p_hwfn->p_cxt_mngr->conn_ctx_size; + num_cids_per_page = (int)(cduc_page_size / conn_ctx_size); + ilt_pages = p_hwfn->p_cxt_mngr->ilt_shadow; + + /* Dump global params - 22 must match number of params below */ + offset += qed_dump_common_global_params(p_hwfn, p_ptt, + dump_buf + offset, dump, 22); + offset += qed_dump_str_param(dump_buf + offset, + dump, "dump-type", "ilt-dump"); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "cduc-page-size", cduc_page_size); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "cduc-first-page-id", + clients[ILT_CLI_CDUC].first.val); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "cduc-last-page-id", + clients[ILT_CLI_CDUC].last.val); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "cduc-num-pf-pages", + clients + [ILT_CLI_CDUC].pf_total_lines); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "cduc-num-vf-pages", + clients + [ILT_CLI_CDUC].vf_total_lines); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "max-conn-ctx-size", + conn_ctx_size); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "cdut-page-size", cdut_page_size); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "cdut-first-page-id", + clients[ILT_CLI_CDUT].first.val); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "cdut-last-page-id", + clients[ILT_CLI_CDUT].last.val); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "cdut-num-pf-init-pages", + qed_get_cdut_num_pf_init_pages(p_hwfn)); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "cdut-num-vf-init-pages", + qed_get_cdut_num_vf_init_pages(p_hwfn)); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "cdut-num-pf-work-pages", + qed_get_cdut_num_pf_work_pages(p_hwfn)); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "cdut-num-vf-work-pages", + qed_get_cdut_num_vf_work_pages(p_hwfn)); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "max-task-ctx-size", + p_hwfn->p_cxt_mngr->task_ctx_size); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "task-type-id", + p_hwfn->p_cxt_mngr->task_type_id); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "first-vf-id-in-pf", + p_hwfn->p_cxt_mngr->first_vf_in_pf); + offset += /* 18 */ qed_dump_num_param(dump_buf + offset, + dump, + "num-vfs-in-pf", + p_hwfn->p_cxt_mngr->vf_count); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "ptr-size-bytes", sizeof(void *)); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "pf-start-line", + p_hwfn->p_cxt_mngr->pf_start_line); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "page-mem-desc-size-dwords", + PAGE_MEM_DESC_SIZE_DWORDS); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "ilt-shadow-size", + p_hwfn->p_cxt_mngr->ilt_shadow_size); + /* Additional/Less parameters require matching of number in call to + * 
dump_common_global_params() + */ + + /* Dump section containing number of PF CIDs per connection type */ + offset += qed_dump_section_hdr(dump_buf + offset, + dump, "num_pf_cids_per_conn_type", 1); + offset += qed_dump_num_param(dump_buf + offset, + dump, "size", NUM_OF_CONNECTION_TYPES_E4); + for (conn_type = 0, valid_conn_pf_cids = 0; + conn_type < NUM_OF_CONNECTION_TYPES_E4; conn_type++, offset++) { + u32 num_pf_cids = + p_hwfn->p_cxt_mngr->conn_cfg[conn_type].cid_count; + + if (dump) + *(dump_buf + offset) = num_pf_cids; + valid_conn_pf_cids += num_pf_cids; + } + + /* Dump section containing number of VF CIDs per connection type */ + offset += qed_dump_section_hdr(dump_buf + offset, + dump, "num_vf_cids_per_conn_type", 1); + offset += qed_dump_num_param(dump_buf + offset, + dump, "size", NUM_OF_CONNECTION_TYPES_E4); + for (conn_type = 0, valid_conn_vf_cids = 0; + conn_type < NUM_OF_CONNECTION_TYPES_E4; conn_type++, offset++) { + u32 num_vf_cids = + p_hwfn->p_cxt_mngr->conn_cfg[conn_type].cids_per_vf; + + if (dump) + *(dump_buf + offset) = num_vf_cids; + valid_conn_vf_cids += num_vf_cids; + } + + /* Dump section containing physical memory descs for each ILT page */ + num_pages = p_hwfn->p_cxt_mngr->ilt_shadow_size; + offset += qed_dump_section_hdr(dump_buf + offset, + dump, "ilt_page_desc", 1); + offset += qed_dump_num_param(dump_buf + offset, + dump, + "size", + num_pages * PAGE_MEM_DESC_SIZE_DWORDS); + + /* Copy memory descriptors to dump buffer */ + if (dump) { + u32 page_id; + + for (page_id = 0; page_id < num_pages; + page_id++, offset += PAGE_MEM_DESC_SIZE_DWORDS) + memcpy(dump_buf + offset, + &ilt_pages[page_id], + DWORDS_TO_BYTES(PAGE_MEM_DESC_SIZE_DWORDS)); + } else { + offset += num_pages * PAGE_MEM_DESC_SIZE_DWORDS; + } + + valid_conn_pf_pages = DIV_ROUND_UP(valid_conn_pf_cids, + num_cids_per_page); + valid_conn_vf_pages = DIV_ROUND_UP(valid_conn_vf_cids, + num_cids_per_page); + + /* Dump ILT pages IDs */ + offset += qed_ilt_dump_pages_section(p_hwfn, + dump_buf + offset, + dump, + valid_conn_pf_pages, + valid_conn_vf_pages, + ilt_pages, true); + + /* Dump ILT pages memory */ + offset += qed_ilt_dump_pages_section(p_hwfn, + dump_buf + offset, + dump, + valid_conn_pf_pages, + valid_conn_vf_pages, + ilt_pages, false); + + /* Dump last section */ + offset += qed_dump_last_section(dump_buf, offset, dump); + + return offset; +} + /***************************** Public Functions *******************************/ enum dbg_status qed_dbg_set_bin_ptr(const u8 * const bin_ptr) @@ -5554,6 +5917,50 @@ enum dbg_status qed_dbg_fw_asserts_dump(struct qed_hwfn *p_hwfn, return DBG_STATUS_OK; } +static enum dbg_status qed_dbg_ilt_get_dump_buf_size(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt, + u32 *buf_size) +{ + enum dbg_status status = qed_dbg_dev_init(p_hwfn, p_ptt); + + *buf_size = 0; + + if (status != DBG_STATUS_OK) + return status; + + *buf_size = qed_ilt_dump(p_hwfn, p_ptt, NULL, false); + + return DBG_STATUS_OK; +} + +static enum dbg_status qed_dbg_ilt_dump(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt, + u32 *dump_buf, + u32 buf_size_in_dwords, + u32 *num_dumped_dwords) +{ + u32 needed_buf_size_in_dwords; + enum dbg_status status; + + *num_dumped_dwords = 0; + + status = qed_dbg_ilt_get_dump_buf_size(p_hwfn, + p_ptt, + &needed_buf_size_in_dwords); + if (status != DBG_STATUS_OK) + return status; + + if (buf_size_in_dwords < needed_buf_size_in_dwords) + return DBG_STATUS_DUMP_BUF_TOO_SMALL; + + *num_dumped_dwords = qed_ilt_dump(p_hwfn, p_ptt, dump_buf, true); + + /* Revert GRC
params to their default */ + qed_dbg_grc_set_params_default(p_hwfn); + + return DBG_STATUS_OK; +} + enum dbg_status qed_dbg_read_attn(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, enum block_id block_id, @@ -7763,7 +8170,10 @@ static struct { qed_dbg_fw_asserts_get_dump_buf_size, qed_dbg_fw_asserts_dump, qed_print_fw_asserts_results, - qed_get_fw_asserts_results_buf_size},}; + qed_get_fw_asserts_results_buf_size}, { + "ilt", + qed_dbg_ilt_get_dump_buf_size, + qed_dbg_ilt_dump, NULL, NULL},}; static void qed_dbg_print_feature(u8 *p_text_buf, u32 text_size) { @@ -7846,6 +8256,8 @@ static enum dbg_status format_feature(struct qed_hwfn *p_hwfn, return rc; } +#define MAX_DBG_FEATURE_SIZE_DWORDS 0x3FFFFFFF + /* Generic function for performing the dump of a debug feature. */ static enum dbg_status qed_dbg_dump(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, @@ -8021,6 +8433,16 @@ int qed_dbg_fw_asserts_size(struct qed_dev *cdev) return qed_dbg_feature_size(cdev, DBG_FEATURE_FW_ASSERTS); } +int qed_dbg_ilt(struct qed_dev *cdev, void *buffer, u32 *num_dumped_bytes) +{ + return qed_dbg_feature(cdev, buffer, DBG_FEATURE_ILT, num_dumped_bytes); +} + +int qed_dbg_ilt_size(struct qed_dev *cdev) +{ + return qed_dbg_feature_size(cdev, DBG_FEATURE_ILT); +} + int qed_dbg_mcp_trace(struct qed_dev *cdev, void *buffer, u32 *num_dumped_bytes) { @@ -8037,9 +8459,17 @@ int qed_dbg_mcp_trace_size(struct qed_dev *cdev) * feature buffer. */ #define REGDUMP_HEADER_SIZE sizeof(u32) +#define REGDUMP_HEADER_SIZE_SHIFT 0 +#define REGDUMP_HEADER_SIZE_MASK 0xffffff #define REGDUMP_HEADER_FEATURE_SHIFT 24 -#define REGDUMP_HEADER_ENGINE_SHIFT 31 +#define REGDUMP_HEADER_FEATURE_MASK 0x3f #define REGDUMP_HEADER_OMIT_ENGINE_SHIFT 30 +#define REGDUMP_HEADER_OMIT_ENGINE_MASK 0x1 +#define REGDUMP_HEADER_ENGINE_SHIFT 31 +#define REGDUMP_HEADER_ENGINE_MASK 0x1 +#define REGDUMP_MAX_SIZE 0x1000000 +#define ILT_DUMP_MAX_SIZE (1024 * 1024 * 15) + enum debug_print_features { OLD_MODE = 0, IDLE_CHK = 1, @@ -8053,6 +8483,8 @@ enum debug_print_features { NVM_CFG1 = 9, DEFAULT_CFG = 10, NVM_META = 11, + MDUMP = 12, + ILT_DUMP = 13, }; static u32 qed_calc_regdump_header(enum debug_print_features feature, @@ -8166,8 +8598,23 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) rc); } - for (i = 0; i < MAX_DBG_GRC_PARAMS; i++) - dev_data->grc.param_val[i] = grc_params[i]; + feature_size = qed_dbg_ilt_size(cdev); + if (!cdev->disable_ilt_dump && + feature_size < ILT_DUMP_MAX_SIZE) { + rc = qed_dbg_ilt(cdev, (u8 *)buffer + offset + + REGDUMP_HEADER_SIZE, &feature_size); + if (!rc) { + *(u32 *)((u8 *)buffer + offset) = + qed_calc_regdump_header(ILT_DUMP, + cur_engine, + feature_size, + omit_engine); + offset += feature_size + REGDUMP_HEADER_SIZE; + } else { + DP_ERR(cdev, "qed_dbg_ilt failed. rc = %d\n", + rc); + } + } /* GRC dump - must be last because when mcp stuck it will * clutter idle_chk, reg_fifo, ... 
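For reference, the REGDUMP_HEADER_* masks and shifts above define the single header dword written in front of each feature in the dump: the feature size in bits [23:0], the feature ID in bits [29:24], the omit-engine flag in bit 30 and the engine ID in bit 31. A minimal sketch of that packing, assuming qed_calc_regdump_header() follows this layout (the helper name below is hypothetical):

	static u32 example_calc_regdump_header(enum debug_print_features feature,
					       u8 cur_engine, u32 feature_size,
					       u8 omit_engine)
	{
		/* feature_size excludes the header dword itself and must
		 * fit in the 24-bit size field.
		 */
		return (feature_size & REGDUMP_HEADER_SIZE_MASK) |
		       ((feature & REGDUMP_HEADER_FEATURE_MASK) <<
			REGDUMP_HEADER_FEATURE_SHIFT) |
		       ((omit_engine & REGDUMP_HEADER_OMIT_ENGINE_MASK) <<
			REGDUMP_HEADER_OMIT_ENGINE_SHIFT) |
		       ((cur_engine & REGDUMP_HEADER_ENGINE_MASK) <<
			REGDUMP_HEADER_ENGINE_SHIFT);
	}

A reader recovers the fields the same way: mask with REGDUMP_HEADER_SIZE_MASK for the feature length, and shift right by REGDUMP_HEADER_FEATURE_SHIFT then mask with REGDUMP_HEADER_FEATURE_MASK for the feature ID.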
@@ -8243,6 +8690,21 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) QED_NVM_IMAGE_NVM_META, "QED_NVM_IMAGE_NVM_META", rc); } + /* nvm mdump */ + rc = qed_dbg_nvm_image(cdev, (u8 *)buffer + offset + + REGDUMP_HEADER_SIZE, &feature_size, + QED_NVM_IMAGE_MDUMP); + if (!rc) { + *(u32 *)((u8 *)buffer + offset) = + qed_calc_regdump_header(MDUMP, cur_engine, + feature_size, omit_engine); + offset += (feature_size + REGDUMP_HEADER_SIZE); + } else if (rc != -ENOENT) { + DP_ERR(cdev, + "qed_dbg_nvm_image failed for image %d (%s), rc = %d\n", + QED_NVM_IMAGE_MDUMP, "QED_NVM_IMAGE_MDUMP", rc); + } + return 0; } @@ -8250,7 +8712,7 @@ int qed_dbg_all_data_size(struct qed_dev *cdev) { struct qed_hwfn *p_hwfn = &cdev->hwfns[cdev->dbg_params.engine_for_debug]; - u32 regs_len = 0, image_len = 0; + u32 regs_len = 0, image_len = 0, ilt_len = 0, total_ilt_len = 0; u8 cur_engine, org_engine; org_engine = qed_get_debug_engine(cdev); @@ -8267,6 +8729,12 @@ int qed_dbg_all_data_size(struct qed_dev *cdev) REGDUMP_HEADER_SIZE + qed_dbg_protection_override_size(cdev) + REGDUMP_HEADER_SIZE + qed_dbg_fw_asserts_size(cdev); + + ilt_len = REGDUMP_HEADER_SIZE + qed_dbg_ilt_size(cdev); + if (ilt_len < ILT_DUMP_MAX_SIZE) { + total_ilt_len += ilt_len; + regs_len += ilt_len; + } } qed_set_debug_engine(cdev, org_engine); @@ -8282,6 +8750,17 @@ int qed_dbg_all_data_size(struct qed_dev *cdev) qed_dbg_nvm_image_length(p_hwfn, QED_NVM_IMAGE_NVM_META, &image_len); if (image_len) regs_len += REGDUMP_HEADER_SIZE + image_len; + qed_dbg_nvm_image_length(p_hwfn, QED_NVM_IMAGE_MDUMP, &image_len); + if (image_len) + regs_len += REGDUMP_HEADER_SIZE + image_len; + + if (regs_len > REGDUMP_MAX_SIZE) { + DP_VERBOSE(cdev, QED_MSG_DEBUG, + "Dump exceeds max size 0x%x, disable ILT dump\n", + REGDUMP_MAX_SIZE); + cdev->disable_ilt_dump = true; + regs_len -= total_ilt_len; + } return regs_len; } @@ -8341,6 +8820,10 @@ int qed_dbg_feature_size(struct qed_dev *cdev, enum qed_dbg_features feature) if (rc != DBG_STATUS_OK) buf_size_dwords = 0; + /* Feature will not be dumped if it exceeds maximum size */ + if (buf_size_dwords > MAX_DBG_FEATURE_SIZE_DWORDS) + buf_size_dwords = 0; + qed_ptt_release(p_hwfn, p_ptt); qed_feature->buf_size = buf_size_dwords * sizeof(u32); return qed_feature->buf_size; diff --git a/drivers/net/ethernet/qlogic/qed/qed_debug.h b/drivers/net/ethernet/qlogic/qed/qed_debug.h index e47e0e8d75b0..edf99d296bd1 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_debug.h +++ b/drivers/net/ethernet/qlogic/qed/qed_debug.h @@ -14,11 +14,13 @@ enum qed_dbg_features { DBG_FEATURE_IGU_FIFO, DBG_FEATURE_PROTECTION_OVERRIDE, DBG_FEATURE_FW_ASSERTS, + DBG_FEATURE_ILT, DBG_FEATURE_NUM }; /* Forward Declaration */ struct qed_dev; +struct qed_hwfn; int qed_dbg_grc(struct qed_dev *cdev, void *buffer, u32 *num_dumped_bytes); int qed_dbg_grc_size(struct qed_dev *cdev); @@ -37,6 +39,8 @@ int qed_dbg_protection_override_size(struct qed_dev *cdev); int qed_dbg_fw_asserts(struct qed_dev *cdev, void *buffer, u32 *num_dumped_bytes); int qed_dbg_fw_asserts_size(struct qed_dev *cdev); +int qed_dbg_ilt(struct qed_dev *cdev, void *buffer, u32 *num_dumped_bytes); +int qed_dbg_ilt_size(struct qed_dev *cdev); int qed_dbg_mcp_trace(struct qed_dev *cdev, void *buffer, u32 *num_dumped_bytes); int qed_dbg_mcp_trace_size(struct qed_dev *cdev); diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h index 67c048415d3b..e7e3f7b31ee0 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h +++ 
b/drivers/net/ethernet/qlogic/qed/qed_hsi.h @@ -2575,6 +2575,11 @@ enum dbg_grc_params { DBG_GRC_PARAM_DUMP_PHY, DBG_GRC_PARAM_NO_MCP, DBG_GRC_PARAM_NO_FW_VER, + DBG_GRC_PARAM_RESERVED3, + DBG_GRC_PARAM_DUMP_MCP_HW_DUMP, + DBG_GRC_PARAM_DUMP_ILT_CDUC, + DBG_GRC_PARAM_DUMP_ILT_CDUT, + DBG_GRC_PARAM_DUMP_CAU_EXT, MAX_DBG_GRC_PARAMS }; @@ -2695,6 +2700,19 @@ struct dbg_tools_data { u32 num_regs_read; }; +/* ILT Clients */ +enum ilt_clients { + ILT_CLI_CDUC, + ILT_CLI_CDUT, + ILT_CLI_QM, + ILT_CLI_TM, + ILT_CLI_SRC, + ILT_CLI_TSDM, + ILT_CLI_RGFS, + ILT_CLI_TGFS, + MAX_ILT_CLIENTS +}; + /********************************/ /* HSI Init Functions constants */ /********************************/ diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c index af981ff5595e..280527cc0578 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c +++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c @@ -3167,6 +3167,9 @@ qed_mcp_get_nvm_image_att(struct qed_hwfn *p_hwfn, case QED_NVM_IMAGE_FCOE_CFG: type = NVM_TYPE_FCOE_CFG; break; + case QED_NVM_IMAGE_MDUMP: + type = NVM_TYPE_MDUMP; + break; case QED_NVM_IMAGE_NVM_CFG1: type = NVM_TYPE_NVM_CFG1; break; diff --git a/include/linux/qed/qed_if.h b/include/linux/qed/qed_if.h index 9bcb2f419004..1b27c22d39af 100644 --- a/include/linux/qed/qed_if.h +++ b/include/linux/qed/qed_if.h @@ -159,6 +159,7 @@ struct qed_dcbx_get { enum qed_nvm_images { QED_NVM_IMAGE_ISCSI_CFG, QED_NVM_IMAGE_FCOE_CFG, + QED_NVM_IMAGE_MDUMP, QED_NVM_IMAGE_NVM_CFG1, QED_NVM_IMAGE_DEFAULT_CFG, QED_NVM_IMAGE_NVM_META, -- cgit v1.2.3-71-gd317 From 2d22bc8354b15abe413dff76cfe0f7aeb88ef9aa Mon Sep 17 00:00:00 2001 From: Michal Kalderon Date: Mon, 27 Jan 2020 15:26:19 +0200 Subject: qed: FW 8.42.2.0 debug features Add more information to the debug dump about the platform it was collected from (PCI function, path ID). Provide human-readable reg fifo errors. Remove the static debug arrays from the HSI functions and move them to the hwfn. Some structures were slightly changed (removing a reserved chip id, for example), which led to many long initializations being modified with one parameter fewer. This leads to some long diffs that don't really change anything. Signed-off-by: Ariel Elior Signed-off-by: Michal Kalderon Signed-off-by: David S. 
Miller --- drivers/net/ethernet/qlogic/qed/qed.h | 2 + drivers/net/ethernet/qlogic/qed/qed_debug.c | 3392 +++++++++++---------------- drivers/net/ethernet/qlogic/qed/qed_dev.c | 2 +- drivers/net/ethernet/qlogic/qed/qed_hsi.h | 661 ++---- drivers/net/ethernet/qlogic/qed/qed_main.c | 2 +- include/linux/qed/qed_if.h | 9 + 6 files changed, 1530 insertions(+), 2538 deletions(-) (limited to 'include') diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h index a6dc40681a32..fa41bf08a589 100644 --- a/drivers/net/ethernet/qlogic/qed/qed.h +++ b/drivers/net/ethernet/qlogic/qed/qed.h @@ -666,6 +666,7 @@ struct qed_hwfn { struct dbg_tools_data dbg_info; void *dbg_user_info; + struct virt_mem_desc dbg_arrays[MAX_BIN_DBG_BUFFER_TYPE]; /* PWM region specific data */ u16 wid_count; @@ -877,6 +878,7 @@ struct qed_dev { struct qed_cb_ll2_info *ll2; u8 ll2_mac_address[ETH_ALEN]; #endif + struct qed_dbg_feature dbg_features[DBG_FEATURE_NUM]; bool disable_ilt_dump; DECLARE_HASHTABLE(connections, 10); const struct firmware *firmware; diff --git a/drivers/net/ethernet/qlogic/qed/qed_debug.c b/drivers/net/ethernet/qlogic/qed/qed_debug.c index 17a9b30e6c62..f4eebaabb6d0 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_debug.c +++ b/drivers/net/ethernet/qlogic/qed/qed_debug.c @@ -23,27 +23,28 @@ enum mem_groups { MEM_GROUP_BRB_RAM, MEM_GROUP_BRB_MEM, MEM_GROUP_PRS_MEM, + MEM_GROUP_SDM_MEM, + MEM_GROUP_PBUF, MEM_GROUP_IOR, + MEM_GROUP_RAM, MEM_GROUP_BTB_RAM, + MEM_GROUP_RDIF_CTX, + MEM_GROUP_TDIF_CTX, + MEM_GROUP_CFC_MEM, MEM_GROUP_CONN_CFC_MEM, - MEM_GROUP_TASK_CFC_MEM, MEM_GROUP_CAU_PI, MEM_GROUP_CAU_MEM, + MEM_GROUP_CAU_MEM_EXT, MEM_GROUP_PXP_ILT, - MEM_GROUP_TM_MEM, - MEM_GROUP_SDM_MEM, - MEM_GROUP_PBUF, - MEM_GROUP_RAM, MEM_GROUP_MULD_MEM, MEM_GROUP_BTB_MEM, - MEM_GROUP_RDIF_CTX, - MEM_GROUP_TDIF_CTX, - MEM_GROUP_CFC_MEM, MEM_GROUP_IGU_MEM, MEM_GROUP_IGU_MSIX, MEM_GROUP_CAU_SB, MEM_GROUP_BMB_RAM, MEM_GROUP_BMB_MEM, + MEM_GROUP_TM_MEM, + MEM_GROUP_TASK_CFC_MEM, MEM_GROUPS_NUM }; @@ -57,27 +58,28 @@ static const char * const s_mem_group_names[] = { "BRB_RAM", "BRB_MEM", "PRS_MEM", + "SDM_MEM", + "PBUF", "IOR", + "RAM", "BTB_RAM", + "RDIF_CTX", + "TDIF_CTX", + "CFC_MEM", "CONN_CFC_MEM", - "TASK_CFC_MEM", "CAU_PI", "CAU_MEM", + "CAU_MEM_EXT", "PXP_ILT", - "TM_MEM", - "SDM_MEM", - "PBUF", - "RAM", "MULD_MEM", "BTB_MEM", - "RDIF_CTX", - "TDIF_CTX", - "CFC_MEM", "IGU_MEM", "IGU_MSIX", "CAU_SB", "BMB_RAM", "BMB_MEM", + "TM_MEM", + "TASK_CFC_MEM", }; /* Idle check conditions */ @@ -171,14 +173,38 @@ static u32(*cond_arr[]) (const u32 *r, const u32 *imm) = { cond13, }; +#define NUM_PHYS_BLOCKS 84 + +#define NUM_DBG_RESET_REGS 8 + /******************************* Data Types **********************************/ -enum platform_ids { - PLATFORM_ASIC, +enum hw_types { + HW_TYPE_ASIC, PLATFORM_RESERVED, PLATFORM_RESERVED2, PLATFORM_RESERVED3, - MAX_PLATFORM_IDS + PLATFORM_RESERVED4, + MAX_HW_TYPES +}; + +/* CM context types */ +enum cm_ctx_types { + CM_CTX_CONN_AG, + CM_CTX_CONN_ST, + CM_CTX_TASK_AG, + CM_CTX_TASK_ST, + NUM_CM_CTX_TYPES +}; + +/* Debug bus frame modes */ +enum dbg_bus_frame_modes { + DBG_BUS_FRAME_MODE_4ST = 0, /* 4 Storm dwords (no HW) */ + DBG_BUS_FRAME_MODE_2ST_2HW = 1, /* 2 Storm dwords, 2 HW dwords */ + DBG_BUS_FRAME_MODE_1ST_3HW = 2, /* 1 Storm dwords, 3 HW dwords */ + DBG_BUS_FRAME_MODE_4HW = 3, /* 4 HW dwords (no Storms) */ + DBG_BUS_FRAME_MODE_8HW = 4, /* 8 HW dwords (no Storms) */ + DBG_BUS_NUM_FRAME_MODES }; /* Chip constant definitions */ @@ -187,20 +213,26 @@ 
struct chip_defs { u32 num_ilt_pages; }; -/* Platform constant definitions */ -struct platform_defs { +/* HW type constant definitions */ +struct hw_type_defs { const char *name; u32 delay_factor; u32 dmae_thresh; u32 log_thresh; }; +/* RBC reset definitions */ +struct rbc_reset_defs { + u32 reset_reg_addr; + u32 reset_val[MAX_CHIP_IDS]; +}; + /* Storm constant definitions. * Addresses are in bytes, sizes are in quad-regs. */ struct storm_defs { char letter; - enum block_id block_id; + enum block_id sem_block_id; enum dbg_bus_clients dbg_client_id[MAX_CHIP_IDS]; bool has_vfc; u32 sem_fast_mem_addr; @@ -209,47 +241,26 @@ struct storm_defs { u32 sem_slow_mode_addr; u32 sem_slow_mode1_conf_addr; u32 sem_sync_dbg_empty_addr; - u32 sem_slow_dbg_empty_addr; + u32 sem_gpre_vect_addr; u32 cm_ctx_wr_addr; - u32 cm_conn_ag_ctx_lid_size; - u32 cm_conn_ag_ctx_rd_addr; - u32 cm_conn_st_ctx_lid_size; - u32 cm_conn_st_ctx_rd_addr; - u32 cm_task_ag_ctx_lid_size; - u32 cm_task_ag_ctx_rd_addr; - u32 cm_task_st_ctx_lid_size; - u32 cm_task_st_ctx_rd_addr; + u32 cm_ctx_rd_addr[NUM_CM_CTX_TYPES]; + u32 cm_ctx_lid_sizes[MAX_CHIP_IDS][NUM_CM_CTX_TYPES]; }; -/* Block constant definitions */ -struct block_defs { - const char *name; - bool exists[MAX_CHIP_IDS]; - bool associated_to_storm; - - /* Valid only if associated_to_storm is true */ - u32 storm_id; - enum dbg_bus_clients dbg_client_id[MAX_CHIP_IDS]; - u32 dbg_select_addr; - u32 dbg_enable_addr; - u32 dbg_shift_addr; - u32 dbg_force_valid_addr; - u32 dbg_force_frame_addr; - bool has_reset_bit; - - /* If true, block is taken out of reset before dump */ - bool unreset; - enum dbg_reset_regs reset_reg; - - /* Bit offset in reset register */ - u8 reset_bit_offset; +/* Debug Bus Constraint operation constant definitions */ +struct dbg_bus_constraint_op_defs { + u8 hw_op_val; + bool is_cyclic; }; -/* Reset register definitions */ -struct reset_reg_defs { - u32 addr; +/* Storm Mode definitions */ +struct storm_mode_defs { + const char *name; + bool is_fast_dbg; + u8 id_in_hw; + u32 src_disable_reg_addr; + u32 src_enable_val; bool exists[MAX_CHIP_IDS]; - u32 unreset_val[MAX_CHIP_IDS]; }; struct grc_param_defs { @@ -259,7 +270,7 @@ struct grc_param_defs { bool is_preset; bool is_persistent; u32 exclude_all_preset_val; - u32 crash_preset_val; + u32 crash_preset_val[MAX_CHIP_IDS]; }; /* Address is in 128b units. Width is in bits. */ @@ -316,15 +327,7 @@ struct split_type_defs { /******************************** Constants **********************************/ -#define MAX_LCIDS 320 -#define MAX_LTIDS 320 - -#define NUM_IOR_SETS 2 -#define IORS_PER_SET 176 -#define IOR_SET_OFFSET(set_id) ((set_id) * 256) - #define BYTES_IN_DWORD sizeof(u32) - /* In the macros below, size and offset are specified in bits */ #define CEIL_DWORDS(size) DIV_ROUND_UP(size, 32) #define FIELD_BIT_OFFSET(type, field) type ## _ ## field ## _ ## OFFSET @@ -350,20 +353,17 @@ struct split_type_defs { qed_wr(dev, ptt, addr, (arr)[i]); \ } while (0) -#define ARR_REG_RD(dev, ptt, addr, arr, arr_size) \ - do { \ - for (i = 0; i < (arr_size); i++) \ - (arr)[i] = qed_rd(dev, ptt, addr); \ - } while (0) - #define DWORDS_TO_BYTES(dwords) ((dwords) * BYTES_IN_DWORD) #define BYTES_TO_DWORDS(bytes) ((bytes) / BYTES_IN_DWORD) -/* Extra lines include a signature line + optional latency events line */ -#define NUM_EXTRA_DBG_LINES(block_desc) \ - (1 + ((block_desc)->has_latency_events ? 
1 : 0)) -#define NUM_DBG_LINES(block_desc) \ - ((block_desc)->num_of_lines + NUM_EXTRA_DBG_LINES(block_desc)) +/* extra lines include a signature line + optional latency events line */ +#define NUM_EXTRA_DBG_LINES(block) \ + (GET_FIELD((block)->flags, DBG_BLOCK_CHIP_HAS_LATENCY_EVENTS) ? 2 : 1) +#define NUM_DBG_LINES(block) \ + ((block)->num_of_dbg_bus_lines + NUM_EXTRA_DBG_LINES(block)) + +#define USE_DMAE true +#define PROTECT_WIDE_BUS true #define RAM_LINES_TO_DWORDS(lines) ((lines) * 2) #define RAM_LINES_TO_BYTES(lines) \ @@ -430,7 +430,9 @@ struct split_type_defs { #define STATIC_DEBUG_LINE_DWORDS 9 -#define NUM_COMMON_GLOBAL_PARAMS 8 +#define NUM_COMMON_GLOBAL_PARAMS 9 + +#define MAX_RECURSION_DEPTH 10 #define FW_IMG_MAIN 1 @@ -454,19 +456,13 @@ struct split_type_defs { (MCP_REG_SCRATCH + \ offsetof(struct static_init, sections[SPAD_SECTION_TRACE])) +#define MAX_SW_PLTAFORM_STR_SIZE 64 + #define EMPTY_FW_VERSION_STR "???_???_???_???" #define EMPTY_FW_IMAGE_STR "???????????????" /***************************** Constant Arrays *******************************/ -struct dbg_array { - const u32 *ptr; - u32 size_in_dwords; -}; - -/* Debug arrays */ -static struct dbg_array s_dbg_arrays[MAX_BIN_DBG_BUFFER_TYPE] = { {NULL} }; - /* Chip constant definitions array */ static struct chip_defs s_chip_defs[MAX_CHIP_IDS] = { {"bb", PSWRQ2_REG_ILT_MEMORY_SIZE_BB / 2}, @@ -477,1030 +473,104 @@ static struct chip_defs s_chip_defs[MAX_CHIP_IDS] = { static struct storm_defs s_storm_defs[] = { /* Tstorm */ {'T', BLOCK_TSEM, - {DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCT, - DBG_BUS_CLIENT_RBCT}, true, - TSEM_REG_FAST_MEMORY, - TSEM_REG_DBG_FRAME_MODE_BB_K2, TSEM_REG_SLOW_DBG_ACTIVE_BB_K2, - TSEM_REG_SLOW_DBG_MODE_BB_K2, TSEM_REG_DBG_MODE1_CFG_BB_K2, - TSEM_REG_SYNC_DBG_EMPTY, TSEM_REG_SLOW_DBG_EMPTY_BB_K2, - TCM_REG_CTX_RBC_ACCS, - 4, TCM_REG_AGG_CON_CTX, - 16, TCM_REG_SM_CON_CTX, - 2, TCM_REG_AGG_TASK_CTX, - 4, TCM_REG_SM_TASK_CTX}, + {DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCT}, + true, + TSEM_REG_FAST_MEMORY, + TSEM_REG_DBG_FRAME_MODE_BB_K2, TSEM_REG_SLOW_DBG_ACTIVE_BB_K2, + TSEM_REG_SLOW_DBG_MODE_BB_K2, TSEM_REG_DBG_MODE1_CFG_BB_K2, + TSEM_REG_SYNC_DBG_EMPTY, TSEM_REG_DBG_GPRE_VECT, + TCM_REG_CTX_RBC_ACCS, + {TCM_REG_AGG_CON_CTX, TCM_REG_SM_CON_CTX, TCM_REG_AGG_TASK_CTX, + TCM_REG_SM_TASK_CTX}, + {{4, 16, 2, 4}, {4, 16, 2, 4}} /* {bb} {k2} */ + }, /* Mstorm */ {'M', BLOCK_MSEM, - {DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCM, - DBG_BUS_CLIENT_RBCM}, false, - MSEM_REG_FAST_MEMORY, - MSEM_REG_DBG_FRAME_MODE_BB_K2, MSEM_REG_SLOW_DBG_ACTIVE_BB_K2, - MSEM_REG_SLOW_DBG_MODE_BB_K2, MSEM_REG_DBG_MODE1_CFG_BB_K2, - MSEM_REG_SYNC_DBG_EMPTY, MSEM_REG_SLOW_DBG_EMPTY_BB_K2, - MCM_REG_CTX_RBC_ACCS, - 1, MCM_REG_AGG_CON_CTX, - 10, MCM_REG_SM_CON_CTX, - 2, MCM_REG_AGG_TASK_CTX, - 7, MCM_REG_SM_TASK_CTX}, + {DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCM}, + false, + MSEM_REG_FAST_MEMORY, + MSEM_REG_DBG_FRAME_MODE_BB_K2, + MSEM_REG_SLOW_DBG_ACTIVE_BB_K2, + MSEM_REG_SLOW_DBG_MODE_BB_K2, + MSEM_REG_DBG_MODE1_CFG_BB_K2, + MSEM_REG_SYNC_DBG_EMPTY, + MSEM_REG_DBG_GPRE_VECT, + MCM_REG_CTX_RBC_ACCS, + {MCM_REG_AGG_CON_CTX, MCM_REG_SM_CON_CTX, MCM_REG_AGG_TASK_CTX, + MCM_REG_SM_TASK_CTX }, + {{1, 10, 2, 7}, {1, 10, 2, 7}} /* {bb} {k2}*/ + }, /* Ustorm */ {'U', BLOCK_USEM, - {DBG_BUS_CLIENT_RBCU, DBG_BUS_CLIENT_RBCU, - DBG_BUS_CLIENT_RBCU}, false, - USEM_REG_FAST_MEMORY, - USEM_REG_DBG_FRAME_MODE_BB_K2, USEM_REG_SLOW_DBG_ACTIVE_BB_K2, - USEM_REG_SLOW_DBG_MODE_BB_K2, USEM_REG_DBG_MODE1_CFG_BB_K2, - USEM_REG_SYNC_DBG_EMPTY, 
USEM_REG_SLOW_DBG_EMPTY_BB_K2, - UCM_REG_CTX_RBC_ACCS, - 2, UCM_REG_AGG_CON_CTX, - 13, UCM_REG_SM_CON_CTX, - 3, UCM_REG_AGG_TASK_CTX, - 3, UCM_REG_SM_TASK_CTX}, + {DBG_BUS_CLIENT_RBCU, DBG_BUS_CLIENT_RBCU}, + false, + USEM_REG_FAST_MEMORY, + USEM_REG_DBG_FRAME_MODE_BB_K2, + USEM_REG_SLOW_DBG_ACTIVE_BB_K2, + USEM_REG_SLOW_DBG_MODE_BB_K2, + USEM_REG_DBG_MODE1_CFG_BB_K2, + USEM_REG_SYNC_DBG_EMPTY, + USEM_REG_DBG_GPRE_VECT, + UCM_REG_CTX_RBC_ACCS, + {UCM_REG_AGG_CON_CTX, UCM_REG_SM_CON_CTX, UCM_REG_AGG_TASK_CTX, + UCM_REG_SM_TASK_CTX}, + {{2, 13, 3, 3}, {2, 13, 3, 3}} /* {bb} {k2} */ + }, /* Xstorm */ {'X', BLOCK_XSEM, - {DBG_BUS_CLIENT_RBCX, DBG_BUS_CLIENT_RBCX, - DBG_BUS_CLIENT_RBCX}, false, - XSEM_REG_FAST_MEMORY, - XSEM_REG_DBG_FRAME_MODE_BB_K2, XSEM_REG_SLOW_DBG_ACTIVE_BB_K2, - XSEM_REG_SLOW_DBG_MODE_BB_K2, XSEM_REG_DBG_MODE1_CFG_BB_K2, - XSEM_REG_SYNC_DBG_EMPTY, XSEM_REG_SLOW_DBG_EMPTY_BB_K2, - XCM_REG_CTX_RBC_ACCS, - 9, XCM_REG_AGG_CON_CTX, - 15, XCM_REG_SM_CON_CTX, - 0, 0, - 0, 0}, + {DBG_BUS_CLIENT_RBCX, DBG_BUS_CLIENT_RBCX}, + false, + XSEM_REG_FAST_MEMORY, + XSEM_REG_DBG_FRAME_MODE_BB_K2, + XSEM_REG_SLOW_DBG_ACTIVE_BB_K2, + XSEM_REG_SLOW_DBG_MODE_BB_K2, + XSEM_REG_DBG_MODE1_CFG_BB_K2, + XSEM_REG_SYNC_DBG_EMPTY, + XSEM_REG_DBG_GPRE_VECT, + XCM_REG_CTX_RBC_ACCS, + {XCM_REG_AGG_CON_CTX, XCM_REG_SM_CON_CTX, 0, 0}, + {{9, 15, 0, 0}, {9, 15, 0, 0}} /* {bb} {k2} */ + }, /* Ystorm */ {'Y', BLOCK_YSEM, - {DBG_BUS_CLIENT_RBCX, DBG_BUS_CLIENT_RBCY, - DBG_BUS_CLIENT_RBCY}, false, - YSEM_REG_FAST_MEMORY, - YSEM_REG_DBG_FRAME_MODE_BB_K2, YSEM_REG_SLOW_DBG_ACTIVE_BB_K2, - YSEM_REG_SLOW_DBG_MODE_BB_K2, YSEM_REG_DBG_MODE1_CFG_BB_K2, - YSEM_REG_SYNC_DBG_EMPTY, TSEM_REG_SLOW_DBG_EMPTY_BB_K2, - YCM_REG_CTX_RBC_ACCS, - 2, YCM_REG_AGG_CON_CTX, - 3, YCM_REG_SM_CON_CTX, - 2, YCM_REG_AGG_TASK_CTX, - 12, YCM_REG_SM_TASK_CTX}, + {DBG_BUS_CLIENT_RBCX, DBG_BUS_CLIENT_RBCY}, + false, + YSEM_REG_FAST_MEMORY, + YSEM_REG_DBG_FRAME_MODE_BB_K2, + YSEM_REG_SLOW_DBG_ACTIVE_BB_K2, + YSEM_REG_SLOW_DBG_MODE_BB_K2, + YSEM_REG_DBG_MODE1_CFG_BB_K2, + YSEM_REG_SYNC_DBG_EMPTY, + YSEM_REG_DBG_GPRE_VECT, + YCM_REG_CTX_RBC_ACCS, + {YCM_REG_AGG_CON_CTX, YCM_REG_SM_CON_CTX, YCM_REG_AGG_TASK_CTX, + YCM_REG_SM_TASK_CTX}, + {{2, 3, 2, 12}, {2, 3, 2, 12}} /* {bb} {k2} */ + }, /* Pstorm */ {'P', BLOCK_PSEM, - {DBG_BUS_CLIENT_RBCS, DBG_BUS_CLIENT_RBCS, - DBG_BUS_CLIENT_RBCS}, true, - PSEM_REG_FAST_MEMORY, - PSEM_REG_DBG_FRAME_MODE_BB_K2, PSEM_REG_SLOW_DBG_ACTIVE_BB_K2, - PSEM_REG_SLOW_DBG_MODE_BB_K2, PSEM_REG_DBG_MODE1_CFG_BB_K2, - PSEM_REG_SYNC_DBG_EMPTY, PSEM_REG_SLOW_DBG_EMPTY_BB_K2, - PCM_REG_CTX_RBC_ACCS, - 0, 0, - 10, PCM_REG_SM_CON_CTX, - 0, 0, - 0, 0} -}; - -/* Block definitions array */ - -static struct block_defs block_grc_defs = { - "grc", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCN, DBG_BUS_CLIENT_RBCN, DBG_BUS_CLIENT_RBCN}, - GRC_REG_DBG_SELECT, GRC_REG_DBG_DWORD_ENABLE, - GRC_REG_DBG_SHIFT, GRC_REG_DBG_FORCE_VALID, - GRC_REG_DBG_FORCE_FRAME, - true, false, DBG_RESET_REG_MISC_PL_UA, 1 -}; - -static struct block_defs block_miscs_defs = { - "miscs", {true, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - false, false, MAX_DBG_RESET_REGS, 0 -}; - -static struct block_defs block_misc_defs = { - "misc", {true, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - false, false, MAX_DBG_RESET_REGS, 0 -}; - -static struct block_defs block_dbu_defs = { - "dbu", {true, true, true}, false, 0, - 
{MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - false, false, MAX_DBG_RESET_REGS, 0 -}; - -static struct block_defs block_pglue_b_defs = { - "pglue_b", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCH, DBG_BUS_CLIENT_RBCH, DBG_BUS_CLIENT_RBCH}, - PGLUE_B_REG_DBG_SELECT, PGLUE_B_REG_DBG_DWORD_ENABLE, - PGLUE_B_REG_DBG_SHIFT, PGLUE_B_REG_DBG_FORCE_VALID, - PGLUE_B_REG_DBG_FORCE_FRAME, - true, false, DBG_RESET_REG_MISCS_PL_HV, 1 -}; - -static struct block_defs block_cnig_defs = { - "cnig", - {true, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, DBG_BUS_CLIENT_RBCW, - DBG_BUS_CLIENT_RBCW}, - CNIG_REG_DBG_SELECT_K2_E5, CNIG_REG_DBG_DWORD_ENABLE_K2_E5, - CNIG_REG_DBG_SHIFT_K2_E5, CNIG_REG_DBG_FORCE_VALID_K2_E5, - CNIG_REG_DBG_FORCE_FRAME_K2_E5, - true, false, DBG_RESET_REG_MISCS_PL_HV, 0 -}; - -static struct block_defs block_cpmu_defs = { - "cpmu", {true, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - true, false, DBG_RESET_REG_MISCS_PL_HV, 8 -}; - -static struct block_defs block_ncsi_defs = { - "ncsi", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCZ, DBG_BUS_CLIENT_RBCZ, DBG_BUS_CLIENT_RBCZ}, - NCSI_REG_DBG_SELECT, NCSI_REG_DBG_DWORD_ENABLE, - NCSI_REG_DBG_SHIFT, NCSI_REG_DBG_FORCE_VALID, - NCSI_REG_DBG_FORCE_FRAME, - true, false, DBG_RESET_REG_MISCS_PL_HV, 5 -}; - -static struct block_defs block_opte_defs = { - "opte", {true, true, false}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - true, false, DBG_RESET_REG_MISCS_PL_HV, 4 -}; - -static struct block_defs block_bmb_defs = { - "bmb", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCZ, DBG_BUS_CLIENT_RBCB, DBG_BUS_CLIENT_RBCB}, - BMB_REG_DBG_SELECT, BMB_REG_DBG_DWORD_ENABLE, - BMB_REG_DBG_SHIFT, BMB_REG_DBG_FORCE_VALID, - BMB_REG_DBG_FORCE_FRAME, - true, false, DBG_RESET_REG_MISCS_PL_UA, 7 -}; - -static struct block_defs block_pcie_defs = { - "pcie", - {true, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, DBG_BUS_CLIENT_RBCH, - DBG_BUS_CLIENT_RBCH}, - PCIE_REG_DBG_COMMON_SELECT_K2_E5, - PCIE_REG_DBG_COMMON_DWORD_ENABLE_K2_E5, - PCIE_REG_DBG_COMMON_SHIFT_K2_E5, - PCIE_REG_DBG_COMMON_FORCE_VALID_K2_E5, - PCIE_REG_DBG_COMMON_FORCE_FRAME_K2_E5, - false, false, MAX_DBG_RESET_REGS, 0 -}; - -static struct block_defs block_mcp_defs = { - "mcp", {true, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - false, false, MAX_DBG_RESET_REGS, 0 -}; - -static struct block_defs block_mcp2_defs = { - "mcp2", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCZ, DBG_BUS_CLIENT_RBCZ, DBG_BUS_CLIENT_RBCZ}, - MCP2_REG_DBG_SELECT, MCP2_REG_DBG_DWORD_ENABLE, - MCP2_REG_DBG_SHIFT, MCP2_REG_DBG_FORCE_VALID, - MCP2_REG_DBG_FORCE_FRAME, - false, false, MAX_DBG_RESET_REGS, 0 -}; - -static struct block_defs block_pswhst_defs = { - "pswhst", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP}, - PSWHST_REG_DBG_SELECT, PSWHST_REG_DBG_DWORD_ENABLE, - PSWHST_REG_DBG_SHIFT, PSWHST_REG_DBG_FORCE_VALID, - PSWHST_REG_DBG_FORCE_FRAME, - true, false, DBG_RESET_REG_MISC_PL_HV, 0 -}; - -static struct block_defs block_pswhst2_defs = { - "pswhst2", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP}, - PSWHST2_REG_DBG_SELECT, PSWHST2_REG_DBG_DWORD_ENABLE, - PSWHST2_REG_DBG_SHIFT, PSWHST2_REG_DBG_FORCE_VALID, - PSWHST2_REG_DBG_FORCE_FRAME, - true, false, 
DBG_RESET_REG_MISC_PL_HV, 0 -}; - -static struct block_defs block_pswrd_defs = { - "pswrd", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP}, - PSWRD_REG_DBG_SELECT, PSWRD_REG_DBG_DWORD_ENABLE, - PSWRD_REG_DBG_SHIFT, PSWRD_REG_DBG_FORCE_VALID, - PSWRD_REG_DBG_FORCE_FRAME, - true, false, DBG_RESET_REG_MISC_PL_HV, 2 -}; - -static struct block_defs block_pswrd2_defs = { - "pswrd2", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP}, - PSWRD2_REG_DBG_SELECT, PSWRD2_REG_DBG_DWORD_ENABLE, - PSWRD2_REG_DBG_SHIFT, PSWRD2_REG_DBG_FORCE_VALID, - PSWRD2_REG_DBG_FORCE_FRAME, - true, false, DBG_RESET_REG_MISC_PL_HV, 2 -}; - -static struct block_defs block_pswwr_defs = { - "pswwr", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP}, - PSWWR_REG_DBG_SELECT, PSWWR_REG_DBG_DWORD_ENABLE, - PSWWR_REG_DBG_SHIFT, PSWWR_REG_DBG_FORCE_VALID, - PSWWR_REG_DBG_FORCE_FRAME, - true, false, DBG_RESET_REG_MISC_PL_HV, 3 -}; - -static struct block_defs block_pswwr2_defs = { - "pswwr2", {true, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - true, false, DBG_RESET_REG_MISC_PL_HV, 3 -}; - -static struct block_defs block_pswrq_defs = { - "pswrq", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP}, - PSWRQ_REG_DBG_SELECT, PSWRQ_REG_DBG_DWORD_ENABLE, - PSWRQ_REG_DBG_SHIFT, PSWRQ_REG_DBG_FORCE_VALID, - PSWRQ_REG_DBG_FORCE_FRAME, - true, false, DBG_RESET_REG_MISC_PL_HV, 1 -}; - -static struct block_defs block_pswrq2_defs = { - "pswrq2", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP}, - PSWRQ2_REG_DBG_SELECT, PSWRQ2_REG_DBG_DWORD_ENABLE, - PSWRQ2_REG_DBG_SHIFT, PSWRQ2_REG_DBG_FORCE_VALID, - PSWRQ2_REG_DBG_FORCE_FRAME, - true, false, DBG_RESET_REG_MISC_PL_HV, 1 -}; - -static struct block_defs block_pglcs_defs = { - "pglcs", - {true, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, DBG_BUS_CLIENT_RBCH, - DBG_BUS_CLIENT_RBCH}, - PGLCS_REG_DBG_SELECT_K2_E5, PGLCS_REG_DBG_DWORD_ENABLE_K2_E5, - PGLCS_REG_DBG_SHIFT_K2_E5, PGLCS_REG_DBG_FORCE_VALID_K2_E5, - PGLCS_REG_DBG_FORCE_FRAME_K2_E5, - true, false, DBG_RESET_REG_MISCS_PL_HV, 2 -}; - -static struct block_defs block_ptu_defs = { - "ptu", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP}, - PTU_REG_DBG_SELECT, PTU_REG_DBG_DWORD_ENABLE, - PTU_REG_DBG_SHIFT, PTU_REG_DBG_FORCE_VALID, - PTU_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 20 -}; - -static struct block_defs block_dmae_defs = { - "dmae", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP}, - DMAE_REG_DBG_SELECT, DMAE_REG_DBG_DWORD_ENABLE, - DMAE_REG_DBG_SHIFT, DMAE_REG_DBG_FORCE_VALID, - DMAE_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 28 -}; - -static struct block_defs block_tcm_defs = { - "tcm", - {true, true, true}, true, DBG_TSTORM_ID, - {DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCT}, - TCM_REG_DBG_SELECT, TCM_REG_DBG_DWORD_ENABLE, - TCM_REG_DBG_SHIFT, TCM_REG_DBG_FORCE_VALID, - TCM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 5 -}; - -static struct block_defs block_mcm_defs = { - "mcm", - {true, true, true}, true, DBG_MSTORM_ID, - {DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCM, DBG_BUS_CLIENT_RBCM}, - MCM_REG_DBG_SELECT, MCM_REG_DBG_DWORD_ENABLE, - 
MCM_REG_DBG_SHIFT, MCM_REG_DBG_FORCE_VALID, - MCM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 3 -}; - -static struct block_defs block_ucm_defs = { - "ucm", - {true, true, true}, true, DBG_USTORM_ID, - {DBG_BUS_CLIENT_RBCU, DBG_BUS_CLIENT_RBCU, DBG_BUS_CLIENT_RBCU}, - UCM_REG_DBG_SELECT, UCM_REG_DBG_DWORD_ENABLE, - UCM_REG_DBG_SHIFT, UCM_REG_DBG_FORCE_VALID, - UCM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 8 -}; - -static struct block_defs block_xcm_defs = { - "xcm", - {true, true, true}, true, DBG_XSTORM_ID, - {DBG_BUS_CLIENT_RBCX, DBG_BUS_CLIENT_RBCX, DBG_BUS_CLIENT_RBCX}, - XCM_REG_DBG_SELECT, XCM_REG_DBG_DWORD_ENABLE, - XCM_REG_DBG_SHIFT, XCM_REG_DBG_FORCE_VALID, - XCM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 19 -}; - -static struct block_defs block_ycm_defs = { - "ycm", - {true, true, true}, true, DBG_YSTORM_ID, - {DBG_BUS_CLIENT_RBCX, DBG_BUS_CLIENT_RBCY, DBG_BUS_CLIENT_RBCY}, - YCM_REG_DBG_SELECT, YCM_REG_DBG_DWORD_ENABLE, - YCM_REG_DBG_SHIFT, YCM_REG_DBG_FORCE_VALID, - YCM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 5 -}; - -static struct block_defs block_pcm_defs = { - "pcm", - {true, true, true}, true, DBG_PSTORM_ID, - {DBG_BUS_CLIENT_RBCS, DBG_BUS_CLIENT_RBCS, DBG_BUS_CLIENT_RBCS}, - PCM_REG_DBG_SELECT, PCM_REG_DBG_DWORD_ENABLE, - PCM_REG_DBG_SHIFT, PCM_REG_DBG_FORCE_VALID, - PCM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 4 -}; - -static struct block_defs block_qm_defs = { - "qm", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCQ, DBG_BUS_CLIENT_RBCQ}, - QM_REG_DBG_SELECT, QM_REG_DBG_DWORD_ENABLE, - QM_REG_DBG_SHIFT, QM_REG_DBG_FORCE_VALID, - QM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 16 -}; - -static struct block_defs block_tm_defs = { - "tm", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCS, DBG_BUS_CLIENT_RBCS, DBG_BUS_CLIENT_RBCS}, - TM_REG_DBG_SELECT, TM_REG_DBG_DWORD_ENABLE, - TM_REG_DBG_SHIFT, TM_REG_DBG_FORCE_VALID, - TM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 17 -}; - -static struct block_defs block_dorq_defs = { - "dorq", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCX, DBG_BUS_CLIENT_RBCY, DBG_BUS_CLIENT_RBCY}, - DORQ_REG_DBG_SELECT, DORQ_REG_DBG_DWORD_ENABLE, - DORQ_REG_DBG_SHIFT, DORQ_REG_DBG_FORCE_VALID, - DORQ_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 18 -}; - -static struct block_defs block_brb_defs = { - "brb", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCR, DBG_BUS_CLIENT_RBCR, DBG_BUS_CLIENT_RBCR}, - BRB_REG_DBG_SELECT, BRB_REG_DBG_DWORD_ENABLE, - BRB_REG_DBG_SHIFT, BRB_REG_DBG_FORCE_VALID, - BRB_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 0 -}; - -static struct block_defs block_src_defs = { - "src", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCF, DBG_BUS_CLIENT_RBCF, DBG_BUS_CLIENT_RBCF}, - SRC_REG_DBG_SELECT, SRC_REG_DBG_DWORD_ENABLE, - SRC_REG_DBG_SHIFT, SRC_REG_DBG_FORCE_VALID, - SRC_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 2 -}; - -static struct block_defs block_prs_defs = { - "prs", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCR, DBG_BUS_CLIENT_RBCR, DBG_BUS_CLIENT_RBCR}, - PRS_REG_DBG_SELECT, PRS_REG_DBG_DWORD_ENABLE, - PRS_REG_DBG_SHIFT, PRS_REG_DBG_FORCE_VALID, - PRS_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 1 -}; - -static struct block_defs block_tsdm_defs = { - "tsdm", - {true, true, 
true}, true, DBG_TSTORM_ID, - {DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCT}, - TSDM_REG_DBG_SELECT, TSDM_REG_DBG_DWORD_ENABLE, - TSDM_REG_DBG_SHIFT, TSDM_REG_DBG_FORCE_VALID, - TSDM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 3 -}; - -static struct block_defs block_msdm_defs = { - "msdm", - {true, true, true}, true, DBG_MSTORM_ID, - {DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCM, DBG_BUS_CLIENT_RBCM}, - MSDM_REG_DBG_SELECT, MSDM_REG_DBG_DWORD_ENABLE, - MSDM_REG_DBG_SHIFT, MSDM_REG_DBG_FORCE_VALID, - MSDM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 6 -}; - -static struct block_defs block_usdm_defs = { - "usdm", - {true, true, true}, true, DBG_USTORM_ID, - {DBG_BUS_CLIENT_RBCU, DBG_BUS_CLIENT_RBCU, DBG_BUS_CLIENT_RBCU}, - USDM_REG_DBG_SELECT, USDM_REG_DBG_DWORD_ENABLE, - USDM_REG_DBG_SHIFT, USDM_REG_DBG_FORCE_VALID, - USDM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 7 -}; - -static struct block_defs block_xsdm_defs = { - "xsdm", - {true, true, true}, true, DBG_XSTORM_ID, - {DBG_BUS_CLIENT_RBCX, DBG_BUS_CLIENT_RBCX, DBG_BUS_CLIENT_RBCX}, - XSDM_REG_DBG_SELECT, XSDM_REG_DBG_DWORD_ENABLE, - XSDM_REG_DBG_SHIFT, XSDM_REG_DBG_FORCE_VALID, - XSDM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 20 -}; - -static struct block_defs block_ysdm_defs = { - "ysdm", - {true, true, true}, true, DBG_YSTORM_ID, - {DBG_BUS_CLIENT_RBCX, DBG_BUS_CLIENT_RBCY, DBG_BUS_CLIENT_RBCY}, - YSDM_REG_DBG_SELECT, YSDM_REG_DBG_DWORD_ENABLE, - YSDM_REG_DBG_SHIFT, YSDM_REG_DBG_FORCE_VALID, - YSDM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 8 -}; - -static struct block_defs block_psdm_defs = { - "psdm", - {true, true, true}, true, DBG_PSTORM_ID, - {DBG_BUS_CLIENT_RBCS, DBG_BUS_CLIENT_RBCS, DBG_BUS_CLIENT_RBCS}, - PSDM_REG_DBG_SELECT, PSDM_REG_DBG_DWORD_ENABLE, - PSDM_REG_DBG_SHIFT, PSDM_REG_DBG_FORCE_VALID, - PSDM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 7 -}; - -static struct block_defs block_tsem_defs = { - "tsem", - {true, true, true}, true, DBG_TSTORM_ID, - {DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCT}, - TSEM_REG_DBG_SELECT, TSEM_REG_DBG_DWORD_ENABLE, - TSEM_REG_DBG_SHIFT, TSEM_REG_DBG_FORCE_VALID, - TSEM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 4 -}; - -static struct block_defs block_msem_defs = { - "msem", - {true, true, true}, true, DBG_MSTORM_ID, - {DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCM, DBG_BUS_CLIENT_RBCM}, - MSEM_REG_DBG_SELECT, MSEM_REG_DBG_DWORD_ENABLE, - MSEM_REG_DBG_SHIFT, MSEM_REG_DBG_FORCE_VALID, - MSEM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 9 -}; - -static struct block_defs block_usem_defs = { - "usem", - {true, true, true}, true, DBG_USTORM_ID, - {DBG_BUS_CLIENT_RBCU, DBG_BUS_CLIENT_RBCU, DBG_BUS_CLIENT_RBCU}, - USEM_REG_DBG_SELECT, USEM_REG_DBG_DWORD_ENABLE, - USEM_REG_DBG_SHIFT, USEM_REG_DBG_FORCE_VALID, - USEM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 9 -}; - -static struct block_defs block_xsem_defs = { - "xsem", - {true, true, true}, true, DBG_XSTORM_ID, - {DBG_BUS_CLIENT_RBCX, DBG_BUS_CLIENT_RBCX, DBG_BUS_CLIENT_RBCX}, - XSEM_REG_DBG_SELECT, XSEM_REG_DBG_DWORD_ENABLE, - XSEM_REG_DBG_SHIFT, XSEM_REG_DBG_FORCE_VALID, - XSEM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 21 -}; - -static struct block_defs block_ysem_defs = { - "ysem", - {true, true, true}, true, DBG_YSTORM_ID, - {DBG_BUS_CLIENT_RBCX, 
DBG_BUS_CLIENT_RBCY, DBG_BUS_CLIENT_RBCY}, - YSEM_REG_DBG_SELECT, YSEM_REG_DBG_DWORD_ENABLE, - YSEM_REG_DBG_SHIFT, YSEM_REG_DBG_FORCE_VALID, - YSEM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 11 -}; - -static struct block_defs block_psem_defs = { - "psem", - {true, true, true}, true, DBG_PSTORM_ID, - {DBG_BUS_CLIENT_RBCS, DBG_BUS_CLIENT_RBCS, DBG_BUS_CLIENT_RBCS}, - PSEM_REG_DBG_SELECT, PSEM_REG_DBG_DWORD_ENABLE, - PSEM_REG_DBG_SHIFT, PSEM_REG_DBG_FORCE_VALID, - PSEM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 10 -}; - -static struct block_defs block_rss_defs = { - "rss", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCT}, - RSS_REG_DBG_SELECT, RSS_REG_DBG_DWORD_ENABLE, - RSS_REG_DBG_SHIFT, RSS_REG_DBG_FORCE_VALID, - RSS_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 18 -}; - -static struct block_defs block_tmld_defs = { - "tmld", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCM, DBG_BUS_CLIENT_RBCM}, - TMLD_REG_DBG_SELECT, TMLD_REG_DBG_DWORD_ENABLE, - TMLD_REG_DBG_SHIFT, TMLD_REG_DBG_FORCE_VALID, - TMLD_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 13 -}; - -static struct block_defs block_muld_defs = { - "muld", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCU, DBG_BUS_CLIENT_RBCU, DBG_BUS_CLIENT_RBCU}, - MULD_REG_DBG_SELECT, MULD_REG_DBG_DWORD_ENABLE, - MULD_REG_DBG_SHIFT, MULD_REG_DBG_FORCE_VALID, - MULD_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 14 -}; - -static struct block_defs block_yuld_defs = { - "yuld", - {true, true, false}, false, 0, - {DBG_BUS_CLIENT_RBCU, DBG_BUS_CLIENT_RBCU, - MAX_DBG_BUS_CLIENTS}, - YULD_REG_DBG_SELECT_BB_K2, YULD_REG_DBG_DWORD_ENABLE_BB_K2, - YULD_REG_DBG_SHIFT_BB_K2, YULD_REG_DBG_FORCE_VALID_BB_K2, - YULD_REG_DBG_FORCE_FRAME_BB_K2, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, - 15 -}; - -static struct block_defs block_xyld_defs = { - "xyld", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCX, DBG_BUS_CLIENT_RBCX, DBG_BUS_CLIENT_RBCX}, - XYLD_REG_DBG_SELECT, XYLD_REG_DBG_DWORD_ENABLE, - XYLD_REG_DBG_SHIFT, XYLD_REG_DBG_FORCE_VALID, - XYLD_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 12 -}; - -static struct block_defs block_ptld_defs = { - "ptld", - {false, false, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, DBG_BUS_CLIENT_RBCT}, - PTLD_REG_DBG_SELECT_E5, PTLD_REG_DBG_DWORD_ENABLE_E5, - PTLD_REG_DBG_SHIFT_E5, PTLD_REG_DBG_FORCE_VALID_E5, - PTLD_REG_DBG_FORCE_FRAME_E5, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, - 28 -}; - -static struct block_defs block_ypld_defs = { - "ypld", - {false, false, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, DBG_BUS_CLIENT_RBCS}, - YPLD_REG_DBG_SELECT_E5, YPLD_REG_DBG_DWORD_ENABLE_E5, - YPLD_REG_DBG_SHIFT_E5, YPLD_REG_DBG_FORCE_VALID_E5, - YPLD_REG_DBG_FORCE_FRAME_E5, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, - 27 -}; - -static struct block_defs block_prm_defs = { - "prm", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCM, DBG_BUS_CLIENT_RBCM}, - PRM_REG_DBG_SELECT, PRM_REG_DBG_DWORD_ENABLE, - PRM_REG_DBG_SHIFT, PRM_REG_DBG_FORCE_VALID, - PRM_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 21 -}; - -static struct block_defs block_pbf_pb1_defs = { - "pbf_pb1", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCS, DBG_BUS_CLIENT_RBCV, DBG_BUS_CLIENT_RBCV}, - PBF_PB1_REG_DBG_SELECT, 
PBF_PB1_REG_DBG_DWORD_ENABLE, - PBF_PB1_REG_DBG_SHIFT, PBF_PB1_REG_DBG_FORCE_VALID, - PBF_PB1_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, - 11 -}; - -static struct block_defs block_pbf_pb2_defs = { - "pbf_pb2", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCS, DBG_BUS_CLIENT_RBCV, DBG_BUS_CLIENT_RBCV}, - PBF_PB2_REG_DBG_SELECT, PBF_PB2_REG_DBG_DWORD_ENABLE, - PBF_PB2_REG_DBG_SHIFT, PBF_PB2_REG_DBG_FORCE_VALID, - PBF_PB2_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, - 12 -}; - -static struct block_defs block_rpb_defs = { - "rpb", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCM, DBG_BUS_CLIENT_RBCM}, - RPB_REG_DBG_SELECT, RPB_REG_DBG_DWORD_ENABLE, - RPB_REG_DBG_SHIFT, RPB_REG_DBG_FORCE_VALID, - RPB_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 13 -}; - -static struct block_defs block_btb_defs = { - "btb", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCR, DBG_BUS_CLIENT_RBCV, DBG_BUS_CLIENT_RBCV}, - BTB_REG_DBG_SELECT, BTB_REG_DBG_DWORD_ENABLE, - BTB_REG_DBG_SHIFT, BTB_REG_DBG_FORCE_VALID, - BTB_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 10 -}; - -static struct block_defs block_pbf_defs = { - "pbf", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCS, DBG_BUS_CLIENT_RBCV, DBG_BUS_CLIENT_RBCV}, - PBF_REG_DBG_SELECT, PBF_REG_DBG_DWORD_ENABLE, - PBF_REG_DBG_SHIFT, PBF_REG_DBG_FORCE_VALID, - PBF_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 15 -}; - -static struct block_defs block_rdif_defs = { - "rdif", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCT, DBG_BUS_CLIENT_RBCM, DBG_BUS_CLIENT_RBCM}, - RDIF_REG_DBG_SELECT, RDIF_REG_DBG_DWORD_ENABLE, - RDIF_REG_DBG_SHIFT, RDIF_REG_DBG_FORCE_VALID, - RDIF_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 16 -}; - -static struct block_defs block_tdif_defs = { - "tdif", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCS, DBG_BUS_CLIENT_RBCS, DBG_BUS_CLIENT_RBCS}, - TDIF_REG_DBG_SELECT, TDIF_REG_DBG_DWORD_ENABLE, - TDIF_REG_DBG_SHIFT, TDIF_REG_DBG_FORCE_VALID, - TDIF_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 17 -}; - -static struct block_defs block_cdu_defs = { - "cdu", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCF, DBG_BUS_CLIENT_RBCF, DBG_BUS_CLIENT_RBCF}, - CDU_REG_DBG_SELECT, CDU_REG_DBG_DWORD_ENABLE, - CDU_REG_DBG_SHIFT, CDU_REG_DBG_FORCE_VALID, - CDU_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 23 -}; - -static struct block_defs block_ccfc_defs = { - "ccfc", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCF, DBG_BUS_CLIENT_RBCF, DBG_BUS_CLIENT_RBCF}, - CCFC_REG_DBG_SELECT, CCFC_REG_DBG_DWORD_ENABLE, - CCFC_REG_DBG_SHIFT, CCFC_REG_DBG_FORCE_VALID, - CCFC_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 24 -}; - -static struct block_defs block_tcfc_defs = { - "tcfc", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCF, DBG_BUS_CLIENT_RBCF, DBG_BUS_CLIENT_RBCF}, - TCFC_REG_DBG_SELECT, TCFC_REG_DBG_DWORD_ENABLE, - TCFC_REG_DBG_SHIFT, TCFC_REG_DBG_FORCE_VALID, - TCFC_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 25 -}; - -static struct block_defs block_igu_defs = { - "igu", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP}, - IGU_REG_DBG_SELECT, IGU_REG_DBG_DWORD_ENABLE, - IGU_REG_DBG_SHIFT, IGU_REG_DBG_FORCE_VALID, - IGU_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, 27 -}; - 
-static struct block_defs block_cau_defs = { - "cau", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP, DBG_BUS_CLIENT_RBCP}, - CAU_REG_DBG_SELECT, CAU_REG_DBG_DWORD_ENABLE, - CAU_REG_DBG_SHIFT, CAU_REG_DBG_FORCE_VALID, - CAU_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 19 -}; - -static struct block_defs block_rgfs_defs = { - "rgfs", {false, false, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 29 -}; - -static struct block_defs block_rgsrc_defs = { - "rgsrc", - {false, false, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, DBG_BUS_CLIENT_RBCH}, - RGSRC_REG_DBG_SELECT_E5, RGSRC_REG_DBG_DWORD_ENABLE_E5, - RGSRC_REG_DBG_SHIFT_E5, RGSRC_REG_DBG_FORCE_VALID_E5, - RGSRC_REG_DBG_FORCE_FRAME_E5, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, - 30 -}; - -static struct block_defs block_tgfs_defs = { - "tgfs", {false, false, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, 30 -}; - -static struct block_defs block_tgsrc_defs = { - "tgsrc", - {false, false, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, DBG_BUS_CLIENT_RBCV}, - TGSRC_REG_DBG_SELECT_E5, TGSRC_REG_DBG_DWORD_ENABLE_E5, - TGSRC_REG_DBG_SHIFT_E5, TGSRC_REG_DBG_FORCE_VALID_E5, - TGSRC_REG_DBG_FORCE_FRAME_E5, - true, true, DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, - 31 -}; - -static struct block_defs block_umac_defs = { - "umac", - {true, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, DBG_BUS_CLIENT_RBCZ, - DBG_BUS_CLIENT_RBCZ}, - UMAC_REG_DBG_SELECT_K2_E5, UMAC_REG_DBG_DWORD_ENABLE_K2_E5, - UMAC_REG_DBG_SHIFT_K2_E5, UMAC_REG_DBG_FORCE_VALID_K2_E5, - UMAC_REG_DBG_FORCE_FRAME_K2_E5, - true, false, DBG_RESET_REG_MISCS_PL_HV, 6 -}; - -static struct block_defs block_xmac_defs = { - "xmac", {true, false, false}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - false, false, MAX_DBG_RESET_REGS, 0 -}; - -static struct block_defs block_dbg_defs = { - "dbg", {true, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - true, true, DBG_RESET_REG_MISC_PL_PDA_VAUX, 3 -}; - -static struct block_defs block_nig_defs = { - "nig", - {true, true, true}, false, 0, - {DBG_BUS_CLIENT_RBCN, DBG_BUS_CLIENT_RBCN, DBG_BUS_CLIENT_RBCN}, - NIG_REG_DBG_SELECT, NIG_REG_DBG_DWORD_ENABLE, - NIG_REG_DBG_SHIFT, NIG_REG_DBG_FORCE_VALID, - NIG_REG_DBG_FORCE_FRAME, - true, true, DBG_RESET_REG_MISC_PL_PDA_VAUX, 0 -}; - -static struct block_defs block_wol_defs = { - "wol", - {false, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, DBG_BUS_CLIENT_RBCZ, DBG_BUS_CLIENT_RBCZ}, - WOL_REG_DBG_SELECT_K2_E5, WOL_REG_DBG_DWORD_ENABLE_K2_E5, - WOL_REG_DBG_SHIFT_K2_E5, WOL_REG_DBG_FORCE_VALID_K2_E5, - WOL_REG_DBG_FORCE_FRAME_K2_E5, - true, true, DBG_RESET_REG_MISC_PL_PDA_VAUX, 7 -}; - -static struct block_defs block_bmbn_defs = { - "bmbn", - {false, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, DBG_BUS_CLIENT_RBCB, - DBG_BUS_CLIENT_RBCB}, - BMBN_REG_DBG_SELECT_K2_E5, BMBN_REG_DBG_DWORD_ENABLE_K2_E5, - BMBN_REG_DBG_SHIFT_K2_E5, BMBN_REG_DBG_FORCE_VALID_K2_E5, - BMBN_REG_DBG_FORCE_FRAME_K2_E5, - false, false, MAX_DBG_RESET_REGS, 0 -}; - -static struct block_defs block_ipc_defs = { - "ipc", {true, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - true, false, 
DBG_RESET_REG_MISCS_PL_UA, 8 -}; - -static struct block_defs block_nwm_defs = { - "nwm", - {false, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, DBG_BUS_CLIENT_RBCW, DBG_BUS_CLIENT_RBCW}, - NWM_REG_DBG_SELECT_K2_E5, NWM_REG_DBG_DWORD_ENABLE_K2_E5, - NWM_REG_DBG_SHIFT_K2_E5, NWM_REG_DBG_FORCE_VALID_K2_E5, - NWM_REG_DBG_FORCE_FRAME_K2_E5, - true, false, DBG_RESET_REG_MISCS_PL_HV_2, 0 -}; - -static struct block_defs block_nws_defs = { - "nws", - {false, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, DBG_BUS_CLIENT_RBCW, DBG_BUS_CLIENT_RBCW}, - NWS_REG_DBG_SELECT_K2_E5, NWS_REG_DBG_DWORD_ENABLE_K2_E5, - NWS_REG_DBG_SHIFT_K2_E5, NWS_REG_DBG_FORCE_VALID_K2_E5, - NWS_REG_DBG_FORCE_FRAME_K2_E5, - true, false, DBG_RESET_REG_MISCS_PL_HV, 12 -}; - -static struct block_defs block_ms_defs = { - "ms", - {false, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, DBG_BUS_CLIENT_RBCZ, DBG_BUS_CLIENT_RBCZ}, - MS_REG_DBG_SELECT_K2_E5, MS_REG_DBG_DWORD_ENABLE_K2_E5, - MS_REG_DBG_SHIFT_K2_E5, MS_REG_DBG_FORCE_VALID_K2_E5, - MS_REG_DBG_FORCE_FRAME_K2_E5, - true, false, DBG_RESET_REG_MISCS_PL_HV, 13 -}; - -static struct block_defs block_phy_pcie_defs = { - "phy_pcie", - {false, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, DBG_BUS_CLIENT_RBCH, - DBG_BUS_CLIENT_RBCH}, - PCIE_REG_DBG_COMMON_SELECT_K2_E5, - PCIE_REG_DBG_COMMON_DWORD_ENABLE_K2_E5, - PCIE_REG_DBG_COMMON_SHIFT_K2_E5, - PCIE_REG_DBG_COMMON_FORCE_VALID_K2_E5, - PCIE_REG_DBG_COMMON_FORCE_FRAME_K2_E5, - false, false, MAX_DBG_RESET_REGS, 0 -}; - -static struct block_defs block_led_defs = { - "led", {false, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - true, false, DBG_RESET_REG_MISCS_PL_HV, 14 -}; - -static struct block_defs block_avs_wrap_defs = { - "avs_wrap", {false, true, false}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - true, false, DBG_RESET_REG_MISCS_PL_UA, 11 -}; - -static struct block_defs block_pxpreqbus_defs = { - "pxpreqbus", {false, false, false}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - false, false, MAX_DBG_RESET_REGS, 0 -}; - -static struct block_defs block_misc_aeu_defs = { - "misc_aeu", {true, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - false, false, MAX_DBG_RESET_REGS, 0 -}; - -static struct block_defs block_bar0_map_defs = { - "bar0_map", {true, true, true}, false, 0, - {MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS, MAX_DBG_BUS_CLIENTS}, - 0, 0, 0, 0, 0, - false, false, MAX_DBG_RESET_REGS, 0 -}; - -static struct block_defs *s_block_defs[MAX_BLOCK_ID] = { - &block_grc_defs, - &block_miscs_defs, - &block_misc_defs, - &block_dbu_defs, - &block_pglue_b_defs, - &block_cnig_defs, - &block_cpmu_defs, - &block_ncsi_defs, - &block_opte_defs, - &block_bmb_defs, - &block_pcie_defs, - &block_mcp_defs, - &block_mcp2_defs, - &block_pswhst_defs, - &block_pswhst2_defs, - &block_pswrd_defs, - &block_pswrd2_defs, - &block_pswwr_defs, - &block_pswwr2_defs, - &block_pswrq_defs, - &block_pswrq2_defs, - &block_pglcs_defs, - &block_dmae_defs, - &block_ptu_defs, - &block_tcm_defs, - &block_mcm_defs, - &block_ucm_defs, - &block_xcm_defs, - &block_ycm_defs, - &block_pcm_defs, - &block_qm_defs, - &block_tm_defs, - &block_dorq_defs, - &block_brb_defs, - &block_src_defs, - &block_prs_defs, - &block_tsdm_defs, - &block_msdm_defs, - &block_usdm_defs, - &block_xsdm_defs, - &block_ysdm_defs, - &block_psdm_defs, - &block_tsem_defs, - 
&block_msem_defs, - &block_usem_defs, - &block_xsem_defs, - &block_ysem_defs, - &block_psem_defs, - &block_rss_defs, - &block_tmld_defs, - &block_muld_defs, - &block_yuld_defs, - &block_xyld_defs, - &block_ptld_defs, - &block_ypld_defs, - &block_prm_defs, - &block_pbf_pb1_defs, - &block_pbf_pb2_defs, - &block_rpb_defs, - &block_btb_defs, - &block_pbf_defs, - &block_rdif_defs, - &block_tdif_defs, - &block_cdu_defs, - &block_ccfc_defs, - &block_tcfc_defs, - &block_igu_defs, - &block_cau_defs, - &block_rgfs_defs, - &block_rgsrc_defs, - &block_tgfs_defs, - &block_tgsrc_defs, - &block_umac_defs, - &block_xmac_defs, - &block_dbg_defs, - &block_nig_defs, - &block_wol_defs, - &block_bmbn_defs, - &block_ipc_defs, - &block_nwm_defs, - &block_nws_defs, - &block_ms_defs, - &block_phy_pcie_defs, - &block_led_defs, - &block_avs_wrap_defs, - &block_pxpreqbus_defs, - &block_misc_aeu_defs, - &block_bar0_map_defs, -}; - -static struct platform_defs s_platform_defs[] = { + {DBG_BUS_CLIENT_RBCS, DBG_BUS_CLIENT_RBCS}, + true, + PSEM_REG_FAST_MEMORY, + PSEM_REG_DBG_FRAME_MODE_BB_K2, + PSEM_REG_SLOW_DBG_ACTIVE_BB_K2, + PSEM_REG_SLOW_DBG_MODE_BB_K2, + PSEM_REG_DBG_MODE1_CFG_BB_K2, + PSEM_REG_SYNC_DBG_EMPTY, + PSEM_REG_DBG_GPRE_VECT, + PCM_REG_CTX_RBC_ACCS, + {0, PCM_REG_SM_CON_CTX, 0, 0}, + {{0, 10, 0, 0}, {0, 10, 0, 0}} /* {bb} {k2} */ + }, +}; + +static struct hw_type_defs s_hw_type_defs[] = { + /* HW_TYPE_ASIC */ {"asic", 1, 256, 32768}, {"reserved", 0, 0, 0}, {"reserved2", 0, 0, 0}, @@ -1509,161 +579,159 @@ static struct platform_defs s_platform_defs[] = { static struct grc_param_defs s_grc_param_defs[] = { /* DBG_GRC_PARAM_DUMP_TSTORM */ - {{1, 1, 1}, 0, 1, false, false, 1, 1}, + {{1, 1}, 0, 1, false, false, 1, {1, 1}}, /* DBG_GRC_PARAM_DUMP_MSTORM */ - {{1, 1, 1}, 0, 1, false, false, 1, 1}, + {{1, 1}, 0, 1, false, false, 1, {1, 1}}, /* DBG_GRC_PARAM_DUMP_USTORM */ - {{1, 1, 1}, 0, 1, false, false, 1, 1}, + {{1, 1}, 0, 1, false, false, 1, {1, 1}}, /* DBG_GRC_PARAM_DUMP_XSTORM */ - {{1, 1, 1}, 0, 1, false, false, 1, 1}, + {{1, 1}, 0, 1, false, false, 1, {1, 1}}, /* DBG_GRC_PARAM_DUMP_YSTORM */ - {{1, 1, 1}, 0, 1, false, false, 1, 1}, + {{1, 1}, 0, 1, false, false, 1, {1, 1}}, /* DBG_GRC_PARAM_DUMP_PSTORM */ - {{1, 1, 1}, 0, 1, false, false, 1, 1}, + {{1, 1}, 0, 1, false, false, 1, {1, 1}}, /* DBG_GRC_PARAM_DUMP_REGS */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_RAM */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_PBUF */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_IOR */ - {{0, 0, 0}, 0, 1, false, false, 0, 1}, + {{0, 0}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_VFC */ - {{0, 0, 0}, 0, 1, false, false, 0, 1}, + {{0, 0}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_CM_CTX */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_ILT */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_RSS */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_CAU */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_QM */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_MCP */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 
1}}, - /* DBG_GRC_PARAM_MCP_TRACE_META_SIZE */ - {{1, 1, 1}, 1, 0xffffffff, false, true, 0, 1}, + /* DBG_GRC_PARAM_DUMP_DORQ */ + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_CFC */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_IGU */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_BRB */ - {{0, 0, 0}, 0, 1, false, false, 0, 1}, + {{0, 0}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_BTB */ - {{0, 0, 0}, 0, 1, false, false, 0, 1}, + {{0, 0}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_BMB */ - {{0, 0, 0}, 0, 1, false, false, 0, 0}, + {{0, 0}, 0, 1, false, false, 0, {0, 0}}, - /* DBG_GRC_PARAM_DUMP_NIG */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + /* DBG_GRC_PARAM_RESERVED1 */ + {{0, 0}, 0, 1, false, false, 0, {0, 0}}, /* DBG_GRC_PARAM_DUMP_MULD */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_PRS */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_DMAE */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_TM */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_SDM */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_DIF */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_STATIC */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_UNSTALL */ - {{0, 0, 0}, 0, 1, false, false, 0, 0}, + {{0, 0}, 0, 1, false, false, 0, {0, 0}}, - /* DBG_GRC_PARAM_NUM_LCIDS */ - {{MAX_LCIDS, MAX_LCIDS, MAX_LCIDS}, 1, MAX_LCIDS, false, false, - MAX_LCIDS, MAX_LCIDS}, + /* DBG_GRC_PARAM_RESERVED2 */ + {{0, 0}, 0, 1, false, false, 0, {0, 0}}, - /* DBG_GRC_PARAM_NUM_LTIDS */ - {{MAX_LTIDS, MAX_LTIDS, MAX_LTIDS}, 1, MAX_LTIDS, false, false, - MAX_LTIDS, MAX_LTIDS}, + /* DBG_GRC_PARAM_MCP_TRACE_META_SIZE */ + {{0, 0}, 1, 0xffffffff, false, true, 0, {0, 0}}, /* DBG_GRC_PARAM_EXCLUDE_ALL */ - {{0, 0, 0}, 0, 1, true, false, 0, 0}, + {{0, 0}, 0, 1, true, false, 0, {0, 0}}, /* DBG_GRC_PARAM_CRASH */ - {{0, 0, 0}, 0, 1, true, false, 0, 0}, + {{0, 0}, 0, 1, true, false, 0, {0, 0}}, /* DBG_GRC_PARAM_PARITY_SAFE */ - {{0, 0, 0}, 0, 1, false, false, 1, 0}, + {{0, 0}, 0, 1, false, false, 0, {0, 0}}, /* DBG_GRC_PARAM_DUMP_CM */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{1, 1}, 0, 1, false, false, 0, {1, 1}}, /* DBG_GRC_PARAM_DUMP_PHY */ - {{1, 1, 1}, 0, 1, false, false, 0, 1}, + {{0, 0}, 0, 1, false, false, 0, {0, 0}}, /* DBG_GRC_PARAM_NO_MCP */ - {{0, 0, 0}, 0, 1, false, false, 0, 0}, + {{0, 0}, 0, 1, false, false, 0, {0, 0}}, /* DBG_GRC_PARAM_NO_FW_VER */ - {{0, 0, 0}, 0, 1, false, false, 0, 0}, + {{0, 0}, 0, 1, false, false, 0, {0, 0}}, /* DBG_GRC_PARAM_RESERVED3 */ - {{0, 0, 0}, 0, 1, false, false, 0, 0}, + {{0, 0}, 0, 1, false, false, 0, {0, 0}}, /* DBG_GRC_PARAM_DUMP_MCP_HW_DUMP */ - {{0, 1, 1}, 0, 1, false, false, 0, 0}, + {{0, 1}, 0, 1, false, false, 0, {0, 1}}, /* DBG_GRC_PARAM_DUMP_ILT_CDUC */ - {{1, 1, 1}, 0, 1, false, false, 0, 0}, + {{1, 1}, 0, 1, false, false, 0, {0, 0}}, /* DBG_GRC_PARAM_DUMP_ILT_CDUT */ - {{1, 1, 1}, 0, 1, false, false, 0, 0}, + {{1, 1}, 0, 1, false, false, 0, {0, 0}}, /* DBG_GRC_PARAM_DUMP_CAU_EXT */ - {{0, 0, 0}, 0, 1, false, false, 0, 1} + {{0, 0}, 0, 1, 
false, false, 0, {1, 1}} }; static struct rss_mem_defs s_rss_mem_defs[] = { - { "rss_mem_cid", "rss_cid", 0, 32, - {256, 320, 512} }, + {"rss_mem_cid", "rss_cid", 0, 32, + {256, 320}}, - { "rss_mem_key_msb", "rss_key", 1024, 256, - {128, 208, 257} }, + {"rss_mem_key_msb", "rss_key", 1024, 256, + {128, 208}}, - { "rss_mem_key_lsb", "rss_key", 2048, 64, - {128, 208, 257} }, + {"rss_mem_key_lsb", "rss_key", 2048, 64, + {128, 208}}, - { "rss_mem_info", "rss_info", 3072, 16, - {128, 208, 256} }, + {"rss_mem_info", "rss_info", 3072, 16, + {128, 208}}, - { "rss_mem_ind", "rss_ind", 4096, 16, - {16384, 26624, 32768} } + {"rss_mem_ind", "rss_ind", 4096, 16, + {16384, 26624}} }; static struct vfc_ram_defs s_vfc_ram_defs[] = { @@ -1674,54 +742,31 @@ static struct vfc_ram_defs s_vfc_ram_defs[] = { }; static struct big_ram_defs s_big_ram_defs[] = { - { "BRB", MEM_GROUP_BRB_MEM, MEM_GROUP_BRB_RAM, DBG_GRC_PARAM_DUMP_BRB, - BRB_REG_BIG_RAM_ADDRESS, BRB_REG_BIG_RAM_DATA, - MISC_REG_BLOCK_256B_EN, {0, 0, 0}, - {153600, 180224, 282624} }, - - { "BTB", MEM_GROUP_BTB_MEM, MEM_GROUP_BTB_RAM, DBG_GRC_PARAM_DUMP_BTB, - BTB_REG_BIG_RAM_ADDRESS, BTB_REG_BIG_RAM_DATA, - MISC_REG_BLOCK_256B_EN, {0, 1, 1}, - {92160, 117760, 168960} }, - - { "BMB", MEM_GROUP_BMB_MEM, MEM_GROUP_BMB_RAM, DBG_GRC_PARAM_DUMP_BMB, - BMB_REG_BIG_RAM_ADDRESS, BMB_REG_BIG_RAM_DATA, - MISCS_REG_BLOCK_256B_EN, {0, 0, 0}, - {36864, 36864, 36864} } -}; - -static struct reset_reg_defs s_reset_regs_defs[] = { - /* DBG_RESET_REG_MISCS_PL_UA */ - { MISCS_REG_RESET_PL_UA, - {true, true, true}, {0x0, 0x0, 0x0} }, - - /* DBG_RESET_REG_MISCS_PL_HV */ - { MISCS_REG_RESET_PL_HV, - {true, true, true}, {0x0, 0x400, 0x600} }, + {"BRB", MEM_GROUP_BRB_MEM, MEM_GROUP_BRB_RAM, DBG_GRC_PARAM_DUMP_BRB, + BRB_REG_BIG_RAM_ADDRESS, BRB_REG_BIG_RAM_DATA, + MISC_REG_BLOCK_256B_EN, {0, 0}, + {153600, 180224}}, - /* DBG_RESET_REG_MISCS_PL_HV_2 */ - { MISCS_REG_RESET_PL_HV_2_K2_E5, - {false, true, true}, {0x0, 0x0, 0x0} }, + {"BTB", MEM_GROUP_BTB_MEM, MEM_GROUP_BTB_RAM, DBG_GRC_PARAM_DUMP_BTB, + BTB_REG_BIG_RAM_ADDRESS, BTB_REG_BIG_RAM_DATA, + MISC_REG_BLOCK_256B_EN, {0, 1}, + {92160, 117760}}, - /* DBG_RESET_REG_MISC_PL_UA */ - { MISC_REG_RESET_PL_UA, - {true, true, true}, {0x0, 0x0, 0x0} }, - - /* DBG_RESET_REG_MISC_PL_HV */ - { MISC_REG_RESET_PL_HV, - {true, true, true}, {0x0, 0x0, 0x0} }, - - /* DBG_RESET_REG_MISC_PL_PDA_VMAIN_1 */ - { MISC_REG_RESET_PL_PDA_VMAIN_1, - {true, true, true}, {0x4404040, 0x4404040, 0x404040} }, - - /* DBG_RESET_REG_MISC_PL_PDA_VMAIN_2 */ - { MISC_REG_RESET_PL_PDA_VMAIN_2, - {true, true, true}, {0x7, 0x7c00007, 0x5c08007} }, + {"BMB", MEM_GROUP_BMB_MEM, MEM_GROUP_BMB_RAM, DBG_GRC_PARAM_DUMP_BMB, + BMB_REG_BIG_RAM_ADDRESS, BMB_REG_BIG_RAM_DATA, + MISCS_REG_BLOCK_256B_EN, {0, 0}, + {36864, 36864}} +}; - /* DBG_RESET_REG_MISC_PL_PDA_VAUX */ - { MISC_REG_RESET_PL_PDA_VAUX, - {true, true, true}, {0x2, 0x2, 0x2} }, +static struct rbc_reset_defs s_rbc_reset_defs[] = { + {MISCS_REG_RESET_PL_HV, + {0x0, 0x400}}, + {MISC_REG_RESET_PL_PDA_VMAIN_1, + {0x4404040, 0x4404040}}, + {MISC_REG_RESET_PL_PDA_VMAIN_2, + {0x7, 0x7c00007}}, + {MISC_REG_RESET_PL_PDA_VAUX, + {0x2, 0x2}}, }; static struct phy_defs s_phy_defs[] = { @@ -1804,9 +849,19 @@ static void qed_dbg_grc_init_params(struct qed_hwfn *p_hwfn) } } +/* Sets pointer and size for the specified binary buffer type */ +static void qed_set_dbg_bin_buf(struct qed_hwfn *p_hwfn, + enum bin_dbg_buffer_type buf_type, + const u32 *ptr, u32 size) +{ + struct virt_mem_desc *buf = &p_hwfn->dbg_arrays[buf_type]; + + 
buf->ptr = (void *)ptr; + buf->size = size; +} + /* Initializes debug data for the specified device */ -static enum dbg_status qed_dbg_dev_init(struct qed_hwfn *p_hwfn, - struct qed_ptt *p_ptt) +static enum dbg_status qed_dbg_dev_init(struct qed_hwfn *p_hwfn) { struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; u8 num_pfs = 0, max_pfs_per_port = 0; @@ -1831,26 +886,25 @@ static enum dbg_status qed_dbg_dev_init(struct qed_hwfn *p_hwfn, return DBG_STATUS_UNKNOWN_CHIP; } - /* Set platofrm */ - dev_data->platform_id = PLATFORM_ASIC; + /* Set HW type */ + dev_data->hw_type = HW_TYPE_ASIC; dev_data->mode_enable[MODE_ASIC] = 1; /* Set port mode */ - switch (qed_rd(p_hwfn, p_ptt, MISC_REG_PORT_MODE)) { - case 0: + switch (p_hwfn->cdev->num_ports_in_engine) { + case 1: dev_data->mode_enable[MODE_PORTS_PER_ENG_1] = 1; break; - case 1: + case 2: dev_data->mode_enable[MODE_PORTS_PER_ENG_2] = 1; break; - case 2: + case 4: dev_data->mode_enable[MODE_PORTS_PER_ENG_4] = 1; break; } /* Set 100G mode */ - if (dev_data->chip_id == CHIP_BB && - qed_rd(p_hwfn, p_ptt, CNIG_REG_NW_PORT_MODE_BB) == 2) + if (QED_IS_CMT(p_hwfn->cdev)) dev_data->mode_enable[MODE_100G] = 1; /* Set number of ports */ @@ -1876,14 +930,36 @@ static enum dbg_status qed_dbg_dev_init(struct qed_hwfn *p_hwfn, return DBG_STATUS_OK; } -static struct dbg_bus_block *get_dbg_bus_block_desc(struct qed_hwfn *p_hwfn, - enum block_id block_id) +static const struct dbg_block *get_dbg_block(struct qed_hwfn *p_hwfn, + enum block_id block_id) +{ + const struct dbg_block *dbg_block; + + dbg_block = p_hwfn->dbg_arrays[BIN_BUF_DBG_BLOCKS].ptr; + return dbg_block + block_id; +} + +static const struct dbg_block_chip *qed_get_dbg_block_per_chip(struct qed_hwfn + *p_hwfn, + enum block_id + block_id) +{ + struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; + + return (const struct dbg_block_chip *) + p_hwfn->dbg_arrays[BIN_BUF_DBG_BLOCKS_CHIP_DATA].ptr + + block_id * MAX_CHIP_IDS + dev_data->chip_id; +} + +static const struct dbg_reset_reg *qed_get_dbg_reset_reg(struct qed_hwfn + *p_hwfn, + u8 reset_reg_id) { struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; - return (struct dbg_bus_block *)&dbg_bus_blocks[block_id * - MAX_CHIP_IDS + - dev_data->chip_id]; + return (const struct dbg_reset_reg *) + p_hwfn->dbg_arrays[BIN_BUF_DBG_RESET_REGS].ptr + + reset_reg_id * MAX_CHIP_IDS + dev_data->chip_id; } /* Reads the FW info structure for the specified Storm from the chip, @@ -1904,8 +980,9 @@ static void qed_read_storm_fw_info(struct qed_hwfn *p_hwfn, * The address is located in the last line of the Storm RAM. */ addr = storm->sem_fast_mem_addr + SEM_FAST_REG_INT_RAM + - DWORDS_TO_BYTES(SEM_FAST_REG_INT_RAM_SIZE_BB_K2) - - sizeof(fw_info_location); + DWORDS_TO_BYTES(SEM_FAST_REG_INT_RAM_SIZE) - + sizeof(fw_info_location); + dest = (u32 *)&fw_info_location; for (i = 0; i < BYTES_TO_DWORDS(sizeof(fw_info_location)); @@ -2100,6 +1177,29 @@ static u32 qed_dump_mfw_ver_param(struct qed_hwfn *p_hwfn, return qed_dump_str_param(dump_buf, dump, "mfw-version", mfw_ver_str); } +/* Reads the chip revision from the chip and writes it as a param to the + * specified buffer. Returns the dumped size in dwords. 
+ */ +static u32 qed_dump_chip_revision_param(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt, + u32 *dump_buf, bool dump) +{ + struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; + char param_str[3] = "??"; + + if (dev_data->hw_type == HW_TYPE_ASIC) { + u32 chip_rev, chip_metal; + + chip_rev = qed_rd(p_hwfn, p_ptt, MISCS_REG_CHIP_REV); + chip_metal = qed_rd(p_hwfn, p_ptt, MISCS_REG_CHIP_METAL); + + param_str[0] = 'a' + (u8)chip_rev; + param_str[1] = '0' + (u8)chip_metal; + } + + return qed_dump_str_param(dump_buf, dump, "chip-revision", param_str); +} + /* Writes a section header to the specified buffer. * Returns the dumped size in dwords. */ @@ -2123,7 +1223,8 @@ static u32 qed_dump_common_global_params(struct qed_hwfn *p_hwfn, u8 num_params; /* Dump global params section header */ - num_params = NUM_COMMON_GLOBAL_PARAMS + num_specific_global_params; + num_params = NUM_COMMON_GLOBAL_PARAMS + num_specific_global_params + + (dev_data->chip_id == CHIP_BB ? 1 : 0); offset += qed_dump_section_hdr(dump_buf + offset, dump, "global_params", num_params); @@ -2131,6 +1232,8 @@ static u32 qed_dump_common_global_params(struct qed_hwfn *p_hwfn, offset += qed_dump_fw_ver_param(p_hwfn, p_ptt, dump_buf + offset, dump); offset += qed_dump_mfw_ver_param(p_hwfn, p_ptt, dump_buf + offset, dump); + offset += qed_dump_chip_revision_param(p_hwfn, + p_ptt, dump_buf + offset, dump); offset += qed_dump_num_param(dump_buf + offset, dump, "tools-version", TOOLS_VERSION); offset += qed_dump_str_param(dump_buf + offset, @@ -2140,11 +1243,12 @@ static u32 qed_dump_common_global_params(struct qed_hwfn *p_hwfn, offset += qed_dump_str_param(dump_buf + offset, dump, "platform", - s_platform_defs[dev_data->platform_id]. - name); - offset += - qed_dump_num_param(dump_buf + offset, dump, "pci-func", - p_hwfn->abs_pf_id); + s_hw_type_defs[dev_data->hw_type].name); + offset += qed_dump_num_param(dump_buf + offset, + dump, "pci-func", p_hwfn->abs_pf_id); + if (dev_data->chip_id == CHIP_BB) + offset += qed_dump_num_param(dump_buf + offset, + dump, "path", QED_PATH_ID(p_hwfn)); return offset; } @@ -2175,24 +1279,87 @@ static void qed_update_blocks_reset_state(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) { struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; - u32 reg_val[MAX_DBG_RESET_REGS] = { 0 }; - u32 i; + u32 reg_val[NUM_DBG_RESET_REGS] = { 0 }; + u8 rst_reg_id; + u32 blk_id; /* Read reset registers */ - for (i = 0; i < MAX_DBG_RESET_REGS; i++) - if (s_reset_regs_defs[i].exists[dev_data->chip_id]) - reg_val[i] = qed_rd(p_hwfn, - p_ptt, s_reset_regs_defs[i].addr); + for (rst_reg_id = 0; rst_reg_id < NUM_DBG_RESET_REGS; rst_reg_id++) { + const struct dbg_reset_reg *rst_reg; + bool rst_reg_removed; + u32 rst_reg_addr; + + rst_reg = qed_get_dbg_reset_reg(p_hwfn, rst_reg_id); + rst_reg_removed = GET_FIELD(rst_reg->data, + DBG_RESET_REG_IS_REMOVED); + rst_reg_addr = DWORDS_TO_BYTES(GET_FIELD(rst_reg->data, + DBG_RESET_REG_ADDR)); + + if (!rst_reg_removed) + reg_val[rst_reg_id] = qed_rd(p_hwfn, p_ptt, + rst_reg_addr); + } /* Check if blocks are in reset */ - for (i = 0; i < MAX_BLOCK_ID; i++) { - struct block_defs *block = s_block_defs[i]; + for (blk_id = 0; blk_id < NUM_PHYS_BLOCKS; blk_id++) { + const struct dbg_block_chip *blk; + bool has_rst_reg; + bool is_removed; + + blk = qed_get_dbg_block_per_chip(p_hwfn, (enum block_id)blk_id); + is_removed = GET_FIELD(blk->flags, DBG_BLOCK_CHIP_IS_REMOVED); + has_rst_reg = GET_FIELD(blk->flags, + DBG_BLOCK_CHIP_HAS_RESET_REG); + + if (!is_removed && has_rst_reg) + 
dev_data->block_in_reset[blk_id] = + !(reg_val[blk->reset_reg_id] & + BIT(blk->reset_reg_bit_offset)); + } +} + +/* is_mode_match recursive function */ +static bool qed_is_mode_match_rec(struct qed_hwfn *p_hwfn, + u16 *modes_buf_offset, u8 rec_depth) +{ + struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; + u8 *dbg_array; + bool arg1, arg2; + u8 tree_val; + + if (rec_depth > MAX_RECURSION_DEPTH) { + DP_NOTICE(p_hwfn, + "Unexpected error: is_mode_match_rec exceeded the max recursion depth. This is probably due to a corrupt init/debug buffer.\n"); + return false; + } + + /* Get next element from modes tree buffer */ + dbg_array = p_hwfn->dbg_arrays[BIN_BUF_DBG_MODE_TREE].ptr; + tree_val = dbg_array[(*modes_buf_offset)++]; - dev_data->block_in_reset[i] = block->has_reset_bit && - !(reg_val[block->reset_reg] & BIT(block->reset_bit_offset)); + switch (tree_val) { + case INIT_MODE_OP_NOT: + return !qed_is_mode_match_rec(p_hwfn, + modes_buf_offset, rec_depth + 1); + case INIT_MODE_OP_OR: + case INIT_MODE_OP_AND: + arg1 = qed_is_mode_match_rec(p_hwfn, + modes_buf_offset, rec_depth + 1); + arg2 = qed_is_mode_match_rec(p_hwfn, + modes_buf_offset, rec_depth + 1); + return (tree_val == INIT_MODE_OP_OR) ? (arg1 || + arg2) : (arg1 && arg2); + default: + return dev_data->mode_enable[tree_val - MAX_INIT_MODE_OPS] > 0; } } +/* Returns true if the mode (specified using modes_buf_offset) is enabled */ +static bool qed_is_mode_match(struct qed_hwfn *p_hwfn, u16 *modes_buf_offset) +{ + return qed_is_mode_match_rec(p_hwfn, modes_buf_offset, 0); +} + /* Enable / disable the Debug block */ static void qed_bus_enable_dbg_block(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, bool enable) @@ -2204,23 +1371,21 @@ static void qed_bus_enable_dbg_block(struct qed_hwfn *p_hwfn, static void qed_bus_reset_dbg_block(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) { - u32 dbg_reset_reg_addr, old_reset_reg_val, new_reset_reg_val; - struct block_defs *dbg_block = s_block_defs[BLOCK_DBG]; + u32 reset_reg_addr, old_reset_reg_val, new_reset_reg_val; + const struct dbg_reset_reg *reset_reg; + const struct dbg_block_chip *block; - dbg_reset_reg_addr = s_reset_regs_defs[dbg_block->reset_reg].addr; - old_reset_reg_val = qed_rd(p_hwfn, p_ptt, dbg_reset_reg_addr); - new_reset_reg_val = - old_reset_reg_val & ~BIT(dbg_block->reset_bit_offset); + block = qed_get_dbg_block_per_chip(p_hwfn, BLOCK_DBG); + reset_reg = qed_get_dbg_reset_reg(p_hwfn, block->reset_reg_id); + reset_reg_addr = + DWORDS_TO_BYTES(GET_FIELD(reset_reg->data, DBG_RESET_REG_ADDR)); - qed_wr(p_hwfn, p_ptt, dbg_reset_reg_addr, new_reset_reg_val); - qed_wr(p_hwfn, p_ptt, dbg_reset_reg_addr, old_reset_reg_val); -} + old_reset_reg_val = qed_rd(p_hwfn, p_ptt, reset_reg_addr); + new_reset_reg_val = + old_reset_reg_val & ~BIT(block->reset_reg_bit_offset); -static void qed_bus_set_framing_mode(struct qed_hwfn *p_hwfn, - struct qed_ptt *p_ptt, - enum dbg_bus_frame_modes mode) -{ - qed_wr(p_hwfn, p_ptt, DBG_REG_FRAMING_MODE, (u8)mode); + qed_wr(p_hwfn, p_ptt, reset_reg_addr, new_reset_reg_val); + qed_wr(p_hwfn, p_ptt, reset_reg_addr, old_reset_reg_val); } /* Enable / disable Debug Bus clients according to the specified mask @@ -2232,28 +1397,65 @@ static void qed_bus_enable_clients(struct qed_hwfn *p_hwfn, qed_wr(p_hwfn, p_ptt, DBG_REG_CLIENT_ENABLE, client_mask); } -static bool qed_is_mode_match(struct qed_hwfn *p_hwfn, u16 *modes_buf_offset) +static void qed_bus_config_dbg_line(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt, + enum block_id block_id, + u8 line_id, + u8 
enable_mask, + u8 right_shift, + u8 force_valid_mask, u8 force_frame_mask) +{ + const struct dbg_block_chip *block = + qed_get_dbg_block_per_chip(p_hwfn, block_id); + + qed_wr(p_hwfn, p_ptt, DWORDS_TO_BYTES(block->dbg_select_reg_addr), + line_id); + qed_wr(p_hwfn, p_ptt, DWORDS_TO_BYTES(block->dbg_dword_enable_reg_addr), + enable_mask); + qed_wr(p_hwfn, p_ptt, DWORDS_TO_BYTES(block->dbg_shift_reg_addr), + right_shift); + qed_wr(p_hwfn, p_ptt, DWORDS_TO_BYTES(block->dbg_force_valid_reg_addr), + force_valid_mask); + qed_wr(p_hwfn, p_ptt, DWORDS_TO_BYTES(block->dbg_force_frame_reg_addr), + force_frame_mask); +} + +/* Disable debug bus in all blocks */ +static void qed_bus_disable_blocks(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt) { struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; - bool arg1, arg2; - const u32 *ptr; - u8 tree_val; + u32 block_id; - /* Get next element from modes tree buffer */ - ptr = s_dbg_arrays[BIN_BUF_DBG_MODE_TREE].ptr; - tree_val = ((u8 *)ptr)[(*modes_buf_offset)++]; + /* Disable all blocks */ + for (block_id = 0; block_id < MAX_BLOCK_ID; block_id++) { + const struct dbg_block_chip *block_per_chip = + qed_get_dbg_block_per_chip(p_hwfn, + (enum block_id)block_id); - switch (tree_val) { - case INIT_MODE_OP_NOT: - return !qed_is_mode_match(p_hwfn, modes_buf_offset); - case INIT_MODE_OP_OR: - case INIT_MODE_OP_AND: - arg1 = qed_is_mode_match(p_hwfn, modes_buf_offset); - arg2 = qed_is_mode_match(p_hwfn, modes_buf_offset); - return (tree_val == INIT_MODE_OP_OR) ? (arg1 || - arg2) : (arg1 && arg2); - default: - return dev_data->mode_enable[tree_val - MAX_INIT_MODE_OPS] > 0; + if (GET_FIELD(block_per_chip->flags, + DBG_BLOCK_CHIP_IS_REMOVED) || + dev_data->block_in_reset[block_id]) + continue; + + /* Disable debug bus */ + if (GET_FIELD(block_per_chip->flags, + DBG_BLOCK_CHIP_HAS_DBG_BUS)) { + u32 dbg_en_addr = + block_per_chip->dbg_dword_enable_reg_addr; + u16 modes_buf_offset = + GET_FIELD(block_per_chip->dbg_bus_mode.data, + DBG_MODE_HDR_MODES_BUF_OFFSET); + bool eval_mode = + GET_FIELD(block_per_chip->dbg_bus_mode.data, + DBG_MODE_HDR_EVAL_MODE) > 0; + + if (!eval_mode || + qed_is_mode_match(p_hwfn, &modes_buf_offset)) + qed_wr(p_hwfn, p_ptt, + DWORDS_TO_BYTES(dbg_en_addr), + 0); + } } } @@ -2266,6 +1468,20 @@ static bool qed_grc_is_included(struct qed_hwfn *p_hwfn, return qed_grc_get_param(p_hwfn, grc_param) > 0; } +/* Returns the storm_id that matches the specified Storm letter, + * or MAX_DBG_STORMS if invalid storm letter. + */ +static enum dbg_storms qed_get_id_from_letter(char storm_letter) +{ + u8 storm_id; + + for (storm_id = 0; storm_id < MAX_DBG_STORMS; storm_id++) + if (s_storm_defs[storm_id].letter == storm_letter) + return (enum dbg_storms)storm_id; + + return MAX_DBG_STORMS; +} + /* Returns true of the specified Storm should be included in the dump, false * otherwise. 
*/ @@ -2281,14 +1497,20 @@ static bool qed_grc_is_storm_included(struct qed_hwfn *p_hwfn, static bool qed_grc_is_mem_included(struct qed_hwfn *p_hwfn, enum block_id block_id, u8 mem_group_id) { - struct block_defs *block = s_block_defs[block_id]; + const struct dbg_block *block; u8 i; - /* Check Storm match */ - if (block->associated_to_storm && - !qed_grc_is_storm_included(p_hwfn, - (enum dbg_storms)block->storm_id)) - return false; + block = get_dbg_block(p_hwfn, block_id); + + /* If the block is associated with a Storm, check Storm match */ + if (block->associated_storm_letter) { + enum dbg_storms associated_storm_id = + qed_get_id_from_letter(block->associated_storm_letter); + + if (associated_storm_id == MAX_DBG_STORMS || + !qed_grc_is_storm_included(p_hwfn, associated_storm_id)) + return false; + } for (i = 0; i < NUM_BIG_RAM_TYPES; i++) { struct big_ram_defs *big_ram = &s_big_ram_defs[i]; @@ -2310,6 +1532,8 @@ static bool qed_grc_is_mem_included(struct qed_hwfn *p_hwfn, case MEM_GROUP_CAU_SB: case MEM_GROUP_CAU_PI: return qed_grc_is_included(p_hwfn, DBG_GRC_PARAM_DUMP_CAU); + case MEM_GROUP_CAU_MEM_EXT: + return qed_grc_is_included(p_hwfn, DBG_GRC_PARAM_DUMP_CAU_EXT); case MEM_GROUP_QM_MEM: return qed_grc_is_included(p_hwfn, DBG_GRC_PARAM_DUMP_QM); case MEM_GROUP_CFC_MEM: @@ -2317,6 +1541,8 @@ static bool qed_grc_is_mem_included(struct qed_hwfn *p_hwfn, case MEM_GROUP_TASK_CFC_MEM: return qed_grc_is_included(p_hwfn, DBG_GRC_PARAM_DUMP_CFC) || qed_grc_is_included(p_hwfn, DBG_GRC_PARAM_DUMP_CM_CTX); + case MEM_GROUP_DORQ_MEM: + return qed_grc_is_included(p_hwfn, DBG_GRC_PARAM_DUMP_DORQ); case MEM_GROUP_IGU_MEM: case MEM_GROUP_IGU_MSIX: return qed_grc_is_included(p_hwfn, DBG_GRC_PARAM_DUMP_IGU); @@ -2362,64 +1588,104 @@ static void qed_grc_stall_storms(struct qed_hwfn *p_hwfn, msleep(STALL_DELAY_MS); } -/* Takes all blocks out of reset */ +/* Takes all blocks out of reset. If rbc_only is true, only RBC clients are + * taken out of reset. 
+ */ static void qed_grc_unreset_blocks(struct qed_hwfn *p_hwfn, - struct qed_ptt *p_ptt) + struct qed_ptt *p_ptt, bool rbc_only) { struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; - u32 reg_val[MAX_DBG_RESET_REGS] = { 0 }; - u32 block_id, i; + u8 chip_id = dev_data->chip_id; + u32 i; - /* Fill reset regs values */ - for (block_id = 0; block_id < MAX_BLOCK_ID; block_id++) { - struct block_defs *block = s_block_defs[block_id]; + /* Take RBCs out of reset */ + for (i = 0; i < ARRAY_SIZE(s_rbc_reset_defs); i++) + if (s_rbc_reset_defs[i].reset_val[dev_data->chip_id]) + qed_wr(p_hwfn, + p_ptt, + s_rbc_reset_defs[i].reset_reg_addr + + RESET_REG_UNRESET_OFFSET, + s_rbc_reset_defs[i].reset_val[chip_id]); - if (block->exists[dev_data->chip_id] && block->has_reset_bit && - block->unreset) - reg_val[block->reset_reg] |= - BIT(block->reset_bit_offset); - } + if (!rbc_only) { + u32 reg_val[NUM_DBG_RESET_REGS] = { 0 }; + u8 reset_reg_id; + u32 block_id; - /* Write reset registers */ - for (i = 0; i < MAX_DBG_RESET_REGS; i++) { - if (!s_reset_regs_defs[i].exists[dev_data->chip_id]) - continue; + /* Fill reset regs values */ + for (block_id = 0; block_id < NUM_PHYS_BLOCKS; block_id++) { + bool is_removed, has_reset_reg, unreset_before_dump; + const struct dbg_block_chip *block; + + block = qed_get_dbg_block_per_chip(p_hwfn, + (enum block_id) + block_id); + is_removed = + GET_FIELD(block->flags, DBG_BLOCK_CHIP_IS_REMOVED); + has_reset_reg = + GET_FIELD(block->flags, + DBG_BLOCK_CHIP_HAS_RESET_REG); + unreset_before_dump = + GET_FIELD(block->flags, + DBG_BLOCK_CHIP_UNRESET_BEFORE_DUMP); + + if (!is_removed && has_reset_reg && unreset_before_dump) + reg_val[block->reset_reg_id] |= + BIT(block->reset_reg_bit_offset); + } - reg_val[i] |= - s_reset_regs_defs[i].unreset_val[dev_data->chip_id]; + /* Write reset registers */ + for (reset_reg_id = 0; reset_reg_id < NUM_DBG_RESET_REGS; + reset_reg_id++) { + const struct dbg_reset_reg *reset_reg; + u32 reset_reg_addr; - if (reg_val[i]) - qed_wr(p_hwfn, - p_ptt, - s_reset_regs_defs[i].addr + - RESET_REG_UNRESET_OFFSET, reg_val[i]); + reset_reg = qed_get_dbg_reset_reg(p_hwfn, reset_reg_id); + + if (GET_FIELD + (reset_reg->data, DBG_RESET_REG_IS_REMOVED)) + continue; + + if (reg_val[reset_reg_id]) { + reset_reg_addr = + GET_FIELD(reset_reg->data, + DBG_RESET_REG_ADDR); + qed_wr(p_hwfn, + p_ptt, + DWORDS_TO_BYTES(reset_reg_addr) + + RESET_REG_UNRESET_OFFSET, + reg_val[reset_reg_id]); + } + } } } /* Returns the attention block data of the specified block */ static const struct dbg_attn_block_type_data * -qed_get_block_attn_data(enum block_id block_id, enum dbg_attn_type attn_type) +qed_get_block_attn_data(struct qed_hwfn *p_hwfn, + enum block_id block_id, enum dbg_attn_type attn_type) { const struct dbg_attn_block *base_attn_block_arr = - (const struct dbg_attn_block *) - s_dbg_arrays[BIN_BUF_DBG_ATTN_BLOCKS].ptr; + (const struct dbg_attn_block *) + p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_BLOCKS].ptr; return &base_attn_block_arr[block_id].per_type_data[attn_type]; } /* Returns the attention registers of the specified block */ static const struct dbg_attn_reg * -qed_get_block_attn_regs(enum block_id block_id, enum dbg_attn_type attn_type, +qed_get_block_attn_regs(struct qed_hwfn *p_hwfn, + enum block_id block_id, enum dbg_attn_type attn_type, u8 *num_attn_regs) { const struct dbg_attn_block_type_data *block_type_data = - qed_get_block_attn_data(block_id, attn_type); + qed_get_block_attn_data(p_hwfn, block_id, attn_type); *num_attn_regs = block_type_data->num_regs; - return 
&((const struct dbg_attn_reg *) - s_dbg_arrays[BIN_BUF_DBG_ATTN_REGS].ptr)[block_type_data-> - regs_offset]; + return (const struct dbg_attn_reg *) + p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_REGS].ptr + + block_type_data->regs_offset; } /* For each block, clear the status of all parities */ @@ -2431,11 +1697,12 @@ static void qed_grc_clear_all_prty(struct qed_hwfn *p_hwfn, u8 reg_idx, num_attn_regs; u32 block_id; - for (block_id = 0; block_id < MAX_BLOCK_ID; block_id++) { + for (block_id = 0; block_id < NUM_PHYS_BLOCKS; block_id++) { if (dev_data->block_in_reset[block_id]) continue; - attn_reg_arr = qed_get_block_attn_regs((enum block_id)block_id, + attn_reg_arr = qed_get_block_attn_regs(p_hwfn, + (enum block_id)block_id, ATTN_TYPE_PARITY, &num_attn_regs); @@ -2463,22 +1730,20 @@ static void qed_grc_clear_all_prty(struct qed_hwfn *p_hwfn, } /* Dumps GRC registers section header. Returns the dumped size in dwords. - * The following parameters are dumped: + * the following parameters are dumped: * - count: no. of dumped entries * - split_type: split type * - split_id: split ID (dumped only if split_id != SPLIT_TYPE_NONE) - * - param_name: user parameter value (dumped only if param_name != NULL - * and param_val != NULL). + * - reg_type_name: register type name (dumped only if reg_type_name != NULL) */ static u32 qed_grc_dump_regs_hdr(u32 *dump_buf, bool dump, u32 num_reg_entries, enum init_split_types split_type, - u8 split_id, - const char *param_name, const char *param_val) + u8 split_id, const char *reg_type_name) { u8 num_params = 2 + - (split_type != SPLIT_TYPE_NONE ? 1 : 0) + (param_name ? 1 : 0); + (split_type != SPLIT_TYPE_NONE ? 1 : 0) + (reg_type_name ? 1 : 0); u32 offset = 0; offset += qed_dump_section_hdr(dump_buf + offset, @@ -2491,9 +1756,9 @@ static u32 qed_grc_dump_regs_hdr(u32 *dump_buf, if (split_type != SPLIT_TYPE_NONE) offset += qed_dump_num_param(dump_buf + offset, dump, "id", split_id); - if (param_name && param_val) + if (reg_type_name) offset += qed_dump_str_param(dump_buf + offset, - dump, param_name, param_val); + dump, "type", reg_type_name); return offset; } @@ -2523,20 +1788,11 @@ static u32 qed_grc_dump_addr_range(struct qed_hwfn *p_hwfn, { struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; u8 port_id = 0, pf_id = 0, vf_id = 0, fid = 0; + bool read_using_dmae = false; + u32 thresh; - if (!dump) - return len; - - /* Print log if needed */ - dev_data->num_regs_read += len; - if (dev_data->num_regs_read >= - s_platform_defs[dev_data->platform_id].log_thresh) { - DP_VERBOSE(p_hwfn, - QED_MSG_DEBUG, - "Dumping %d registers...\n", - dev_data->num_regs_read); - dev_data->num_regs_read = 0; - } + if (!dump) + return len; switch (split_type) { case SPLIT_TYPE_PORT: @@ -2558,38 +1814,77 @@ static u32 qed_grc_dump_addr_range(struct qed_hwfn *p_hwfn, } /* Try reading using DMAE */ - if (dev_data->use_dmae && split_type == SPLIT_TYPE_NONE && - (len >= s_platform_defs[dev_data->platform_id].dmae_thresh || - wide_bus)) { - if (!qed_dmae_grc2host(p_hwfn, p_ptt, DWORDS_TO_BYTES(addr), - (u64)(uintptr_t)(dump_buf), len, NULL)) - return len; - dev_data->use_dmae = 0; - DP_VERBOSE(p_hwfn, - QED_MSG_DEBUG, - "Failed reading from chip using DMAE, using GRC instead\n"); + if (dev_data->use_dmae && split_type != SPLIT_TYPE_VF && + (len >= s_hw_type_defs[dev_data->hw_type].dmae_thresh || + (PROTECT_WIDE_BUS && wide_bus))) { + struct qed_dmae_params dmae_params; + + /* Set DMAE params */ + memset(&dmae_params, 0, sizeof(dmae_params)); + SET_FIELD(dmae_params.flags, 
QED_DMAE_PARAMS_COMPLETION_DST, 1); + switch (split_type) { + case SPLIT_TYPE_PORT: + SET_FIELD(dmae_params.flags, QED_DMAE_PARAMS_PORT_VALID, + 1); + dmae_params.port_id = port_id; + break; + case SPLIT_TYPE_PF: + SET_FIELD(dmae_params.flags, + QED_DMAE_PARAMS_SRC_PF_VALID, 1); + dmae_params.src_pfid = pf_id; + break; + case SPLIT_TYPE_PORT_PF: + SET_FIELD(dmae_params.flags, QED_DMAE_PARAMS_PORT_VALID, + 1); + SET_FIELD(dmae_params.flags, + QED_DMAE_PARAMS_SRC_PF_VALID, 1); + dmae_params.port_id = port_id; + dmae_params.src_pfid = pf_id; + break; + default: + break; + } + + /* Execute DMAE command */ + read_using_dmae = !qed_dmae_grc2host(p_hwfn, + p_ptt, + DWORDS_TO_BYTES(addr), + (u64)(uintptr_t)(dump_buf), + len, &dmae_params); + if (!read_using_dmae) { + dev_data->use_dmae = 0; + DP_VERBOSE(p_hwfn, + QED_MSG_DEBUG, + "Failed reading from chip using DMAE, using GRC instead\n"); + } } + if (read_using_dmae) + goto print_log; + /* If not read using DMAE, read using GRC */ /* Set pretend */ - if (split_type != dev_data->pretend.split_type || split_id != - dev_data->pretend.split_id) { + if (split_type != dev_data->pretend.split_type || + split_id != dev_data->pretend.split_id) { switch (split_type) { case SPLIT_TYPE_PORT: qed_port_pretend(p_hwfn, p_ptt, port_id); break; case SPLIT_TYPE_PF: - fid = pf_id << PXP_PRETEND_CONCRETE_FID_PFID_SHIFT; + fid = FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, + pf_id); qed_fid_pretend(p_hwfn, p_ptt, fid); break; case SPLIT_TYPE_PORT_PF: - fid = pf_id << PXP_PRETEND_CONCRETE_FID_PFID_SHIFT; + fid = FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, + pf_id); qed_port_fid_pretend(p_hwfn, p_ptt, port_id, fid); break; case SPLIT_TYPE_VF: - fid = BIT(PXP_PRETEND_CONCRETE_FID_VFVALID_SHIFT) | - (vf_id << PXP_PRETEND_CONCRETE_FID_VFID_SHIFT); + fid = FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_VFVALID, 1) + | FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_VFID, + vf_id); qed_fid_pretend(p_hwfn, p_ptt, fid); break; default: @@ -2603,6 +1898,16 @@ static u32 qed_grc_dump_addr_range(struct qed_hwfn *p_hwfn, /* Read registers using GRC */ qed_read_regs(p_hwfn, p_ptt, dump_buf, addr, len); +print_log: + /* Print log */ + dev_data->num_regs_read += len; + thresh = s_hw_type_defs[dev_data->hw_type].log_thresh; + if ((dev_data->num_regs_read / thresh) > + ((dev_data->num_regs_read - len) / thresh)) + DP_VERBOSE(p_hwfn, + QED_MSG_DEBUG, + "Dumped %d registers...\n", dev_data->num_regs_read); + return len; } @@ -2687,7 +1992,7 @@ static u32 qed_grc_dump_reg_entry_skip(struct qed_hwfn *p_hwfn, /* Dumps GRC registers entries. Returns the dumped size in dwords. 
*/ static u32 qed_grc_dump_regs_entries(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, - struct dbg_array input_regs_arr, + struct virt_mem_desc input_regs_arr, u32 *dump_buf, bool dump, enum init_split_types split_type, @@ -2700,10 +2005,10 @@ static u32 qed_grc_dump_regs_entries(struct qed_hwfn *p_hwfn, *num_dumped_reg_entries = 0; - while (input_offset < input_regs_arr.size_in_dwords) { + while (input_offset < BYTES_TO_DWORDS(input_regs_arr.size)) { const struct dbg_dump_cond_hdr *cond_hdr = (const struct dbg_dump_cond_hdr *) - &input_regs_arr.ptr[input_offset++]; + input_regs_arr.ptr + input_offset++; u16 modes_buf_offset; bool eval_mode; @@ -2726,7 +2031,7 @@ static u32 qed_grc_dump_regs_entries(struct qed_hwfn *p_hwfn, for (i = 0; i < cond_hdr->data_size; i++, input_offset++) { const struct dbg_dump_reg *reg = (const struct dbg_dump_reg *) - &input_regs_arr.ptr[input_offset]; + input_regs_arr.ptr + input_offset; u32 addr, len; bool wide_bus; @@ -2751,14 +2056,12 @@ static u32 qed_grc_dump_regs_entries(struct qed_hwfn *p_hwfn, /* Dumps GRC registers entries. Returns the dumped size in dwords. */ static u32 qed_grc_dump_split_data(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, - struct dbg_array input_regs_arr, + struct virt_mem_desc input_regs_arr, u32 *dump_buf, bool dump, bool block_enable[MAX_BLOCK_ID], enum init_split_types split_type, - u8 split_id, - const char *param_name, - const char *param_val) + u8 split_id, const char *reg_type_name) { struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; enum init_split_types hdr_split_type = split_type; @@ -2776,7 +2079,7 @@ static u32 qed_grc_dump_split_data(struct qed_hwfn *p_hwfn, false, 0, hdr_split_type, - hdr_split_id, param_name, param_val); + hdr_split_id, reg_type_name); /* Dump registers */ offset += qed_grc_dump_regs_entries(p_hwfn, @@ -2795,7 +2098,7 @@ static u32 qed_grc_dump_split_data(struct qed_hwfn *p_hwfn, dump, num_dumped_reg_entries, hdr_split_type, - hdr_split_id, param_name, param_val); + hdr_split_id, reg_type_name); return num_dumped_reg_entries > 0 ? 
offset : 0; } @@ -2808,32 +2111,33 @@ static u32 qed_grc_dump_registers(struct qed_hwfn *p_hwfn, u32 *dump_buf, bool dump, bool block_enable[MAX_BLOCK_ID], - const char *param_name, const char *param_val) + const char *reg_type_name) { + struct virt_mem_desc *dbg_buf = + &p_hwfn->dbg_arrays[BIN_BUF_DBG_DUMP_REG]; struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; u32 offset = 0, input_offset = 0; - u16 fid; - while (input_offset < - s_dbg_arrays[BIN_BUF_DBG_DUMP_REG].size_in_dwords) { + + while (input_offset < BYTES_TO_DWORDS(dbg_buf->size)) { const struct dbg_dump_split_hdr *split_hdr; - struct dbg_array curr_input_regs_arr; + struct virt_mem_desc curr_input_regs_arr; enum init_split_types split_type; u16 split_count = 0; u32 split_data_size; u8 split_id; split_hdr = - (const struct dbg_dump_split_hdr *) - &s_dbg_arrays[BIN_BUF_DBG_DUMP_REG].ptr[input_offset++]; + (const struct dbg_dump_split_hdr *) + dbg_buf->ptr + input_offset++; split_type = - GET_FIELD(split_hdr->hdr, - DBG_DUMP_SPLIT_HDR_SPLIT_TYPE_ID); - split_data_size = - GET_FIELD(split_hdr->hdr, - DBG_DUMP_SPLIT_HDR_DATA_SIZE); + GET_FIELD(split_hdr->hdr, + DBG_DUMP_SPLIT_HDR_SPLIT_TYPE_ID); + split_data_size = GET_FIELD(split_hdr->hdr, + DBG_DUMP_SPLIT_HDR_DATA_SIZE); curr_input_regs_arr.ptr = - &s_dbg_arrays[BIN_BUF_DBG_DUMP_REG].ptr[input_offset]; - curr_input_regs_arr.size_in_dwords = split_data_size; + (u32 *)p_hwfn->dbg_arrays[BIN_BUF_DBG_DUMP_REG].ptr + + input_offset; + curr_input_regs_arr.size = DWORDS_TO_BYTES(split_data_size); switch (split_type) { case SPLIT_TYPE_NONE: @@ -2861,16 +2165,16 @@ static u32 qed_grc_dump_registers(struct qed_hwfn *p_hwfn, dump, block_enable, split_type, split_id, - param_name, - param_val); + reg_type_name); input_offset += split_data_size; } /* Cancel pretends (pretend to original PF) */ if (dump) { - fid = p_hwfn->rel_pf_id << PXP_PRETEND_CONCRETE_FID_PFID_SHIFT; - qed_fid_pretend(p_hwfn, p_ptt, fid); + qed_fid_pretend(p_hwfn, p_ptt, + FIELD_VALUE(PXP_PRETEND_CONCRETE_FID_PFID, + p_hwfn->rel_pf_id)); dev_data->pretend.split_type = SPLIT_TYPE_NONE; dev_data->pretend.split_id = 0; } @@ -2883,26 +2187,32 @@ static u32 qed_grc_dump_reset_regs(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 *dump_buf, bool dump) { - struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; - u32 i, offset = 0, num_regs = 0; + u32 offset = 0, num_regs = 0; + u8 reset_reg_id; /* Calculate header size */ offset += qed_grc_dump_regs_hdr(dump_buf, - false, 0, - SPLIT_TYPE_NONE, 0, NULL, NULL); + false, + 0, SPLIT_TYPE_NONE, 0, "RESET_REGS"); /* Write reset registers */ - for (i = 0; i < MAX_DBG_RESET_REGS; i++) { - if (!s_reset_regs_defs[i].exists[dev_data->chip_id]) + for (reset_reg_id = 0; reset_reg_id < NUM_DBG_RESET_REGS; + reset_reg_id++) { + const struct dbg_reset_reg *reset_reg; + u32 reset_reg_addr; + + reset_reg = qed_get_dbg_reset_reg(p_hwfn, reset_reg_id); + + if (GET_FIELD(reset_reg->data, DBG_RESET_REG_IS_REMOVED)) continue; + reset_reg_addr = GET_FIELD(reset_reg->data, DBG_RESET_REG_ADDR); offset += qed_grc_dump_reg_entry(p_hwfn, p_ptt, dump_buf + offset, dump, - BYTES_TO_DWORDS - (s_reset_regs_defs[i].addr), 1, - false, SPLIT_TYPE_NONE, 0); + reset_reg_addr, + 1, false, SPLIT_TYPE_NONE, 0); num_regs++; } @@ -2910,7 +2220,7 @@ static u32 qed_grc_dump_reset_regs(struct qed_hwfn *p_hwfn, if (dump) qed_grc_dump_regs_hdr(dump_buf, true, num_regs, SPLIT_TYPE_NONE, - 0, NULL, NULL); + 0, "RESET_REGS"); return offset; } @@ -2923,21 +2233,23 @@ static u32 qed_grc_dump_modified_regs(struct qed_hwfn *p_hwfn, 
u32 *dump_buf, bool dump) { struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; - u32 block_id, offset = 0, num_reg_entries = 0; + u32 block_id, offset = 0, stall_regs_offset; const struct dbg_attn_reg *attn_reg_arr; u8 storm_id, reg_idx, num_attn_regs; + u32 num_reg_entries = 0; - /* Calculate header size */ + /* Write empty header for attention registers */ offset += qed_grc_dump_regs_hdr(dump_buf, - false, 0, SPLIT_TYPE_NONE, - 0, NULL, NULL); + false, + 0, SPLIT_TYPE_NONE, 0, "ATTN_REGS"); /* Write parity registers */ - for (block_id = 0; block_id < MAX_BLOCK_ID; block_id++) { + for (block_id = 0; block_id < NUM_PHYS_BLOCKS; block_id++) { if (dev_data->block_in_reset[block_id] && dump) continue; - attn_reg_arr = qed_get_block_attn_regs((enum block_id)block_id, + attn_reg_arr = qed_get_block_attn_regs(p_hwfn, + (enum block_id)block_id, ATTN_TYPE_PARITY, &num_attn_regs); @@ -2980,16 +2292,29 @@ static u32 qed_grc_dump_modified_regs(struct qed_hwfn *p_hwfn, } } + /* Overwrite header for attention registers */ + if (dump) + qed_grc_dump_regs_hdr(dump_buf, + true, + num_reg_entries, + SPLIT_TYPE_NONE, 0, "ATTN_REGS"); + + /* Write empty header for stall registers */ + stall_regs_offset = offset; + offset += qed_grc_dump_regs_hdr(dump_buf, + false, 0, SPLIT_TYPE_NONE, 0, "REGS"); + /* Write Storm stall status registers */ - for (storm_id = 0; storm_id < MAX_DBG_STORMS; storm_id++) { + for (storm_id = 0, num_reg_entries = 0; storm_id < MAX_DBG_STORMS; + storm_id++) { struct storm_defs *storm = &s_storm_defs[storm_id]; u32 addr; - if (dev_data->block_in_reset[storm->block_id] && dump) + if (dev_data->block_in_reset[storm->sem_block_id] && dump) continue; addr = - BYTES_TO_DWORDS(s_storm_defs[storm_id].sem_fast_mem_addr + + BYTES_TO_DWORDS(storm->sem_fast_mem_addr + SEM_FAST_REG_STALLED); offset += qed_grc_dump_reg_entry(p_hwfn, p_ptt, @@ -3001,12 +2326,12 @@ static u32 qed_grc_dump_modified_regs(struct qed_hwfn *p_hwfn, num_reg_entries++; } - /* Write header */ + /* Overwrite header for stall registers */ if (dump) - qed_grc_dump_regs_hdr(dump_buf, + qed_grc_dump_regs_hdr(dump_buf + stall_regs_offset, true, - num_reg_entries, SPLIT_TYPE_NONE, - 0, NULL, NULL); + num_reg_entries, + SPLIT_TYPE_NONE, 0, "REGS"); return offset; } @@ -3019,8 +2344,7 @@ static u32 qed_grc_dump_special_regs(struct qed_hwfn *p_hwfn, u32 offset = 0, addr; offset += qed_grc_dump_regs_hdr(dump_buf, - dump, 2, SPLIT_TYPE_NONE, 0, - NULL, NULL); + dump, 2, SPLIT_TYPE_NONE, 0, "REGS"); /* Dump R/TDIF_REG_DEBUG_ERROR_INFO_SIZE (every 8'th register should be * skipped). 
@@ -3068,8 +2392,7 @@ static u32 qed_grc_dump_mem_hdr(struct qed_hwfn *p_hwfn, u32 len, u32 bit_width, bool packed, - const char *mem_group, - bool is_storm, char storm_letter) + const char *mem_group, char storm_letter) { u8 num_params = 3; u32 offset = 0; @@ -3090,7 +2413,7 @@ static u32 qed_grc_dump_mem_hdr(struct qed_hwfn *p_hwfn, if (name) { /* Dump name */ - if (is_storm) { + if (storm_letter) { strcpy(buf, "?STORM_"); buf[0] = storm_letter; strcpy(buf + strlen(buf), name); @@ -3122,7 +2445,7 @@ static u32 qed_grc_dump_mem_hdr(struct qed_hwfn *p_hwfn, dump, "packed", 1); /* Dump reg type */ - if (is_storm) { + if (storm_letter) { strcpy(buf, "?STORM_"); buf[0] = storm_letter; strcpy(buf + strlen(buf), mem_group); @@ -3149,8 +2472,7 @@ static u32 qed_grc_dump_mem(struct qed_hwfn *p_hwfn, bool wide_bus, u32 bit_width, bool packed, - const char *mem_group, - bool is_storm, char storm_letter) + const char *mem_group, char storm_letter) { u32 offset = 0; @@ -3161,8 +2483,7 @@ static u32 qed_grc_dump_mem(struct qed_hwfn *p_hwfn, addr, len, bit_width, - packed, - mem_group, is_storm, storm_letter); + packed, mem_group, storm_letter); offset += qed_grc_dump_addr_range(p_hwfn, p_ptt, dump_buf + offset, @@ -3175,20 +2496,21 @@ static u32 qed_grc_dump_mem(struct qed_hwfn *p_hwfn, /* Dumps GRC memories entries. Returns the dumped size in dwords. */ static u32 qed_grc_dump_mem_entries(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, - struct dbg_array input_mems_arr, + struct virt_mem_desc input_mems_arr, u32 *dump_buf, bool dump) { u32 i, offset = 0, input_offset = 0; bool mode_match = true; - while (input_offset < input_mems_arr.size_in_dwords) { + while (input_offset < BYTES_TO_DWORDS(input_mems_arr.size)) { const struct dbg_dump_cond_hdr *cond_hdr; u16 modes_buf_offset; u32 num_entries; bool eval_mode; - cond_hdr = (const struct dbg_dump_cond_hdr *) - &input_mems_arr.ptr[input_offset++]; + cond_hdr = + (const struct dbg_dump_cond_hdr *)input_mems_arr.ptr + + input_offset++; num_entries = cond_hdr->data_size / MEM_DUMP_ENTRY_SIZE_DWORDS; /* Check required mode */ @@ -3210,24 +2532,25 @@ static u32 qed_grc_dump_mem_entries(struct qed_hwfn *p_hwfn, for (i = 0; i < num_entries; i++, input_offset += MEM_DUMP_ENTRY_SIZE_DWORDS) { const struct dbg_dump_mem *mem = - (const struct dbg_dump_mem *) - &input_mems_arr.ptr[input_offset]; - u8 mem_group_id = GET_FIELD(mem->dword0, - DBG_DUMP_MEM_MEM_GROUP_ID); - bool is_storm = false, mem_wide_bus; - enum dbg_grc_params grc_param; - char storm_letter = 'a'; - enum block_id block_id; + (const struct dbg_dump_mem *)((u32 *) + input_mems_arr.ptr + + input_offset); + const struct dbg_block *block; + char storm_letter = 0; u32 mem_addr, mem_len; + bool mem_wide_bus; + u8 mem_group_id; + mem_group_id = GET_FIELD(mem->dword0, + DBG_DUMP_MEM_MEM_GROUP_ID); if (mem_group_id >= MEM_GROUPS_NUM) { DP_NOTICE(p_hwfn, "Invalid mem_group_id\n"); return 0; } - block_id = (enum block_id)cond_hdr->block_id; if (!qed_grc_is_mem_included(p_hwfn, - block_id, + (enum block_id) + cond_hdr->block_id, mem_group_id)) continue; @@ -3236,42 +2559,14 @@ static u32 qed_grc_dump_mem_entries(struct qed_hwfn *p_hwfn, mem_wide_bus = GET_FIELD(mem->dword1, DBG_DUMP_MEM_WIDE_BUS); - /* Update memory length for CCFC/TCFC memories - * according to number of LCIDs/LTIDs. 
- */ - if (mem_group_id == MEM_GROUP_CONN_CFC_MEM) { - if (mem_len % MAX_LCIDS) { - DP_NOTICE(p_hwfn, - "Invalid CCFC connection memory size\n"); - return 0; - } - - grc_param = DBG_GRC_PARAM_NUM_LCIDS; - mem_len = qed_grc_get_param(p_hwfn, grc_param) * - (mem_len / MAX_LCIDS); - } else if (mem_group_id == MEM_GROUP_TASK_CFC_MEM) { - if (mem_len % MAX_LTIDS) { - DP_NOTICE(p_hwfn, - "Invalid TCFC task memory size\n"); - return 0; - } - - grc_param = DBG_GRC_PARAM_NUM_LTIDS; - mem_len = qed_grc_get_param(p_hwfn, grc_param) * - (mem_len / MAX_LTIDS); - } + block = get_dbg_block(p_hwfn, + cond_hdr->block_id); - /* If memory is associated with Storm, update Storm - * details. + /* If memory is associated with Storm, + * update storm details */ - if (s_block_defs - [cond_hdr->block_id]->associated_to_storm) { - is_storm = true; - storm_letter = - s_storm_defs[s_block_defs - [cond_hdr->block_id]-> - storm_id].letter; - } + if (block->associated_storm_letter) + storm_letter = block->associated_storm_letter; /* Dump memory */ offset += qed_grc_dump_mem(p_hwfn, @@ -3285,7 +2580,6 @@ static u32 qed_grc_dump_mem_entries(struct qed_hwfn *p_hwfn, 0, false, s_mem_group_names[mem_group_id], - is_storm, storm_letter); } } @@ -3300,26 +2594,25 @@ static u32 qed_grc_dump_memories(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 *dump_buf, bool dump) { + struct virt_mem_desc *dbg_buf = + &p_hwfn->dbg_arrays[BIN_BUF_DBG_DUMP_MEM]; u32 offset = 0, input_offset = 0; - while (input_offset < - s_dbg_arrays[BIN_BUF_DBG_DUMP_MEM].size_in_dwords) { + while (input_offset < BYTES_TO_DWORDS(dbg_buf->size)) { const struct dbg_dump_split_hdr *split_hdr; - struct dbg_array curr_input_mems_arr; + struct virt_mem_desc curr_input_mems_arr; enum init_split_types split_type; u32 split_data_size; - split_hdr = (const struct dbg_dump_split_hdr *) - &s_dbg_arrays[BIN_BUF_DBG_DUMP_MEM].ptr[input_offset++]; - split_type = - GET_FIELD(split_hdr->hdr, - DBG_DUMP_SPLIT_HDR_SPLIT_TYPE_ID); - split_data_size = - GET_FIELD(split_hdr->hdr, - DBG_DUMP_SPLIT_HDR_DATA_SIZE); - curr_input_mems_arr.ptr = - &s_dbg_arrays[BIN_BUF_DBG_DUMP_MEM].ptr[input_offset]; - curr_input_mems_arr.size_in_dwords = split_data_size; + split_hdr = + (const struct dbg_dump_split_hdr *)dbg_buf->ptr + + input_offset++; + split_type = GET_FIELD(split_hdr->hdr, + DBG_DUMP_SPLIT_HDR_SPLIT_TYPE_ID); + split_data_size = GET_FIELD(split_hdr->hdr, + DBG_DUMP_SPLIT_HDR_DATA_SIZE); + curr_input_mems_arr.ptr = (u32 *)dbg_buf->ptr + input_offset; + curr_input_mems_arr.size = DWORDS_TO_BYTES(split_data_size); if (split_type == SPLIT_TYPE_NONE) offset += qed_grc_dump_mem_entries(p_hwfn, @@ -3347,17 +2640,19 @@ static u32 qed_grc_dump_ctx_data(struct qed_hwfn *p_hwfn, bool dump, const char *name, u32 num_lids, - u32 lid_size, - u32 rd_reg_addr, - u8 storm_id) + enum cm_ctx_types ctx_type, u8 storm_id) { + struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; struct storm_defs *storm = &s_storm_defs[storm_id]; - u32 i, lid, total_size, offset = 0; + u32 i, lid, lid_size, total_size; + u32 rd_reg_addr, offset = 0; + + /* Convert quad-regs to dwords */ + lid_size = storm->cm_ctx_lid_sizes[dev_data->chip_id][ctx_type] * 4; if (!lid_size) return 0; - lid_size *= BYTES_IN_DWORD; total_size = num_lids * lid_size; offset += qed_grc_dump_mem_hdr(p_hwfn, @@ -3367,18 +2662,26 @@ static u32 qed_grc_dump_ctx_data(struct qed_hwfn *p_hwfn, 0, total_size, lid_size * 32, - false, name, true, storm->letter); + false, name, storm->letter); if (!dump) return offset + total_size; + rd_reg_addr = 
BYTES_TO_DWORDS(storm->cm_ctx_rd_addr[ctx_type]); + /* Dump context data */ for (lid = 0; lid < num_lids; lid++) { - for (i = 0; i < lid_size; i++, offset++) { + for (i = 0; i < lid_size; i++) { qed_wr(p_hwfn, p_ptt, storm->cm_ctx_wr_addr, (i << 9) | lid); - *(dump_buf + offset) = qed_rd(p_hwfn, - p_ptt, rd_reg_addr); + offset += qed_grc_dump_addr_range(p_hwfn, + p_ptt, + dump_buf + offset, + dump, + rd_reg_addr, + 1, + false, + SPLIT_TYPE_NONE, 0); } } @@ -3389,115 +2692,126 @@ static u32 qed_grc_dump_ctx_data(struct qed_hwfn *p_hwfn, static u32 qed_grc_dump_ctx(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 *dump_buf, bool dump) { - enum dbg_grc_params grc_param; u32 offset = 0; u8 storm_id; for (storm_id = 0; storm_id < MAX_DBG_STORMS; storm_id++) { - struct storm_defs *storm = &s_storm_defs[storm_id]; - if (!qed_grc_is_storm_included(p_hwfn, (enum dbg_storms)storm_id)) continue; /* Dump Conn AG context size */ - grc_param = DBG_GRC_PARAM_NUM_LCIDS; - offset += - qed_grc_dump_ctx_data(p_hwfn, - p_ptt, - dump_buf + offset, - dump, - "CONN_AG_CTX", - qed_grc_get_param(p_hwfn, - grc_param), - storm->cm_conn_ag_ctx_lid_size, - storm->cm_conn_ag_ctx_rd_addr, - storm_id); + offset += qed_grc_dump_ctx_data(p_hwfn, + p_ptt, + dump_buf + offset, + dump, + "CONN_AG_CTX", + NUM_OF_LCIDS, + CM_CTX_CONN_AG, storm_id); /* Dump Conn ST context size */ - grc_param = DBG_GRC_PARAM_NUM_LCIDS; - offset += - qed_grc_dump_ctx_data(p_hwfn, - p_ptt, - dump_buf + offset, - dump, - "CONN_ST_CTX", - qed_grc_get_param(p_hwfn, - grc_param), - storm->cm_conn_st_ctx_lid_size, - storm->cm_conn_st_ctx_rd_addr, - storm_id); + offset += qed_grc_dump_ctx_data(p_hwfn, + p_ptt, + dump_buf + offset, + dump, + "CONN_ST_CTX", + NUM_OF_LCIDS, + CM_CTX_CONN_ST, storm_id); /* Dump Task AG context size */ - grc_param = DBG_GRC_PARAM_NUM_LTIDS; - offset += - qed_grc_dump_ctx_data(p_hwfn, - p_ptt, - dump_buf + offset, - dump, - "TASK_AG_CTX", - qed_grc_get_param(p_hwfn, - grc_param), - storm->cm_task_ag_ctx_lid_size, - storm->cm_task_ag_ctx_rd_addr, - storm_id); + offset += qed_grc_dump_ctx_data(p_hwfn, + p_ptt, + dump_buf + offset, + dump, + "TASK_AG_CTX", + NUM_OF_LTIDS, + CM_CTX_TASK_AG, storm_id); /* Dump Task ST context size */ - grc_param = DBG_GRC_PARAM_NUM_LTIDS; - offset += - qed_grc_dump_ctx_data(p_hwfn, - p_ptt, - dump_buf + offset, - dump, - "TASK_ST_CTX", - qed_grc_get_param(p_hwfn, - grc_param), - storm->cm_task_st_ctx_lid_size, - storm->cm_task_st_ctx_rd_addr, - storm_id); + offset += qed_grc_dump_ctx_data(p_hwfn, + p_ptt, + dump_buf + offset, + dump, + "TASK_ST_CTX", + NUM_OF_LTIDS, + CM_CTX_TASK_ST, storm_id); } return offset; } -/* Dumps GRC IORs data. Returns the dumped size in dwords. */ -static u32 qed_grc_dump_iors(struct qed_hwfn *p_hwfn, - struct qed_ptt *p_ptt, u32 *dump_buf, bool dump) -{ - char buf[10] = "IOR_SET_?"; - u32 addr, offset = 0; - u8 storm_id, set_id; +#define VFC_STATUS_RESP_READY_BIT 0 +#define VFC_STATUS_BUSY_BIT 1 +#define VFC_STATUS_SENDING_CMD_BIT 2 - for (storm_id = 0; storm_id < MAX_DBG_STORMS; storm_id++) { - struct storm_defs *storm = &s_storm_defs[storm_id]; +#define VFC_POLLING_DELAY_MS 1 +#define VFC_POLLING_COUNT 20 - if (!qed_grc_is_storm_included(p_hwfn, - (enum dbg_storms)storm_id)) - continue; +/* Reads data from VFC. Returns the number of dwords read (0 on error). + * Sizes are specified in dwords. 
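+ *
+ * Flow, as implemented below: the command and address are written through
+ * SEM_FAST_REG_VFC_DATA_WR and SEM_FAST_REG_VFC_ADDR, then each response
+ * dword is polled for via the RESP_READY bit of SEM_FAST_REG_VFC_STATUS
+ * (up to VFC_POLLING_COUNT iterations) before being read from
+ * SEM_FAST_REG_VFC_DATA_RD.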
+ */ +static u32 qed_grc_dump_read_from_vfc(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt, + struct storm_defs *storm, + u32 *cmd_data, + u32 cmd_size, + u32 *addr_data, + u32 addr_size, + u32 resp_size, u32 *dump_buf) +{ + struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; + u32 vfc_status, polling_ms, polling_count = 0, i; + u32 reg_addr, sem_base; + bool is_ready = false; + + sem_base = storm->sem_fast_mem_addr; + polling_ms = VFC_POLLING_DELAY_MS * + s_hw_type_defs[dev_data->hw_type].delay_factor; + + /* Write VFC command */ + ARR_REG_WR(p_hwfn, + p_ptt, + sem_base + SEM_FAST_REG_VFC_DATA_WR, + cmd_data, cmd_size); + + /* Write VFC address */ + ARR_REG_WR(p_hwfn, + p_ptt, + sem_base + SEM_FAST_REG_VFC_ADDR, + addr_data, addr_size); + + /* Read response */ + for (i = 0; i < resp_size; i++) { + /* Poll until ready */ + do { + reg_addr = sem_base + SEM_FAST_REG_VFC_STATUS; + qed_grc_dump_addr_range(p_hwfn, + p_ptt, + &vfc_status, + true, + BYTES_TO_DWORDS(reg_addr), + 1, + false, SPLIT_TYPE_NONE, 0); + is_ready = vfc_status & BIT(VFC_STATUS_RESP_READY_BIT); + + if (!is_ready) { + if (polling_count++ == VFC_POLLING_COUNT) + return 0; - for (set_id = 0; set_id < NUM_IOR_SETS; set_id++) { - addr = BYTES_TO_DWORDS(storm->sem_fast_mem_addr + - SEM_FAST_REG_STORM_REG_FILE) + - IOR_SET_OFFSET(set_id); - if (strlen(buf) > 0) - buf[strlen(buf) - 1] = '0' + set_id; - offset += qed_grc_dump_mem(p_hwfn, - p_ptt, - dump_buf + offset, - dump, - buf, - addr, - IORS_PER_SET, - false, - 32, - false, - "ior", - true, - storm->letter); - } + msleep(polling_ms); + } + } while (!is_ready); + + reg_addr = sem_base + SEM_FAST_REG_VFC_DATA_RD; + qed_grc_dump_addr_range(p_hwfn, + p_ptt, + dump_buf + i, + true, + BYTES_TO_DWORDS(reg_addr), + 1, false, SPLIT_TYPE_NONE, 0); } - return offset; + return resp_size; } /* Dump VFC CAM. Returns the dumped size in dwords. 
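+ * Each CAM row is fetched through qed_grc_dump_read_from_vfc() above: the
+ * VFC_OPCODE_CAM_RD opcode is set in the address buffer and the row index
+ * in the command buffer.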
*/ @@ -3509,7 +2823,7 @@ static u32 qed_grc_dump_vfc_cam(struct qed_hwfn *p_hwfn, struct storm_defs *storm = &s_storm_defs[storm_id]; u32 cam_addr[VFC_CAM_ADDR_DWORDS] = { 0 }; u32 cam_cmd[VFC_CAM_CMD_DWORDS] = { 0 }; - u32 row, i, offset = 0; + u32 row, offset = 0; offset += qed_grc_dump_mem_hdr(p_hwfn, dump_buf + offset, @@ -3518,7 +2832,7 @@ static u32 qed_grc_dump_vfc_cam(struct qed_hwfn *p_hwfn, 0, total_size, 256, - false, "vfc_cam", true, storm->letter); + false, "vfc_cam", storm->letter); if (!dump) return offset + total_size; @@ -3526,26 +2840,18 @@ static u32 qed_grc_dump_vfc_cam(struct qed_hwfn *p_hwfn, /* Prepare CAM address */ SET_VAR_FIELD(cam_addr, VFC_CAM_ADDR, OP, VFC_OPCODE_CAM_RD); - for (row = 0; row < VFC_CAM_NUM_ROWS; - row++, offset += VFC_CAM_RESP_DWORDS) { - /* Write VFC CAM command */ + /* Read VFC CAM data */ + for (row = 0; row < VFC_CAM_NUM_ROWS; row++) { SET_VAR_FIELD(cam_cmd, VFC_CAM_CMD, ROW, row); - ARR_REG_WR(p_hwfn, - p_ptt, - storm->sem_fast_mem_addr + SEM_FAST_REG_VFC_DATA_WR, - cam_cmd, VFC_CAM_CMD_DWORDS); - - /* Write VFC CAM address */ - ARR_REG_WR(p_hwfn, - p_ptt, - storm->sem_fast_mem_addr + SEM_FAST_REG_VFC_ADDR, - cam_addr, VFC_CAM_ADDR_DWORDS); - - /* Read VFC CAM read response */ - ARR_REG_RD(p_hwfn, - p_ptt, - storm->sem_fast_mem_addr + SEM_FAST_REG_VFC_DATA_RD, - dump_buf + offset, VFC_CAM_RESP_DWORDS); + offset += qed_grc_dump_read_from_vfc(p_hwfn, + p_ptt, + storm, + cam_cmd, + VFC_CAM_CMD_DWORDS, + cam_addr, + VFC_CAM_ADDR_DWORDS, + VFC_CAM_RESP_DWORDS, + dump_buf + offset); } return offset; @@ -3562,7 +2868,7 @@ static u32 qed_grc_dump_vfc_ram(struct qed_hwfn *p_hwfn, struct storm_defs *storm = &s_storm_defs[storm_id]; u32 ram_addr[VFC_RAM_ADDR_DWORDS] = { 0 }; u32 ram_cmd[VFC_RAM_CMD_DWORDS] = { 0 }; - u32 row, i, offset = 0; + u32 row, offset = 0; offset += qed_grc_dump_mem_hdr(p_hwfn, dump_buf + offset, @@ -3573,35 +2879,27 @@ static u32 qed_grc_dump_vfc_ram(struct qed_hwfn *p_hwfn, 256, false, ram_defs->type_name, - true, storm->letter); - - /* Prepare RAM address */ - SET_VAR_FIELD(ram_addr, VFC_RAM_ADDR, OP, VFC_OPCODE_RAM_RD); + storm->letter); if (!dump) return offset + total_size; + /* Prepare RAM address */ + SET_VAR_FIELD(ram_addr, VFC_RAM_ADDR, OP, VFC_OPCODE_RAM_RD); + + /* Read VFC RAM data */ for (row = ram_defs->base_row; - row < ram_defs->base_row + ram_defs->num_rows; - row++, offset += VFC_RAM_RESP_DWORDS) { - /* Write VFC RAM command */ - ARR_REG_WR(p_hwfn, - p_ptt, - storm->sem_fast_mem_addr + SEM_FAST_REG_VFC_DATA_WR, - ram_cmd, VFC_RAM_CMD_DWORDS); - - /* Write VFC RAM address */ + row < ram_defs->base_row + ram_defs->num_rows; row++) { SET_VAR_FIELD(ram_addr, VFC_RAM_ADDR, ROW, row); - ARR_REG_WR(p_hwfn, - p_ptt, - storm->sem_fast_mem_addr + SEM_FAST_REG_VFC_ADDR, - ram_addr, VFC_RAM_ADDR_DWORDS); - - /* Read VFC RAM read response */ - ARR_REG_RD(p_hwfn, - p_ptt, - storm->sem_fast_mem_addr + SEM_FAST_REG_VFC_DATA_RD, - dump_buf + offset, VFC_RAM_RESP_DWORDS); + offset += qed_grc_dump_read_from_vfc(p_hwfn, + p_ptt, + storm, + ram_cmd, + VFC_RAM_CMD_DWORDS, + ram_addr, + VFC_RAM_ADDR_DWORDS, + VFC_RAM_RESP_DWORDS, + dump_buf + offset); } return offset; @@ -3611,16 +2909,13 @@ static u32 qed_grc_dump_vfc_ram(struct qed_hwfn *p_hwfn, static u32 qed_grc_dump_vfc(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 *dump_buf, bool dump) { - struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; u8 storm_id, i; u32 offset = 0; for (storm_id = 0; storm_id < MAX_DBG_STORMS; storm_id++) { if (!qed_grc_is_storm_included(p_hwfn, 
(enum dbg_storms)storm_id) || - !s_storm_defs[storm_id].has_vfc || - (storm_id == DBG_PSTORM_ID && dev_data->platform_id != - PLATFORM_ASIC)) + !s_storm_defs[storm_id].has_vfc) continue; /* Read CAM */ @@ -3670,7 +2965,7 @@ static u32 qed_grc_dump_rss(struct qed_hwfn *p_hwfn, total_dwords, rss_defs->entry_width, packed, - rss_defs->type_name, false, 0); + rss_defs->type_name, 0); /* Dump RSS data */ if (!dump) { @@ -3730,7 +3025,7 @@ static u32 qed_grc_dump_big_ram(struct qed_hwfn *p_hwfn, 0, ram_size, block_size * 8, - false, type_name, false, 0); + false, type_name, 0); /* Read and dump Big RAM data */ if (!dump) @@ -3756,6 +3051,7 @@ static u32 qed_grc_dump_big_ram(struct qed_hwfn *p_hwfn, return offset; } +/* Dumps MCP scratchpad. Returns the dumped size in dwords. */ static u32 qed_grc_dump_mcp(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 *dump_buf, bool dump) { @@ -3777,8 +3073,8 @@ static u32 qed_grc_dump_mcp(struct qed_hwfn *p_hwfn, dump, NULL, BYTES_TO_DWORDS(MCP_REG_SCRATCH), - MCP_REG_SCRATCH_SIZE_BB_K2, - false, 0, false, "MCP", false, 0); + MCP_REG_SCRATCH_SIZE, + false, 0, false, "MCP", 0); /* Dump MCP cpu_reg_file */ offset += qed_grc_dump_mem(p_hwfn, @@ -3788,19 +3084,19 @@ static u32 qed_grc_dump_mcp(struct qed_hwfn *p_hwfn, NULL, BYTES_TO_DWORDS(MCP_REG_CPU_REG_FILE), MCP_REG_CPU_REG_FILE_SIZE, - false, 0, false, "MCP", false, 0); + false, 0, false, "MCP", 0); /* Dump MCP registers */ block_enable[BLOCK_MCP] = true; offset += qed_grc_dump_registers(p_hwfn, p_ptt, dump_buf + offset, - dump, block_enable, "block", "MCP"); + dump, block_enable, "MCP"); /* Dump required non-MCP registers */ offset += qed_grc_dump_regs_hdr(dump_buf + offset, dump, 1, SPLIT_TYPE_NONE, 0, - "block", "MCP"); + "MCP"); addr = BYTES_TO_DWORDS(MISC_REG_SHARED_MEM_ADDR); offset += qed_grc_dump_reg_entry(p_hwfn, p_ptt, @@ -3817,7 +3113,9 @@ static u32 qed_grc_dump_mcp(struct qed_hwfn *p_hwfn, return offset; } -/* Dumps the tbus indirect memory for all PHYs. */ +/* Dumps the tbus indirect memory for all PHYs. + * Returns the dumped size in dwords. + */ static u32 qed_grc_dump_phy(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 *dump_buf, bool dump) { @@ -3851,7 +3149,7 @@ static u32 qed_grc_dump_phy(struct qed_hwfn *p_hwfn, mem_name, 0, PHY_DUMP_SIZE_DWORDS, - 16, true, mem_name, false, 0); + 16, true, mem_name, 0); if (!dump) { offset += PHY_DUMP_SIZE_DWORDS; @@ -3882,21 +3180,58 @@ static u32 qed_grc_dump_phy(struct qed_hwfn *p_hwfn, return offset; } -static void qed_config_dbg_line(struct qed_hwfn *p_hwfn, - struct qed_ptt *p_ptt, - enum block_id block_id, - u8 line_id, - u8 enable_mask, - u8 right_shift, - u8 force_valid_mask, u8 force_frame_mask) +static enum dbg_status qed_find_nvram_image(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt, + u32 image_type, + u32 *nvram_offset_bytes, + u32 *nvram_size_bytes); + +static enum dbg_status qed_nvram_read(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt, + u32 nvram_offset_bytes, + u32 nvram_size_bytes, u32 *ret_buf); + +/* Dumps the MCP HW dump from NVRAM. Returns the dumped size in dwords. 
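+ * The image is located in NVRAM via qed_find_nvram_image() (image type
+ * NVM_TYPE_HW_DUMP_OUT) and copied into the dump buffer with
+ * qed_nvram_read(); both helpers are forward-declared above.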
*/ +static u32 qed_grc_dump_mcp_hw_dump(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt, + u32 *dump_buf, bool dump) { - struct block_defs *block = s_block_defs[block_id]; + u32 hw_dump_offset_bytes = 0, hw_dump_size_bytes = 0; + u32 hw_dump_size_dwords = 0, offset = 0; + enum dbg_status status; + + /* Read HW dump image from NVRAM */ + status = qed_find_nvram_image(p_hwfn, + p_ptt, + NVM_TYPE_HW_DUMP_OUT, + &hw_dump_offset_bytes, + &hw_dump_size_bytes); + if (status != DBG_STATUS_OK) + return 0; + + hw_dump_size_dwords = BYTES_TO_DWORDS(hw_dump_size_bytes); + + /* Dump HW dump image section */ + offset += qed_dump_section_hdr(dump_buf + offset, + dump, "mcp_hw_dump", 1); + offset += qed_dump_num_param(dump_buf + offset, + dump, "size", hw_dump_size_dwords); + + /* Read MCP HW dump image into dump buffer */ + if (dump && hw_dump_size_dwords) { + status = qed_nvram_read(p_hwfn, + p_ptt, + hw_dump_offset_bytes, + hw_dump_size_bytes, dump_buf + offset); + if (status != DBG_STATUS_OK) { + DP_NOTICE(p_hwfn, + "Failed to read MCP HW Dump image from NVRAM\n"); + return 0; + } + } + offset += hw_dump_size_dwords; - qed_wr(p_hwfn, p_ptt, block->dbg_select_addr, line_id); - qed_wr(p_hwfn, p_ptt, block->dbg_enable_addr, enable_mask); - qed_wr(p_hwfn, p_ptt, block->dbg_shift_addr, right_shift); - qed_wr(p_hwfn, p_ptt, block->dbg_force_valid_addr, force_valid_mask); - qed_wr(p_hwfn, p_ptt, block->dbg_force_frame_addr, force_frame_mask); + return offset; } /* Dumps Static Debug data. Returns the dumped size in dwords. */ @@ -3905,26 +3240,19 @@ static u32 qed_grc_dump_static_debug(struct qed_hwfn *p_hwfn, u32 *dump_buf, bool dump) { struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; - u32 block_id, line_id, offset = 0; + u32 block_id, line_id, offset = 0, addr, len; /* Don't dump static debug if a debug bus recording is in progress */ if (dump && qed_rd(p_hwfn, p_ptt, DBG_REG_DBG_BLOCK_ON)) return 0; if (dump) { - /* Disable all blocks debug output */ - for (block_id = 0; block_id < MAX_BLOCK_ID; block_id++) { - struct block_defs *block = s_block_defs[block_id]; - - if (block->dbg_client_id[dev_data->chip_id] != - MAX_DBG_BUS_CLIENTS) - qed_wr(p_hwfn, p_ptt, block->dbg_enable_addr, - 0); - } + /* Disable debug bus in all blocks */ + qed_bus_disable_blocks(p_hwfn, p_ptt); qed_bus_reset_dbg_block(p_hwfn, p_ptt); - qed_bus_set_framing_mode(p_hwfn, - p_ptt, DBG_BUS_FRAME_MODE_8HW_0ST); + qed_wr(p_hwfn, + p_ptt, DBG_REG_FRAMING_MODE, DBG_BUS_FRAME_MODE_8HW); qed_wr(p_hwfn, p_ptt, DBG_REG_DEBUG_TARGET, DBG_BUS_TARGET_ID_INT_BUF); qed_wr(p_hwfn, p_ptt, DBG_REG_FULL_MODE, 1); @@ -3933,28 +3261,48 @@ static u32 qed_grc_dump_static_debug(struct qed_hwfn *p_hwfn, /* Dump all static debug lines for each relevant block */ for (block_id = 0; block_id < MAX_BLOCK_ID; block_id++) { - struct block_defs *block = s_block_defs[block_id]; - struct dbg_bus_block *block_desc; - u32 block_dwords, addr, len; - u8 dbg_client_id; + const struct dbg_block_chip *block_per_chip; + const struct dbg_block *block; + bool is_removed, has_dbg_bus; + u16 modes_buf_offset; + u32 block_dwords; + + block_per_chip = + qed_get_dbg_block_per_chip(p_hwfn, (enum block_id)block_id); + is_removed = GET_FIELD(block_per_chip->flags, + DBG_BLOCK_CHIP_IS_REMOVED); + has_dbg_bus = GET_FIELD(block_per_chip->flags, + DBG_BLOCK_CHIP_HAS_DBG_BUS); + + /* read+clear for NWS parity is not working, skip NWS block */ + if (block_id == BLOCK_NWS) + continue; + + if (!is_removed && has_dbg_bus && + GET_FIELD(block_per_chip->dbg_bus_mode.data, + 
DBG_MODE_HDR_EVAL_MODE) > 0) { + modes_buf_offset = + GET_FIELD(block_per_chip->dbg_bus_mode.data, + DBG_MODE_HDR_MODES_BUF_OFFSET); + if (!qed_is_mode_match(p_hwfn, &modes_buf_offset)) + has_dbg_bus = false; + } - if (block->dbg_client_id[dev_data->chip_id] == - MAX_DBG_BUS_CLIENTS) + if (is_removed || !has_dbg_bus) continue; - block_desc = get_dbg_bus_block_desc(p_hwfn, - (enum block_id)block_id); - block_dwords = NUM_DBG_LINES(block_desc) * + block_dwords = NUM_DBG_LINES(block_per_chip) * STATIC_DEBUG_LINE_DWORDS; /* Dump static section params */ + block = get_dbg_block(p_hwfn, (enum block_id)block_id); offset += qed_grc_dump_mem_hdr(p_hwfn, dump_buf + offset, dump, block->name, 0, block_dwords, - 32, false, "STATIC", false, 0); + 32, false, "STATIC", 0); if (!dump) { offset += block_dwords; @@ -3970,20 +3318,19 @@ static u32 qed_grc_dump_static_debug(struct qed_hwfn *p_hwfn, } /* Enable block's client */ - dbg_client_id = block->dbg_client_id[dev_data->chip_id]; qed_bus_enable_clients(p_hwfn, p_ptt, - BIT(dbg_client_id)); + BIT(block_per_chip->dbg_client_id)); addr = BYTES_TO_DWORDS(DBG_REG_CALENDAR_OUT_DATA); len = STATIC_DEBUG_LINE_DWORDS; - for (line_id = 0; line_id < (u32)NUM_DBG_LINES(block_desc); + for (line_id = 0; line_id < (u32)NUM_DBG_LINES(block_per_chip); line_id++) { /* Configure debug line ID */ - qed_config_dbg_line(p_hwfn, - p_ptt, - (enum block_id)block_id, - (u8)line_id, 0xf, 0, 0, 0); + qed_bus_config_dbg_line(p_hwfn, + p_ptt, + (enum block_id)block_id, + (u8)line_id, 0xf, 0, 0, 0); /* Read debug line info */ offset += qed_grc_dump_addr_range(p_hwfn, @@ -3998,7 +3345,8 @@ static u32 qed_grc_dump_static_debug(struct qed_hwfn *p_hwfn, /* Disable block's client and debug output */ qed_bus_enable_clients(p_hwfn, p_ptt, 0); - qed_wr(p_hwfn, p_ptt, block->dbg_enable_addr, 0); + qed_bus_config_dbg_line(p_hwfn, p_ptt, + (enum block_id)block_id, 0, 0, 0, 0, 0); } if (dump) { @@ -4018,8 +3366,8 @@ static enum dbg_status qed_grc_dump(struct qed_hwfn *p_hwfn, bool dump, u32 *num_dumped_dwords) { struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; + u32 dwords_read, offset = 0; bool parities_masked = false; - u32 offset = 0; u8 i; *num_dumped_dwords = 0; @@ -4038,13 +3386,11 @@ static enum dbg_status qed_grc_dump(struct qed_hwfn *p_hwfn, offset += qed_dump_num_param(dump_buf + offset, dump, "num-lcids", - qed_grc_get_param(p_hwfn, - DBG_GRC_PARAM_NUM_LCIDS)); + NUM_OF_LCIDS); offset += qed_dump_num_param(dump_buf + offset, dump, "num-ltids", - qed_grc_get_param(p_hwfn, - DBG_GRC_PARAM_NUM_LTIDS)); + NUM_OF_LTIDS); offset += qed_dump_num_param(dump_buf + offset, dump, "num-ports", dev_data->num_ports); @@ -4056,7 +3402,7 @@ static enum dbg_status qed_grc_dump(struct qed_hwfn *p_hwfn, /* Take all blocks out of reset (using reset registers) */ if (dump) { - qed_grc_unreset_blocks(p_hwfn, p_ptt); + qed_grc_unreset_blocks(p_hwfn, p_ptt, false); qed_update_blocks_reset_state(p_hwfn, p_ptt); } @@ -4099,7 +3445,7 @@ static enum dbg_status qed_grc_dump(struct qed_hwfn *p_hwfn, dump_buf + offset, dump, - block_enable, NULL, NULL); + block_enable, NULL); /* Dump special registers */ offset += qed_grc_dump_special_regs(p_hwfn, @@ -4133,23 +3479,29 @@ static enum dbg_status qed_grc_dump(struct qed_hwfn *p_hwfn, dump_buf + offset, dump, i); - /* Dump IORs */ - if (qed_grc_is_included(p_hwfn, DBG_GRC_PARAM_DUMP_IOR)) - offset += qed_grc_dump_iors(p_hwfn, - p_ptt, dump_buf + offset, dump); - /* Dump VFC */ - if (qed_grc_is_included(p_hwfn, DBG_GRC_PARAM_DUMP_VFC)) - offset += 
qed_grc_dump_vfc(p_hwfn, - p_ptt, dump_buf + offset, dump); + if (qed_grc_is_included(p_hwfn, DBG_GRC_PARAM_DUMP_VFC)) { + dwords_read = qed_grc_dump_vfc(p_hwfn, + p_ptt, dump_buf + offset, dump); + offset += dwords_read; + if (!dwords_read) + return DBG_STATUS_VFC_READ_ERROR; + } /* Dump PHY tbus */ if (qed_grc_is_included(p_hwfn, DBG_GRC_PARAM_DUMP_PHY) && dev_data->chip_id == - CHIP_K2 && dev_data->platform_id == PLATFORM_ASIC) + CHIP_K2 && dev_data->hw_type == HW_TYPE_ASIC) offset += qed_grc_dump_phy(p_hwfn, p_ptt, dump_buf + offset, dump); + /* Dump MCP HW Dump */ + if (qed_grc_is_included(p_hwfn, DBG_GRC_PARAM_DUMP_MCP_HW_DUMP) && + !qed_grc_get_param(p_hwfn, DBG_GRC_PARAM_NO_MCP) && 1) + offset += qed_grc_dump_mcp_hw_dump(p_hwfn, + p_ptt, + dump_buf + offset, dump); + /* Dump static debug data (only if not during debug bus recording) */ if (qed_grc_is_included(p_hwfn, DBG_GRC_PARAM_DUMP_STATIC) && @@ -4200,8 +3552,9 @@ static u32 qed_idle_chk_dump_failure(struct qed_hwfn *p_hwfn, u8 reg_id; hdr = (struct dbg_idle_chk_result_hdr *)dump_buf; - regs = &((const union dbg_idle_chk_reg *) - s_dbg_arrays[BIN_BUF_DBG_IDLE_CHK_REGS].ptr)[rule->reg_offset]; + regs = (const union dbg_idle_chk_reg *) + p_hwfn->dbg_arrays[BIN_BUF_DBG_IDLE_CHK_REGS].ptr + + rule->reg_offset; cond_regs = ®s[0].cond_reg; info_regs = ®s[rule->num_cond_regs].info_reg; @@ -4221,8 +3574,8 @@ static u32 qed_idle_chk_dump_failure(struct qed_hwfn *p_hwfn, const struct dbg_idle_chk_cond_reg *reg = &cond_regs[reg_id]; struct dbg_idle_chk_result_reg_hdr *reg_hdr; - reg_hdr = (struct dbg_idle_chk_result_reg_hdr *) - (dump_buf + offset); + reg_hdr = + (struct dbg_idle_chk_result_reg_hdr *)(dump_buf + offset); /* Write register header */ if (!dump) { @@ -4339,12 +3692,13 @@ qed_idle_chk_dump_rule_entries(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, const u32 *imm_values; rule = &input_rules[i]; - regs = &((const union dbg_idle_chk_reg *) - s_dbg_arrays[BIN_BUF_DBG_IDLE_CHK_REGS].ptr) - [rule->reg_offset]; + regs = (const union dbg_idle_chk_reg *) + p_hwfn->dbg_arrays[BIN_BUF_DBG_IDLE_CHK_REGS].ptr + + rule->reg_offset; cond_regs = ®s[0].cond_reg; - imm_values = &s_dbg_arrays[BIN_BUF_DBG_IDLE_CHK_IMMS].ptr - [rule->imm_offset]; + imm_values = + (u32 *)p_hwfn->dbg_arrays[BIN_BUF_DBG_IDLE_CHK_IMMS].ptr + + rule->imm_offset; /* Check if all condition register blocks are out of reset, and * find maximal number of entries (all condition registers that @@ -4462,10 +3816,12 @@ qed_idle_chk_dump_rule_entries(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, static u32 qed_idle_chk_dump(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 *dump_buf, bool dump) { - u32 num_failing_rules_offset, offset = 0, input_offset = 0; - u32 num_failing_rules = 0; + struct virt_mem_desc *dbg_buf = + &p_hwfn->dbg_arrays[BIN_BUF_DBG_IDLE_CHK_RULES]; + u32 num_failing_rules_offset, offset = 0, + input_offset = 0, num_failing_rules = 0; - /* Dump global params */ + /* Dump global params - 1 must match below amount of params */ offset += qed_dump_common_global_params(p_hwfn, p_ptt, dump_buf + offset, dump, 1); @@ -4477,12 +3833,10 @@ static u32 qed_idle_chk_dump(struct qed_hwfn *p_hwfn, num_failing_rules_offset = offset; offset += qed_dump_num_param(dump_buf + offset, dump, "num_rules", 0); - while (input_offset < - s_dbg_arrays[BIN_BUF_DBG_IDLE_CHK_RULES].size_in_dwords) { + while (input_offset < BYTES_TO_DWORDS(dbg_buf->size)) { const struct dbg_idle_chk_cond_hdr *cond_hdr = - (const struct dbg_idle_chk_cond_hdr *) - 
&s_dbg_arrays[BIN_BUF_DBG_IDLE_CHK_RULES].ptr - [input_offset++]; + (const struct dbg_idle_chk_cond_hdr *)dbg_buf->ptr + + input_offset++; bool eval_mode, mode_match = true; u32 curr_failing_rules; u16 modes_buf_offset; @@ -4499,16 +3853,21 @@ static u32 qed_idle_chk_dump(struct qed_hwfn *p_hwfn, } if (mode_match) { + const struct dbg_idle_chk_rule *rule = + (const struct dbg_idle_chk_rule *)((u32 *) + dbg_buf->ptr + + input_offset); + u32 num_input_rules = + cond_hdr->data_size / IDLE_CHK_RULE_SIZE_DWORDS; offset += qed_idle_chk_dump_rule_entries(p_hwfn, - p_ptt, - dump_buf + offset, - dump, - (const struct dbg_idle_chk_rule *) - &s_dbg_arrays[BIN_BUF_DBG_IDLE_CHK_RULES]. - ptr[input_offset], - cond_hdr->data_size / IDLE_CHK_RULE_SIZE_DWORDS, - &curr_failing_rules); + p_ptt, + dump_buf + + offset, + dump, + rule, + num_input_rules, + &curr_failing_rules); num_failing_rules += curr_failing_rules; } @@ -4575,7 +3934,7 @@ static enum dbg_status qed_nvram_read(struct qed_hwfn *p_hwfn, { u32 ret_mcp_resp, ret_mcp_param, ret_read_size, bytes_to_copy; s32 bytes_left = nvram_size_bytes; - u32 read_offset = 0; + u32 read_offset = 0, param = 0; DP_VERBOSE(p_hwfn, QED_MSG_DEBUG, @@ -4588,14 +3947,14 @@ static enum dbg_status qed_nvram_read(struct qed_hwfn *p_hwfn, MCP_DRV_NVM_BUF_LEN) ? MCP_DRV_NVM_BUF_LEN : bytes_left; /* Call NVRAM read command */ + SET_MFW_FIELD(param, + DRV_MB_PARAM_NVM_OFFSET, + nvram_offset_bytes + read_offset); + SET_MFW_FIELD(param, DRV_MB_PARAM_NVM_LEN, bytes_to_copy); if (qed_mcp_nvm_rd_cmd(p_hwfn, p_ptt, - DRV_MSG_CODE_NVM_READ_NVRAM, - (nvram_offset_bytes + - read_offset) | - (bytes_to_copy << - DRV_MB_PARAM_NVM_LEN_OFFSET), - &ret_mcp_resp, &ret_mcp_param, - &ret_read_size, + DRV_MSG_CODE_NVM_READ_NVRAM, param, + &ret_mcp_resp, + &ret_mcp_param, &ret_read_size, (u32 *)((u8 *)ret_buf + read_offset))) return DBG_STATUS_NVRAM_READ_FAILED; @@ -4733,12 +4092,12 @@ static enum dbg_status qed_mcp_trace_dump(struct qed_hwfn *p_hwfn, u32 trace_meta_size_dwords = 0, running_bundle_id, offset = 0; u32 trace_meta_offset_bytes = 0, trace_meta_size_bytes = 0; enum dbg_status status; - bool mcp_access; int halted = 0; + bool use_mfw; *num_dumped_dwords = 0; - mcp_access = !qed_grc_get_param(p_hwfn, DBG_GRC_PARAM_NO_MCP); + use_mfw = !qed_grc_get_param(p_hwfn, DBG_GRC_PARAM_NO_MCP); /* Get trace data info */ status = qed_mcp_trace_get_data_info(p_hwfn, @@ -4759,7 +4118,7 @@ static enum dbg_status qed_mcp_trace_dump(struct qed_hwfn *p_hwfn, * consistent. if halt fails, MCP trace is taken anyway, with a small * risk that it may be corrupt. 
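+ * Note that when the DBG_GRC_PARAM_NO_MCP parameter is set (use_mfw is
+ * false), the halt is skipped and the function returns
+ * DBG_STATUS_NVRAM_GET_IMAGE_FAILED to indicate that the meta data from
+ * NVRAM is not part of the dump.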
*/ - if (dump && mcp_access) { + if (dump && use_mfw) { halted = !qed_mcp_halt(p_hwfn, p_ptt); if (!halted) DP_NOTICE(p_hwfn, "MCP halt failed!\n"); @@ -4799,17 +4158,15 @@ static enum dbg_status qed_mcp_trace_dump(struct qed_hwfn *p_hwfn, */ trace_meta_size_bytes = qed_grc_get_param(p_hwfn, DBG_GRC_PARAM_MCP_TRACE_META_SIZE); - if ((!trace_meta_size_bytes || dump) && mcp_access) { + if ((!trace_meta_size_bytes || dump) && use_mfw) status = qed_mcp_trace_get_meta_info(p_hwfn, p_ptt, trace_data_size_bytes, &running_bundle_id, &trace_meta_offset_bytes, &trace_meta_size_bytes); - if (status == DBG_STATUS_OK) - trace_meta_size_dwords = - BYTES_TO_DWORDS(trace_meta_size_bytes); - } + if (status == DBG_STATUS_OK) + trace_meta_size_dwords = BYTES_TO_DWORDS(trace_meta_size_bytes); /* Dump trace meta size param */ offset += qed_dump_num_param(dump_buf + offset, @@ -4833,7 +4190,7 @@ static enum dbg_status qed_mcp_trace_dump(struct qed_hwfn *p_hwfn, /* If no mcp access, indicate that the dump doesn't contain the meta * data from NVRAM. */ - return mcp_access ? status : DBG_STATUS_NVRAM_GET_IMAGE_FAILED; + return use_mfw ? status : DBG_STATUS_NVRAM_GET_IMAGE_FAILED; } /* Dump GRC FIFO */ @@ -5058,7 +4415,7 @@ static u32 qed_fw_asserts_dump(struct qed_hwfn *p_hwfn, struct storm_defs *storm = &s_storm_defs[storm_id]; u32 last_list_idx, addr; - if (dev_data->block_in_reset[storm->block_id]) + if (dev_data->block_in_reset[storm->sem_block_id]) continue; /* Read FW info for the current Storm */ @@ -5453,18 +4810,18 @@ static u32 qed_ilt_dump(struct qed_hwfn *p_hwfn, /***************************** Public Functions *******************************/ -enum dbg_status qed_dbg_set_bin_ptr(const u8 * const bin_ptr) +enum dbg_status qed_dbg_set_bin_ptr(struct qed_hwfn *p_hwfn, + const u8 * const bin_ptr) { - struct bin_buffer_hdr *buf_array = (struct bin_buffer_hdr *)bin_ptr; + struct bin_buffer_hdr *buf_hdrs = (struct bin_buffer_hdr *)bin_ptr; u8 buf_id; - /* convert binary data to debug arrays */ - for (buf_id = 0; buf_id < MAX_BIN_DBG_BUFFER_TYPE; buf_id++) { - s_dbg_arrays[buf_id].ptr = - (u32 *)(bin_ptr + buf_array[buf_id].offset); - s_dbg_arrays[buf_id].size_in_dwords = - BYTES_TO_DWORDS(buf_array[buf_id].length); - } + /* Convert binary data to debug arrays */ + for (buf_id = 0; buf_id < MAX_BIN_DBG_BUFFER_TYPE; buf_id++) + qed_set_dbg_bin_buf(p_hwfn, + buf_id, + (u32 *)(bin_ptr + buf_hdrs[buf_id].offset), + buf_hdrs[buf_id].length); return DBG_STATUS_OK; } @@ -5479,7 +4836,7 @@ bool qed_read_fw_info(struct qed_hwfn *p_hwfn, struct storm_defs *storm = &s_storm_defs[storm_id]; /* Skip Storm if it's in reset */ - if (dev_data->block_in_reset[storm->block_id]) + if (dev_data->block_in_reset[storm->sem_block_id]) continue; /* Read FW info for the current Storm */ @@ -5492,16 +4849,17 @@ bool qed_read_fw_info(struct qed_hwfn *p_hwfn, } enum dbg_status qed_dbg_grc_config(struct qed_hwfn *p_hwfn, - struct qed_ptt *p_ptt, enum dbg_grc_params grc_param, u32 val) { + struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; enum dbg_status status; int i; - DP_VERBOSE(p_hwfn, QED_MSG_DEBUG, + DP_VERBOSE(p_hwfn, + QED_MSG_DEBUG, "dbg_grc_config: paramId = %d, val = %d\n", grc_param, val); - status = qed_dbg_dev_init(p_hwfn, p_ptt); + status = qed_dbg_dev_init(p_hwfn); if (status != DBG_STATUS_OK) return status; @@ -5527,24 +4885,23 @@ enum dbg_status qed_dbg_grc_config(struct qed_hwfn *p_hwfn, /* Update all params with the preset values */ for (i = 0; i < MAX_DBG_GRC_PARAMS; i++) { + struct grc_param_defs *defs = 
&s_grc_param_defs[i]; u32 preset_val; - /* Skip persistent params */ - if (s_grc_param_defs[i].is_persistent) + if (defs->is_persistent) continue; /* Find preset value */ if (grc_param == DBG_GRC_PARAM_EXCLUDE_ALL) preset_val = - s_grc_param_defs[i].exclude_all_preset_val; + defs->exclude_all_preset_val; else if (grc_param == DBG_GRC_PARAM_CRASH) preset_val = - s_grc_param_defs[i].crash_preset_val; + defs->crash_preset_val[dev_data->chip_id]; else return DBG_STATUS_INVALID_ARGS; - qed_grc_set_param(p_hwfn, - (enum dbg_grc_params)i, preset_val); + qed_grc_set_param(p_hwfn, i, preset_val); } } else { /* Regular param - set its value */ @@ -5570,18 +4927,18 @@ enum dbg_status qed_dbg_grc_get_dump_buf_size(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 *buf_size) { - enum dbg_status status = qed_dbg_dev_init(p_hwfn, p_ptt); + enum dbg_status status = qed_dbg_dev_init(p_hwfn); *buf_size = 0; if (status != DBG_STATUS_OK) return status; - if (!s_dbg_arrays[BIN_BUF_DBG_MODE_TREE].ptr || - !s_dbg_arrays[BIN_BUF_DBG_DUMP_REG].ptr || - !s_dbg_arrays[BIN_BUF_DBG_DUMP_MEM].ptr || - !s_dbg_arrays[BIN_BUF_DBG_ATTN_BLOCKS].ptr || - !s_dbg_arrays[BIN_BUF_DBG_ATTN_REGS].ptr) + if (!p_hwfn->dbg_arrays[BIN_BUF_DBG_MODE_TREE].ptr || + !p_hwfn->dbg_arrays[BIN_BUF_DBG_DUMP_REG].ptr || + !p_hwfn->dbg_arrays[BIN_BUF_DBG_DUMP_MEM].ptr || + !p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_BLOCKS].ptr || + !p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_REGS].ptr) return DBG_STATUS_DBG_ARRAY_NOT_SET; return qed_grc_dump(p_hwfn, p_ptt, NULL, false, buf_size); @@ -5621,20 +4978,19 @@ enum dbg_status qed_dbg_idle_chk_get_dump_buf_size(struct qed_hwfn *p_hwfn, u32 *buf_size) { struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; - struct idle_chk_data *idle_chk; + struct idle_chk_data *idle_chk = &dev_data->idle_chk; enum dbg_status status; - idle_chk = &dev_data->idle_chk; *buf_size = 0; - status = qed_dbg_dev_init(p_hwfn, p_ptt); + status = qed_dbg_dev_init(p_hwfn); if (status != DBG_STATUS_OK) return status; - if (!s_dbg_arrays[BIN_BUF_DBG_MODE_TREE].ptr || - !s_dbg_arrays[BIN_BUF_DBG_IDLE_CHK_REGS].ptr || - !s_dbg_arrays[BIN_BUF_DBG_IDLE_CHK_IMMS].ptr || - !s_dbg_arrays[BIN_BUF_DBG_IDLE_CHK_RULES].ptr) + if (!p_hwfn->dbg_arrays[BIN_BUF_DBG_MODE_TREE].ptr || + !p_hwfn->dbg_arrays[BIN_BUF_DBG_IDLE_CHK_REGS].ptr || + !p_hwfn->dbg_arrays[BIN_BUF_DBG_IDLE_CHK_IMMS].ptr || + !p_hwfn->dbg_arrays[BIN_BUF_DBG_IDLE_CHK_RULES].ptr) return DBG_STATUS_DBG_ARRAY_NOT_SET; if (!idle_chk->buf_size_set) { @@ -5669,6 +5025,7 @@ enum dbg_status qed_dbg_idle_chk_dump(struct qed_hwfn *p_hwfn, return DBG_STATUS_DUMP_BUF_TOO_SMALL; /* Update reset state */ + qed_grc_unreset_blocks(p_hwfn, p_ptt, true); qed_update_blocks_reset_state(p_hwfn, p_ptt); /* Idle Check Dump */ @@ -5684,7 +5041,7 @@ enum dbg_status qed_dbg_mcp_trace_get_dump_buf_size(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 *buf_size) { - enum dbg_status status = qed_dbg_dev_init(p_hwfn, p_ptt); + enum dbg_status status = qed_dbg_dev_init(p_hwfn); *buf_size = 0; @@ -5731,7 +5088,7 @@ enum dbg_status qed_dbg_reg_fifo_get_dump_buf_size(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 *buf_size) { - enum dbg_status status = qed_dbg_dev_init(p_hwfn, p_ptt); + enum dbg_status status = qed_dbg_dev_init(p_hwfn); *buf_size = 0; @@ -5777,7 +5134,7 @@ enum dbg_status qed_dbg_igu_fifo_get_dump_buf_size(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 *buf_size) { - enum dbg_status status = qed_dbg_dev_init(p_hwfn, p_ptt); + enum dbg_status status = qed_dbg_dev_init(p_hwfn); *buf_size = 0; @@ 
-5823,7 +5180,7 @@ qed_dbg_protection_override_get_dump_buf_size(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 *buf_size) { - enum dbg_status status = qed_dbg_dev_init(p_hwfn, p_ptt); + enum dbg_status status = qed_dbg_dev_init(p_hwfn); *buf_size = 0; @@ -5873,7 +5230,7 @@ enum dbg_status qed_dbg_fw_asserts_get_dump_buf_size(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 *buf_size) { - enum dbg_status status = qed_dbg_dev_init(p_hwfn, p_ptt); + enum dbg_status status = qed_dbg_dev_init(p_hwfn); *buf_size = 0; @@ -5921,7 +5278,7 @@ static enum dbg_status qed_dbg_ilt_get_dump_buf_size(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, u32 *buf_size) { - enum dbg_status status = qed_dbg_dev_init(p_hwfn, p_ptt); + enum dbg_status status = qed_dbg_dev_init(p_hwfn); *buf_size = 0; @@ -5968,19 +5325,20 @@ enum dbg_status qed_dbg_read_attn(struct qed_hwfn *p_hwfn, bool clear_status, struct dbg_attn_block_result *results) { - enum dbg_status status = qed_dbg_dev_init(p_hwfn, p_ptt); + enum dbg_status status = qed_dbg_dev_init(p_hwfn); u8 reg_idx, num_attn_regs, num_result_regs = 0; const struct dbg_attn_reg *attn_reg_arr; if (status != DBG_STATUS_OK) return status; - if (!s_dbg_arrays[BIN_BUF_DBG_MODE_TREE].ptr || - !s_dbg_arrays[BIN_BUF_DBG_ATTN_BLOCKS].ptr || - !s_dbg_arrays[BIN_BUF_DBG_ATTN_REGS].ptr) + if (!p_hwfn->dbg_arrays[BIN_BUF_DBG_MODE_TREE].ptr || + !p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_BLOCKS].ptr || + !p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_REGS].ptr) return DBG_STATUS_DBG_ARRAY_NOT_SET; - attn_reg_arr = qed_get_block_attn_regs(block_id, + attn_reg_arr = qed_get_block_attn_regs(p_hwfn, + block_id, attn_type, &num_attn_regs); for (reg_idx = 0; reg_idx < num_attn_regs; reg_idx++) { @@ -6025,7 +5383,7 @@ enum dbg_status qed_dbg_read_attn(struct qed_hwfn *p_hwfn, results->block_id = (u8)block_id; results->names_offset = - qed_get_block_attn_data(block_id, attn_type)->names_offset; + qed_get_block_attn_data(p_hwfn, block_id, attn_type)->names_offset; SET_FIELD(results->data, DBG_ATTN_BLOCK_RESULT_ATTN_TYPE, attn_type); SET_FIELD(results->data, DBG_ATTN_BLOCK_RESULT_NUM_REGS, num_result_regs); @@ -6035,11 +5393,6 @@ enum dbg_status qed_dbg_read_attn(struct qed_hwfn *p_hwfn, /******************************* Data Types **********************************/ -struct block_info { - const char *name; - enum block_id id; -}; - /* REG fifo element */ struct reg_fifo_element { u64 data; @@ -6063,6 +5416,12 @@ struct reg_fifo_element { #define REG_FIFO_ELEMENT_ERROR_MASK 0x1f }; +/* REG fifo error element */ +struct reg_fifo_err { + u32 err_code; + const char *err_msg; +}; + /* IGU fifo element */ struct igu_fifo_element { u32 dword0; @@ -6162,20 +5521,6 @@ struct igu_fifo_addr_data { enum igu_fifo_addr_types type; }; -struct mcp_trace_meta { - u32 modules_num; - char **modules; - u32 formats_num; - struct mcp_trace_format *formats; - bool is_allocated; -}; - -/* Debug Tools user data */ -struct dbg_tools_user_data { - struct mcp_trace_meta mcp_trace_meta; - const u32 *mcp_trace_user_meta_buf; -}; - /******************************** Constants **********************************/ #define MAX_MSG_LEN 1024 @@ -6183,7 +5528,7 @@ struct dbg_tools_user_data { #define MCP_TRACE_MAX_MODULE_LEN 8 #define MCP_TRACE_FORMAT_MAX_PARAMS 3 #define MCP_TRACE_FORMAT_PARAM_WIDTH \ - (MCP_TRACE_FORMAT_P2_SIZE_SHIFT - MCP_TRACE_FORMAT_P1_SIZE_SHIFT) + (MCP_TRACE_FORMAT_P2_SIZE_OFFSET - MCP_TRACE_FORMAT_P1_SIZE_OFFSET) #define REG_FIFO_ELEMENT_ADDR_FACTOR 4 #define REG_FIFO_ELEMENT_IS_PF_VF_VAL 127 @@ -6192,107 +5537,6 
@@ struct dbg_tools_user_data { /***************************** Constant Arrays *******************************/ -struct user_dbg_array { - const u32 *ptr; - u32 size_in_dwords; -}; - -/* Debug arrays */ -static struct user_dbg_array -s_user_dbg_arrays[MAX_BIN_DBG_BUFFER_TYPE] = { {NULL} }; - -/* Block names array */ -static struct block_info s_block_info_arr[] = { - {"grc", BLOCK_GRC}, - {"miscs", BLOCK_MISCS}, - {"misc", BLOCK_MISC}, - {"dbu", BLOCK_DBU}, - {"pglue_b", BLOCK_PGLUE_B}, - {"cnig", BLOCK_CNIG}, - {"cpmu", BLOCK_CPMU}, - {"ncsi", BLOCK_NCSI}, - {"opte", BLOCK_OPTE}, - {"bmb", BLOCK_BMB}, - {"pcie", BLOCK_PCIE}, - {"mcp", BLOCK_MCP}, - {"mcp2", BLOCK_MCP2}, - {"pswhst", BLOCK_PSWHST}, - {"pswhst2", BLOCK_PSWHST2}, - {"pswrd", BLOCK_PSWRD}, - {"pswrd2", BLOCK_PSWRD2}, - {"pswwr", BLOCK_PSWWR}, - {"pswwr2", BLOCK_PSWWR2}, - {"pswrq", BLOCK_PSWRQ}, - {"pswrq2", BLOCK_PSWRQ2}, - {"pglcs", BLOCK_PGLCS}, - {"ptu", BLOCK_PTU}, - {"dmae", BLOCK_DMAE}, - {"tcm", BLOCK_TCM}, - {"mcm", BLOCK_MCM}, - {"ucm", BLOCK_UCM}, - {"xcm", BLOCK_XCM}, - {"ycm", BLOCK_YCM}, - {"pcm", BLOCK_PCM}, - {"qm", BLOCK_QM}, - {"tm", BLOCK_TM}, - {"dorq", BLOCK_DORQ}, - {"brb", BLOCK_BRB}, - {"src", BLOCK_SRC}, - {"prs", BLOCK_PRS}, - {"tsdm", BLOCK_TSDM}, - {"msdm", BLOCK_MSDM}, - {"usdm", BLOCK_USDM}, - {"xsdm", BLOCK_XSDM}, - {"ysdm", BLOCK_YSDM}, - {"psdm", BLOCK_PSDM}, - {"tsem", BLOCK_TSEM}, - {"msem", BLOCK_MSEM}, - {"usem", BLOCK_USEM}, - {"xsem", BLOCK_XSEM}, - {"ysem", BLOCK_YSEM}, - {"psem", BLOCK_PSEM}, - {"rss", BLOCK_RSS}, - {"tmld", BLOCK_TMLD}, - {"muld", BLOCK_MULD}, - {"yuld", BLOCK_YULD}, - {"xyld", BLOCK_XYLD}, - {"ptld", BLOCK_PTLD}, - {"ypld", BLOCK_YPLD}, - {"prm", BLOCK_PRM}, - {"pbf_pb1", BLOCK_PBF_PB1}, - {"pbf_pb2", BLOCK_PBF_PB2}, - {"rpb", BLOCK_RPB}, - {"btb", BLOCK_BTB}, - {"pbf", BLOCK_PBF}, - {"rdif", BLOCK_RDIF}, - {"tdif", BLOCK_TDIF}, - {"cdu", BLOCK_CDU}, - {"ccfc", BLOCK_CCFC}, - {"tcfc", BLOCK_TCFC}, - {"igu", BLOCK_IGU}, - {"cau", BLOCK_CAU}, - {"rgfs", BLOCK_RGFS}, - {"rgsrc", BLOCK_RGSRC}, - {"tgfs", BLOCK_TGFS}, - {"tgsrc", BLOCK_TGSRC}, - {"umac", BLOCK_UMAC}, - {"xmac", BLOCK_XMAC}, - {"dbg", BLOCK_DBG}, - {"nig", BLOCK_NIG}, - {"wol", BLOCK_WOL}, - {"bmbn", BLOCK_BMBN}, - {"ipc", BLOCK_IPC}, - {"nwm", BLOCK_NWM}, - {"nws", BLOCK_NWS}, - {"ms", BLOCK_MS}, - {"phy_pcie", BLOCK_PHY_PCIE}, - {"led", BLOCK_LED}, - {"avs_wrap", BLOCK_AVS_WRAP}, - {"pxpreqbus", BLOCK_PXPREQBUS}, - {"misc_aeu", BLOCK_MISC_AEU}, - {"bar0_map", BLOCK_BAR0_MAP} -}; - /* Status string array */ static const char * const s_status_str[] = { /* DBG_STATUS_OK */ @@ -6322,14 +5566,12 @@ static const char * const s_status_str[] = { /* DBG_STATUS_PCI_BUF_NOT_ALLOCATED */ "A PCI buffer wasn't allocated", - /* DBG_STATUS_TOO_MANY_INPUTS */ - "Too many inputs were enabled. 
Enabled less inputs, or set 'unifyInputs' to true", + /* DBG_STATUS_INVALID_FILTER_TRIGGER_DWORDS */ + "The filter/trigger constraint dword offsets are not enabled for recording", - /* DBG_STATUS_INPUT_OVERLAP */ - "Overlapping debug bus inputs", - /* DBG_STATUS_HW_ONLY_RECORDING */ - "Cannot record Storm data since the entire recording cycle is used by HW", + /* DBG_STATUS_VFC_READ_ERROR */ + "Error reading from VFC", /* DBG_STATUS_STORM_ALREADY_ENABLED */ "The Storm was already enabled", @@ -6346,8 +5588,8 @@ static const char * const s_status_str[] = { /* DBG_STATUS_NO_INPUT_ENABLED */ "No input was enabled for recording", - /* DBG_STATUS_NO_FILTER_TRIGGER_64B */ - "Filters and triggers are not allowed when recording in 64b units", + /* DBG_STATUS_NO_FILTER_TRIGGER_256B */ + "Filters and triggers are not allowed in E4 256-bit mode", /* DBG_STATUS_FILTER_ALREADY_ENABLED */ "The filter was already enabled", @@ -6421,8 +5663,8 @@ static const char * const s_status_str[] = { /* DBG_STATUS_MCP_COULD_NOT_RESUME */ "Failed to resume MCP after halt", - /* DBG_STATUS_RESERVED2 */ - "Reserved debug status - shouldn't be returned", + /* DBG_STATUS_RESERVED0 */ + "", /* DBG_STATUS_SEMI_FIFO_NOT_EMPTY */ "Failed to empty SEMI sync FIFO", @@ -6445,17 +5687,32 @@ static const char * const s_status_str[] = { /* DBG_STATUS_DBG_ARRAY_NOT_SET */ "Debug arrays were not set (when using binary files, dbg_set_bin_ptr must be called)", - /* DBG_STATUS_FILTER_BUG */ - "Debug Bus filtering requires the -unifyInputs option (due to a HW bug)", + /* DBG_STATUS_RESERVED1 */ + "", /* DBG_STATUS_NON_MATCHING_LINES */ - "Non-matching debug lines - all lines must be of the same type (either 128b or 256b)", + "Non-matching debug lines - in E4, all lines must be of the same type (either 128b or 256b)", - /* DBG_STATUS_INVALID_TRIGGER_DWORD_OFFSET */ - "The selected trigger dword offset wasn't enabled in the recorded HW block", + /* DBG_STATUS_INSUFFICIENT_HW_IDS */ + "Insufficient HW IDs. 
Try to record less Storms/blocks", /* DBG_STATUS_DBG_BUS_IN_USE */ - "The debug bus is in use" + "The debug bus is in use", + + /* DBG_STATUS_INVALID_STORM_DBG_MODE */ + "The storm debug mode is not supported in the current chip", + + /* DBG_STATUS_OTHER_ENGINE_BB_ONLY */ + "Other engine is supported only in BB", + + /* DBG_STATUS_FILTER_SINGLE_HW_ID */ + "The configured filter mode requires a single Storm/block input", + + /* DBG_STATUS_TRIGGER_SINGLE_HW_ID */ + "The configured filter mode requires that all the constraints of a single trigger state will be defined on a single Storm/block input", + + /* DBG_STATUS_MISSING_TRIGGER_STATE_STORM */ + "When triggering on Storm data, the Storm to trigger on must be specified" }; /* Idle check severity names array */ @@ -6511,7 +5768,7 @@ static const char * const s_master_strs[] = { "xsdm", "dbu", "dmae", - "???", + "jdap", "???", "???", "???", @@ -6519,12 +5776,13 @@ static const char * const s_master_strs[] = { }; /* REG FIFO error messages array */ -static const char * const s_reg_fifo_error_strs[] = { - "grc timeout", - "address doesn't belong to any block", - "reserved address in block or write to read-only address", - "privilege/protection mismatch", - "path isolation error" +static struct reg_fifo_err s_reg_fifo_errors[] = { + {1, "grc timeout"}, + {2, "address doesn't belong to any block"}, + {4, "reserved address in block or write to read-only address"}, + {8, "privilege/protection mismatch"}, + {16, "path isolation error"}, + {17, "RSL error"} }; /* IGU FIFO sources array */ @@ -6764,8 +6022,21 @@ static u32 qed_print_section_params(u32 *dump_buf, return dump_offset; } -static struct dbg_tools_user_data * -qed_dbg_get_user_data(struct qed_hwfn *p_hwfn) +/* Returns the block name that matches the specified block ID, + * or NULL if not found. + */ +static const char *qed_dbg_get_block_name(struct qed_hwfn *p_hwfn, + enum block_id block_id) +{ + const struct dbg_block_user *block = + (const struct dbg_block_user *) + p_hwfn->dbg_arrays[BIN_BUF_DBG_BLOCKS_USER_DATA].ptr + block_id; + + return (const char *)block->name; +} + +static struct dbg_tools_user_data *qed_dbg_get_user_data(struct qed_hwfn + *p_hwfn) { return (struct dbg_tools_user_data *)p_hwfn->dbg_user_info; } @@ -6773,7 +6044,8 @@ qed_dbg_get_user_data(struct qed_hwfn *p_hwfn) /* Parses the idle check rules and returns the number of characters printed. * In case of parsing error, returns 0. */ -static u32 qed_parse_idle_chk_dump_rules(u32 *dump_buf, +static u32 qed_parse_idle_chk_dump_rules(struct qed_hwfn *p_hwfn, + u32 *dump_buf, u32 *dump_buf_end, u32 num_rules, bool print_fw_idle_chk, @@ -6801,19 +6073,18 @@ static u32 qed_parse_idle_chk_dump_rules(u32 *dump_buf, hdr = (struct dbg_idle_chk_result_hdr *)dump_buf; rule_parsing_data = - (const struct dbg_idle_chk_rule_parsing_data *) - &s_user_dbg_arrays[BIN_BUF_DBG_IDLE_CHK_PARSING_DATA]. 
- ptr[hdr->rule_id]; + (const struct dbg_idle_chk_rule_parsing_data *) + p_hwfn->dbg_arrays[BIN_BUF_DBG_IDLE_CHK_PARSING_DATA].ptr + + hdr->rule_id; parsing_str_offset = - GET_FIELD(rule_parsing_data->data, - DBG_IDLE_CHK_RULE_PARSING_DATA_STR_OFFSET); + GET_FIELD(rule_parsing_data->data, + DBG_IDLE_CHK_RULE_PARSING_DATA_STR_OFFSET); has_fw_msg = - GET_FIELD(rule_parsing_data->data, - DBG_IDLE_CHK_RULE_PARSING_DATA_HAS_FW_MSG) > 0; - parsing_str = - &((const char *) - s_user_dbg_arrays[BIN_BUF_DBG_PARSING_STRINGS].ptr) - [parsing_str_offset]; + GET_FIELD(rule_parsing_data->data, + DBG_IDLE_CHK_RULE_PARSING_DATA_HAS_FW_MSG) > 0; + parsing_str = (const char *) + p_hwfn->dbg_arrays[BIN_BUF_DBG_PARSING_STRINGS].ptr + + parsing_str_offset; lsi_msg = parsing_str; curr_reg_id = 0; @@ -6917,7 +6188,8 @@ static u32 qed_parse_idle_chk_dump_rules(u32 *dump_buf, * parsed_results_bytes. * The parsing status is returned. */ -static enum dbg_status qed_parse_idle_chk_dump(u32 *dump_buf, +static enum dbg_status qed_parse_idle_chk_dump(struct qed_hwfn *p_hwfn, + u32 *dump_buf, u32 num_dumped_dwords, char *results_buf, u32 *parsed_results_bytes, @@ -6935,8 +6207,8 @@ static enum dbg_status qed_parse_idle_chk_dump(u32 *dump_buf, *num_errors = 0; *num_warnings = 0; - if (!s_user_dbg_arrays[BIN_BUF_DBG_PARSING_STRINGS].ptr || - !s_user_dbg_arrays[BIN_BUF_DBG_IDLE_CHK_PARSING_DATA].ptr) + if (!p_hwfn->dbg_arrays[BIN_BUF_DBG_PARSING_STRINGS].ptr || + !p_hwfn->dbg_arrays[BIN_BUF_DBG_IDLE_CHK_PARSING_DATA].ptr) return DBG_STATUS_DBG_ARRAY_NOT_SET; /* Read global_params section */ @@ -6969,7 +6241,8 @@ static enum dbg_status qed_parse_idle_chk_dump(u32 *dump_buf, results_offset), "FW_IDLE_CHECK:\n"); rules_print_size = - qed_parse_idle_chk_dump_rules(dump_buf, + qed_parse_idle_chk_dump_rules(p_hwfn, + dump_buf, dump_buf_end, num_rules, true, @@ -6989,7 +6262,8 @@ static enum dbg_status qed_parse_idle_chk_dump(u32 *dump_buf, results_offset), "\nLSI_IDLE_CHECK:\n"); rules_print_size = - qed_parse_idle_chk_dump_rules(dump_buf, + qed_parse_idle_chk_dump_rules(p_hwfn, + dump_buf, dump_buf_end, num_rules, false, @@ -7101,9 +6375,8 @@ qed_mcp_trace_alloc_meta_data(struct qed_hwfn *p_hwfn, format_ptr->data = qed_read_dword_from_buf(meta_buf_bytes, &offset); - format_len = - (format_ptr->data & - MCP_TRACE_FORMAT_LEN_MASK) >> MCP_TRACE_FORMAT_LEN_SHIFT; + format_len = GET_MFW_FIELD(format_ptr->data, + MCP_TRACE_FORMAT_LEN); format_ptr->format_str = kzalloc(format_len, GFP_KERNEL); if (!format_ptr->format_str) { /* Update number of modules to be released */ @@ -7126,7 +6399,7 @@ qed_mcp_trace_alloc_meta_data(struct qed_hwfn *p_hwfn, * trace_buf - MCP trace cyclic buffer * trace_buf_size - MCP trace cyclic buffer size in bytes * data_offset - offset in bytes of the data to parse in the MCP trace cyclic - * buffer. + * buffer. * data_size - size in bytes of data to parse. * parsed_buf - destination buffer for parsed data. * parsed_results_bytes - size of parsed data in bytes. 
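+ * Each entry in the cyclic buffer starts with a header from which the
+ * format index and parameter sizes are extracted; entries whose format
+ * index is not found in the meta data are skipped rather than parsed.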
@@ -7171,9 +6444,8 @@ static enum dbg_status qed_parse_mcp_trace_buf(struct qed_hwfn *p_hwfn, /* Skip message if its index doesn't exist in the meta data */ if (format_idx >= meta->formats_num) { - u8 format_size = - (u8)((header & MFW_TRACE_PRM_SIZE_MASK) >> - MFW_TRACE_PRM_SIZE_SHIFT); + u8 format_size = (u8)GET_MFW_FIELD(header, + MFW_TRACE_PRM_SIZE); if (data_size < format_size) return DBG_STATUS_MCP_TRACE_BAD_DATA; @@ -7188,11 +6460,10 @@ static enum dbg_status qed_parse_mcp_trace_buf(struct qed_hwfn *p_hwfn, format_ptr = &meta->formats[format_idx]; for (i = 0, - param_mask = MCP_TRACE_FORMAT_P1_SIZE_MASK, - param_shift = MCP_TRACE_FORMAT_P1_SIZE_SHIFT; + param_mask = MCP_TRACE_FORMAT_P1_SIZE_MASK, param_shift = + MCP_TRACE_FORMAT_P1_SIZE_OFFSET; i < MCP_TRACE_FORMAT_MAX_PARAMS; - i++, - param_mask <<= MCP_TRACE_FORMAT_PARAM_WIDTH, + i++, param_mask <<= MCP_TRACE_FORMAT_PARAM_WIDTH, param_shift += MCP_TRACE_FORMAT_PARAM_WIDTH) { /* Extract param size (0..3) */ u8 param_size = (u8)((format_ptr->data & param_mask) >> @@ -7220,12 +6491,10 @@ static enum dbg_status qed_parse_mcp_trace_buf(struct qed_hwfn *p_hwfn, data_size -= param_size; } - format_level = (u8)((format_ptr->data & - MCP_TRACE_FORMAT_LEVEL_MASK) >> - MCP_TRACE_FORMAT_LEVEL_SHIFT); - format_module = (u8)((format_ptr->data & - MCP_TRACE_FORMAT_MODULE_MASK) >> - MCP_TRACE_FORMAT_MODULE_SHIFT); + format_level = (u8)GET_MFW_FIELD(format_ptr->data, + MCP_TRACE_FORMAT_LEVEL); + format_module = (u8)GET_MFW_FIELD(format_ptr->data, + MCP_TRACE_FORMAT_MODULE); if (format_level >= ARRAY_SIZE(s_mcp_trace_level_str)) return DBG_STATUS_MCP_TRACE_BAD_DATA; @@ -7367,7 +6636,7 @@ static enum dbg_status qed_parse_reg_fifo_dump(u32 *dump_buf, const char *section_name, *param_name, *param_str_val; u32 param_num_val, num_section_params, num_elements; struct reg_fifo_element *elements; - u8 i, j, err_val, vf_val; + u8 i, j, err_code, vf_val; u32 results_offset = 0; char vf_str[4]; @@ -7398,7 +6667,7 @@ static enum dbg_status qed_parse_reg_fifo_dump(u32 *dump_buf, /* Decode elements */ for (i = 0; i < num_elements; i++) { - bool err_printed = false; + const char *err_msg = NULL; /* Discover if element belongs to a VF or a PF */ vf_val = GET_FIELD(elements[i].data, REG_FIFO_ELEMENT_VF); @@ -7407,11 +6676,17 @@ static enum dbg_status qed_parse_reg_fifo_dump(u32 *dump_buf, else sprintf(vf_str, "%d", vf_val); + /* Find error message */ + err_code = GET_FIELD(elements[i].data, REG_FIFO_ELEMENT_ERROR); + for (j = 0; j < ARRAY_SIZE(s_reg_fifo_errors) && !err_msg; j++) + if (err_code == s_reg_fifo_errors[j].err_code) + err_msg = s_reg_fifo_errors[j].err_msg; + /* Add parsed element to parsed buffer */ results_offset += sprintf(qed_get_buf_ptr(results_buf, results_offset), - "raw: 0x%016llx, address: 0x%07x, access: %-5s, pf: %2d, vf: %s, port: %d, privilege: %-3s, protection: %-12s, master: %-4s, errors: ", + "raw: 0x%016llx, address: 0x%07x, access: %-5s, pf: %2d, vf: %s, port: %d, privilege: %-3s, protection: %-12s, master: %-4s, error: %s\n", elements[i].data, (u32)GET_FIELD(elements[i].data, REG_FIFO_ELEMENT_ADDRESS) * @@ -7428,30 +6703,8 @@ static enum dbg_status qed_parse_reg_fifo_dump(u32 *dump_buf, s_protection_strs[GET_FIELD(elements[i].data, REG_FIFO_ELEMENT_PROTECTION)], s_master_strs[GET_FIELD(elements[i].data, - REG_FIFO_ELEMENT_MASTER)]); - - /* Print errors */ - for (j = 0, - err_val = GET_FIELD(elements[i].data, - REG_FIFO_ELEMENT_ERROR); - j < ARRAY_SIZE(s_reg_fifo_error_strs); - j++, err_val >>= 1) { - if (err_val & 0x1) { - if 
(err_printed) - results_offset += - sprintf(qed_get_buf_ptr - (results_buf, - results_offset), ", "); - results_offset += - sprintf(qed_get_buf_ptr - (results_buf, results_offset), "%s", - s_reg_fifo_error_strs[j]); - err_printed = true; - } - } - - results_offset += - sprintf(qed_get_buf_ptr(results_buf, results_offset), "\n"); + REG_FIFO_ELEMENT_MASTER)], + err_msg ? err_msg : "unknown error code"); } results_offset += sprintf(qed_get_buf_ptr(results_buf, @@ -7805,27 +7058,28 @@ static enum dbg_status qed_parse_fw_asserts_dump(u32 *dump_buf, /***************************** Public Functions *******************************/ -enum dbg_status qed_dbg_user_set_bin_ptr(const u8 * const bin_ptr) +enum dbg_status qed_dbg_user_set_bin_ptr(struct qed_hwfn *p_hwfn, + const u8 * const bin_ptr) { - struct bin_buffer_hdr *buf_array = (struct bin_buffer_hdr *)bin_ptr; + struct bin_buffer_hdr *buf_hdrs = (struct bin_buffer_hdr *)bin_ptr; u8 buf_id; /* Convert binary data to debug arrays */ - for (buf_id = 0; buf_id < MAX_BIN_DBG_BUFFER_TYPE; buf_id++) { - s_user_dbg_arrays[buf_id].ptr = - (u32 *)(bin_ptr + buf_array[buf_id].offset); - s_user_dbg_arrays[buf_id].size_in_dwords = - BYTES_TO_DWORDS(buf_array[buf_id].length); - } + for (buf_id = 0; buf_id < MAX_BIN_DBG_BUFFER_TYPE; buf_id++) + qed_set_dbg_bin_buf(p_hwfn, + (enum bin_dbg_buffer_type)buf_id, + (u32 *)(bin_ptr + buf_hdrs[buf_id].offset), + buf_hdrs[buf_id].length); return DBG_STATUS_OK; } -enum dbg_status qed_dbg_alloc_user_data(struct qed_hwfn *p_hwfn) +enum dbg_status qed_dbg_alloc_user_data(struct qed_hwfn *p_hwfn, + void **user_data_ptr) { - p_hwfn->dbg_user_info = kzalloc(sizeof(struct dbg_tools_user_data), - GFP_KERNEL); - if (!p_hwfn->dbg_user_info) + *user_data_ptr = kzalloc(sizeof(struct dbg_tools_user_data), + GFP_KERNEL); + if (!(*user_data_ptr)) return DBG_STATUS_VIRT_MEM_ALLOC_FAILED; return DBG_STATUS_OK; @@ -7844,7 +7098,8 @@ enum dbg_status qed_get_idle_chk_results_buf_size(struct qed_hwfn *p_hwfn, { u32 num_errors, num_warnings; - return qed_parse_idle_chk_dump(dump_buf, + return qed_parse_idle_chk_dump(p_hwfn, + dump_buf, num_dumped_dwords, NULL, results_buf_size, @@ -7860,7 +7115,8 @@ enum dbg_status qed_print_idle_chk_results(struct qed_hwfn *p_hwfn, { u32 parsed_buf_size; - return qed_parse_idle_chk_dump(dump_buf, + return qed_parse_idle_chk_dump(p_hwfn, + dump_buf, num_dumped_dwords, results_buf, &parsed_buf_size, @@ -8031,25 +7287,28 @@ enum dbg_status qed_print_fw_asserts_results(struct qed_hwfn *p_hwfn, enum dbg_status qed_dbg_parse_attn(struct qed_hwfn *p_hwfn, struct dbg_attn_block_result *results) { - struct user_dbg_array *block_attn, *pstrings; const u32 *block_attn_name_offsets; - enum dbg_attn_type attn_type; + const char *attn_name_base; const char *block_name; + enum dbg_attn_type attn_type; u8 num_regs, i, j; num_regs = GET_FIELD(results->data, DBG_ATTN_BLOCK_RESULT_NUM_REGS); - attn_type = (enum dbg_attn_type) - GET_FIELD(results->data, - DBG_ATTN_BLOCK_RESULT_ATTN_TYPE); - block_name = s_block_info_arr[results->block_id].name; - - if (!s_user_dbg_arrays[BIN_BUF_DBG_ATTN_INDEXES].ptr || - !s_user_dbg_arrays[BIN_BUF_DBG_ATTN_NAME_OFFSETS].ptr || - !s_user_dbg_arrays[BIN_BUF_DBG_PARSING_STRINGS].ptr) + attn_type = GET_FIELD(results->data, DBG_ATTN_BLOCK_RESULT_ATTN_TYPE); + block_name = qed_dbg_get_block_name(p_hwfn, results->block_id); + if (!block_name) + return DBG_STATUS_INVALID_ARGS; + + if (!p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_INDEXES].ptr || + !p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_NAME_OFFSETS].ptr || + 
!p_hwfn->dbg_arrays[BIN_BUF_DBG_PARSING_STRINGS].ptr) return DBG_STATUS_DBG_ARRAY_NOT_SET; - block_attn = &s_user_dbg_arrays[BIN_BUF_DBG_ATTN_NAME_OFFSETS]; - block_attn_name_offsets = &block_attn->ptr[results->names_offset]; + block_attn_name_offsets = + (u32 *)p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_NAME_OFFSETS].ptr + + results->names_offset; + + attn_name_base = p_hwfn->dbg_arrays[BIN_BUF_DBG_PARSING_STRINGS].ptr; /* Go over registers with a non-zero attention status */ for (i = 0; i < num_regs; i++) { @@ -8060,18 +7319,17 @@ enum dbg_status qed_dbg_parse_attn(struct qed_hwfn *p_hwfn, reg_result = &results->reg_results[i]; num_reg_attn = GET_FIELD(reg_result->data, DBG_ATTN_REG_RESULT_NUM_REG_ATTN); - block_attn = &s_user_dbg_arrays[BIN_BUF_DBG_ATTN_INDEXES]; - bit_mapping = &((struct dbg_attn_bit_mapping *) - block_attn->ptr)[reg_result->block_attn_offset]; - - pstrings = &s_user_dbg_arrays[BIN_BUF_DBG_PARSING_STRINGS]; + bit_mapping = (struct dbg_attn_bit_mapping *) + p_hwfn->dbg_arrays[BIN_BUF_DBG_ATTN_INDEXES].ptr + + reg_result->block_attn_offset; /* Go over attention status bits */ - for (j = 0; j < num_reg_attn; j++) { + for (j = 0; j < num_reg_attn; j++, bit_idx++) { u16 attn_idx_val = GET_FIELD(bit_mapping[j].data, DBG_ATTN_BIT_MAPPING_VAL); const char *attn_name, *attn_type_str, *masked_str; - u32 attn_name_offset, sts_addr; + u32 attn_name_offset; + u32 sts_addr; /* Check if bit mask should be advanced (due to unused * bits). @@ -8083,18 +7341,19 @@ enum dbg_status qed_dbg_parse_attn(struct qed_hwfn *p_hwfn, } /* Check current bit index */ - if (!(reg_result->sts_val & BIT(bit_idx))) { - bit_idx++; + if (!(reg_result->sts_val & BIT(bit_idx))) continue; - } - /* Find attention name */ + /* An attention bit with value=1 was found + * Find attention name + */ attn_name_offset = block_attn_name_offsets[attn_idx_val]; - attn_name = &((const char *) - pstrings->ptr)[attn_name_offset]; - attn_type_str = attn_type == ATTN_TYPE_INTERRUPT ? - "Interrupt" : "Parity"; + attn_name = attn_name_base + attn_name_offset; + attn_type_str = + (attn_type == + ATTN_TYPE_INTERRUPT ? "Interrupt" : + "Parity"); masked_str = reg_result->mask_val & BIT(bit_idx) ? 
" [masked]" : ""; sts_addr = GET_FIELD(reg_result->data, @@ -8102,15 +7361,15 @@ enum dbg_status qed_dbg_parse_attn(struct qed_hwfn *p_hwfn, DP_NOTICE(p_hwfn, "%s (%s) : %s [address 0x%08x, bit %d]%s\n", block_name, attn_type_str, attn_name, - sts_addr, bit_idx, masked_str); - - bit_idx++; + sts_addr * 4, bit_idx, masked_str); } } return DBG_STATUS_OK; } +static DEFINE_MUTEX(qed_dbg_lock); + /* Wrapper for unifying the idle_chk and mcp_trace api */ static enum dbg_status qed_print_idle_chk_results_wrapper(struct qed_hwfn *p_hwfn, @@ -8287,6 +7546,17 @@ static enum dbg_status qed_dbg_dump(struct qed_hwfn *p_hwfn, &buf_size_dwords); if (rc != DBG_STATUS_OK && rc != DBG_STATUS_NVRAM_GET_IMAGE_FAILED) return rc; + + if (buf_size_dwords > MAX_DBG_FEATURE_SIZE_DWORDS) { + feature->buf_size = 0; + DP_NOTICE(p_hwfn->cdev, + "Debug feature [\"%s\"] size (0x%x dwords) exceeds maximum size (0x%x dwords)\n", + qed_features_lookup[feature_idx].name, + buf_size_dwords, MAX_DBG_FEATURE_SIZE_DWORDS); + + return DBG_STATUS_OK; + } + feature->buf_size = buf_size_dwords * sizeof(u32); feature->dump_buf = vmalloc(feature->buf_size); if (!feature->dump_buf) @@ -8487,15 +7757,23 @@ enum debug_print_features { ILT_DUMP = 13, }; -static u32 qed_calc_regdump_header(enum debug_print_features feature, +static u32 qed_calc_regdump_header(struct qed_dev *cdev, + enum debug_print_features feature, int engine, u32 feature_size, u8 omit_engine) { - /* Insert the engine, feature and mode inside the header and combine it - * with feature size. - */ - return feature_size | (feature << REGDUMP_HEADER_FEATURE_SHIFT) | - (omit_engine << REGDUMP_HEADER_OMIT_ENGINE_SHIFT) | - (engine << REGDUMP_HEADER_ENGINE_SHIFT); + u32 res = 0; + + SET_FIELD(res, REGDUMP_HEADER_SIZE, feature_size); + if (res != feature_size) + DP_NOTICE(cdev, + "Feature %d is too large (size 0x%x) and will corrupt the dump\n", + feature, feature_size); + + SET_FIELD(res, REGDUMP_HEADER_FEATURE, feature); + SET_FIELD(res, REGDUMP_HEADER_OMIT_ENGINE, omit_engine); + SET_FIELD(res, REGDUMP_HEADER_ENGINE, engine); + + return res; } int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) @@ -8511,9 +7789,11 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) for (i = 0; i < MAX_DBG_GRC_PARAMS; i++) grc_params[i] = dev_data->grc.param_val[i]; - if (cdev->num_hwfns == 1) + if (!QED_IS_CMT(cdev)) omit_engine = 1; + mutex_lock(&qed_dbg_lock); + org_engine = qed_get_debug_engine(cdev); for (cur_engine = 0; cur_engine < cdev->num_hwfns; cur_engine++) { /* Collect idle_chks and grcDump for each hw function */ @@ -8526,7 +7806,7 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) REGDUMP_HEADER_SIZE, &feature_size); if (!rc) { *(u32 *)((u8 *)buffer + offset) = - qed_calc_regdump_header(IDLE_CHK, cur_engine, + qed_calc_regdump_header(cdev, IDLE_CHK, cur_engine, feature_size, omit_engine); offset += (feature_size + REGDUMP_HEADER_SIZE); } else { @@ -8538,7 +7818,7 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) REGDUMP_HEADER_SIZE, &feature_size); if (!rc) { *(u32 *)((u8 *)buffer + offset) = - qed_calc_regdump_header(IDLE_CHK, cur_engine, + qed_calc_regdump_header(cdev, IDLE_CHK, cur_engine, feature_size, omit_engine); offset += (feature_size + REGDUMP_HEADER_SIZE); } else { @@ -8550,7 +7830,7 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) REGDUMP_HEADER_SIZE, &feature_size); if (!rc) { *(u32 *)((u8 *)buffer + offset) = - qed_calc_regdump_header(REG_FIFO, cur_engine, + qed_calc_regdump_header(cdev, REG_FIFO, cur_engine, 
feature_size, omit_engine); offset += (feature_size + REGDUMP_HEADER_SIZE); } else { @@ -8562,7 +7842,7 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) REGDUMP_HEADER_SIZE, &feature_size); if (!rc) { *(u32 *)((u8 *)buffer + offset) = - qed_calc_regdump_header(IGU_FIFO, cur_engine, + qed_calc_regdump_header(cdev, IGU_FIFO, cur_engine, feature_size, omit_engine); offset += (feature_size + REGDUMP_HEADER_SIZE); } else { @@ -8575,7 +7855,7 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) &feature_size); if (!rc) { *(u32 *)((u8 *)buffer + offset) = - qed_calc_regdump_header(PROTECTION_OVERRIDE, + qed_calc_regdump_header(cdev, PROTECTION_OVERRIDE, cur_engine, feature_size, omit_engine); offset += (feature_size + REGDUMP_HEADER_SIZE); @@ -8590,8 +7870,9 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) REGDUMP_HEADER_SIZE, &feature_size); if (!rc) { *(u32 *)((u8 *)buffer + offset) = - qed_calc_regdump_header(FW_ASSERTS, cur_engine, - feature_size, omit_engine); + qed_calc_regdump_header(cdev, FW_ASSERTS, + cur_engine, feature_size, + omit_engine); offset += (feature_size + REGDUMP_HEADER_SIZE); } else { DP_ERR(cdev, "qed_dbg_fw_asserts failed. rc = %d\n", @@ -8605,7 +7886,7 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) REGDUMP_HEADER_SIZE, &feature_size); if (!rc) { *(u32 *)((u8 *)buffer + offset) = - qed_calc_regdump_header(ILT_DUMP, + qed_calc_regdump_header(cdev, ILT_DUMP, cur_engine, feature_size, omit_engine); @@ -8619,11 +7900,15 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) /* GRC dump - must be last because when mcp stuck it will * clutter idle_chk, reg_fifo, ... */ + for (i = 0; i < MAX_DBG_GRC_PARAMS; i++) + dev_data->grc.param_val[i] = grc_params[i]; + rc = qed_dbg_grc(cdev, (u8 *)buffer + offset + REGDUMP_HEADER_SIZE, &feature_size); if (!rc) { *(u32 *)((u8 *)buffer + offset) = - qed_calc_regdump_header(GRC_DUMP, cur_engine, + qed_calc_regdump_header(cdev, GRC_DUMP, + cur_engine, feature_size, omit_engine); offset += (feature_size + REGDUMP_HEADER_SIZE); } else { @@ -8632,12 +7917,13 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) } qed_set_debug_engine(cdev, org_engine); + /* mcp_trace */ rc = qed_dbg_mcp_trace(cdev, (u8 *)buffer + offset + REGDUMP_HEADER_SIZE, &feature_size); if (!rc) { *(u32 *)((u8 *)buffer + offset) = - qed_calc_regdump_header(MCP_TRACE, cur_engine, + qed_calc_regdump_header(cdev, MCP_TRACE, cur_engine, feature_size, omit_engine); offset += (feature_size + REGDUMP_HEADER_SIZE); } else { @@ -8646,11 +7932,12 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) /* nvm cfg1 */ rc = qed_dbg_nvm_image(cdev, - (u8 *)buffer + offset + REGDUMP_HEADER_SIZE, - &feature_size, QED_NVM_IMAGE_NVM_CFG1); + (u8 *)buffer + offset + + REGDUMP_HEADER_SIZE, &feature_size, + QED_NVM_IMAGE_NVM_CFG1); if (!rc) { *(u32 *)((u8 *)buffer + offset) = - qed_calc_regdump_header(NVM_CFG1, cur_engine, + qed_calc_regdump_header(cdev, NVM_CFG1, cur_engine, feature_size, omit_engine); offset += (feature_size + REGDUMP_HEADER_SIZE); } else if (rc != -ENOENT) { @@ -8665,7 +7952,7 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) &feature_size, QED_NVM_IMAGE_DEFAULT_CFG); if (!rc) { *(u32 *)((u8 *)buffer + offset) = - qed_calc_regdump_header(DEFAULT_CFG, cur_engine, + qed_calc_regdump_header(cdev, DEFAULT_CFG, cur_engine, feature_size, omit_engine); offset += (feature_size + REGDUMP_HEADER_SIZE); } else if (rc != -ENOENT) { @@ -8681,8 +7968,8 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) 
&feature_size, QED_NVM_IMAGE_NVM_META); if (!rc) { *(u32 *)((u8 *)buffer + offset) = - qed_calc_regdump_header(NVM_META, cur_engine, - feature_size, omit_engine); + qed_calc_regdump_header(cdev, NVM_META, cur_engine, + feature_size, omit_engine); offset += (feature_size + REGDUMP_HEADER_SIZE); } else if (rc != -ENOENT) { DP_ERR(cdev, @@ -8696,7 +7983,7 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) QED_NVM_IMAGE_MDUMP); if (!rc) { *(u32 *)((u8 *)buffer + offset) = - qed_calc_regdump_header(MDUMP, cur_engine, + qed_calc_regdump_header(cdev, MDUMP, cur_engine, feature_size, omit_engine); offset += (feature_size + REGDUMP_HEADER_SIZE); } else if (rc != -ENOENT) { @@ -8705,6 +7992,8 @@ int qed_dbg_all_data(struct qed_dev *cdev, void *buffer) QED_NVM_IMAGE_MDUMP, "QED_NVM_IMAGE_MDUMP", rc); } + mutex_unlock(&qed_dbg_lock); + return 0; } @@ -8715,6 +8004,7 @@ int qed_dbg_all_data_size(struct qed_dev *cdev) u32 regs_len = 0, image_len = 0, ilt_len = 0, total_ilt_len = 0; u8 cur_engine, org_engine; + cdev->disable_ilt_dump = false; org_engine = qed_get_debug_engine(cdev); for (cur_engine = 0; cur_engine < cdev->num_hwfns; cur_engine++) { /* Engine specific */ @@ -8806,9 +8096,8 @@ int qed_dbg_feature_size(struct qed_dev *cdev, enum qed_dbg_features feature) { struct qed_hwfn *p_hwfn = &cdev->hwfns[cdev->dbg_params.engine_for_debug]; + struct qed_dbg_feature *qed_feature = &cdev->dbg_features[feature]; struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn); - struct qed_dbg_feature *qed_feature = - &cdev->dbg_params.features[feature]; u32 buf_size_dwords; enum dbg_status rc; @@ -8843,14 +8132,21 @@ void qed_set_debug_engine(struct qed_dev *cdev, int engine_number) void qed_dbg_pf_init(struct qed_dev *cdev) { - const u8 *dbg_values; + const u8 *dbg_values = NULL; + int i; /* Debug values are after init values. * The offset is the first dword of the file. */ dbg_values = cdev->firmware->data + *(u32 *)cdev->firmware->data; - qed_dbg_set_bin_ptr((u8 *)dbg_values); - qed_dbg_user_set_bin_ptr((u8 *)dbg_values); + + for_each_hwfn(cdev, i) { + qed_dbg_set_bin_ptr(&cdev->hwfns[i], dbg_values); + qed_dbg_user_set_bin_ptr(&cdev->hwfns[i], dbg_values); + } + + /* Set the hwfn to be 0 as default */ + cdev->dbg_params.engine_for_debug = 0; } void qed_dbg_pf_exit(struct qed_dev *cdev) @@ -8858,11 +8154,11 @@ void qed_dbg_pf_exit(struct qed_dev *cdev) struct qed_dbg_feature *feature = NULL; enum qed_dbg_features feature_idx; - /* Debug features' buffers may be allocated if debug feature was used - * but dump wasn't called. 
+ /* debug features' buffers may be allocated if debug feature was used + * but dump wasn't called */ for (feature_idx = 0; feature_idx < DBG_FEATURE_NUM; feature_idx++) { - feature = &cdev->dbg_params.features[feature_idx]; + feature = &cdev->dbg_features[feature_idx]; if (feature->dump_buf) { vfree(feature->dump_buf); feature->dump_buf = NULL; diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c index 3fb73ce8c1d6..7912911337d4 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_dev.c +++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c @@ -2346,7 +2346,7 @@ int qed_resc_alloc(struct qed_dev *cdev) if (rc) goto alloc_err; - rc = qed_dbg_alloc_user_data(p_hwfn); + rc = qed_dbg_alloc_user_data(p_hwfn, &p_hwfn->dbg_user_info); if (rc) goto alloc_err; } diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h index e7e3f7b31ee0..4597015b8bff 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h +++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h @@ -1860,98 +1860,6 @@ struct virt_mem_desc { /* Debug Tools HSI constants and macros */ /****************************************/ -enum block_addr { - GRCBASE_GRC = 0x50000, - GRCBASE_MISCS = 0x9000, - GRCBASE_MISC = 0x8000, - GRCBASE_DBU = 0xa000, - GRCBASE_PGLUE_B = 0x2a8000, - GRCBASE_CNIG = 0x218000, - GRCBASE_CPMU = 0x30000, - GRCBASE_NCSI = 0x40000, - GRCBASE_OPTE = 0x53000, - GRCBASE_BMB = 0x540000, - GRCBASE_PCIE = 0x54000, - GRCBASE_MCP = 0xe00000, - GRCBASE_MCP2 = 0x52000, - GRCBASE_PSWHST = 0x2a0000, - GRCBASE_PSWHST2 = 0x29e000, - GRCBASE_PSWRD = 0x29c000, - GRCBASE_PSWRD2 = 0x29d000, - GRCBASE_PSWWR = 0x29a000, - GRCBASE_PSWWR2 = 0x29b000, - GRCBASE_PSWRQ = 0x280000, - GRCBASE_PSWRQ2 = 0x240000, - GRCBASE_PGLCS = 0x0, - GRCBASE_DMAE = 0xc000, - GRCBASE_PTU = 0x560000, - GRCBASE_TCM = 0x1180000, - GRCBASE_MCM = 0x1200000, - GRCBASE_UCM = 0x1280000, - GRCBASE_XCM = 0x1000000, - GRCBASE_YCM = 0x1080000, - GRCBASE_PCM = 0x1100000, - GRCBASE_QM = 0x2f0000, - GRCBASE_TM = 0x2c0000, - GRCBASE_DORQ = 0x100000, - GRCBASE_BRB = 0x340000, - GRCBASE_SRC = 0x238000, - GRCBASE_PRS = 0x1f0000, - GRCBASE_TSDM = 0xfb0000, - GRCBASE_MSDM = 0xfc0000, - GRCBASE_USDM = 0xfd0000, - GRCBASE_XSDM = 0xf80000, - GRCBASE_YSDM = 0xf90000, - GRCBASE_PSDM = 0xfa0000, - GRCBASE_TSEM = 0x1700000, - GRCBASE_MSEM = 0x1800000, - GRCBASE_USEM = 0x1900000, - GRCBASE_XSEM = 0x1400000, - GRCBASE_YSEM = 0x1500000, - GRCBASE_PSEM = 0x1600000, - GRCBASE_RSS = 0x238800, - GRCBASE_TMLD = 0x4d0000, - GRCBASE_MULD = 0x4e0000, - GRCBASE_YULD = 0x4c8000, - GRCBASE_XYLD = 0x4c0000, - GRCBASE_PTLD = 0x5a0000, - GRCBASE_YPLD = 0x5c0000, - GRCBASE_PRM = 0x230000, - GRCBASE_PBF_PB1 = 0xda0000, - GRCBASE_PBF_PB2 = 0xda4000, - GRCBASE_RPB = 0x23c000, - GRCBASE_BTB = 0xdb0000, - GRCBASE_PBF = 0xd80000, - GRCBASE_RDIF = 0x300000, - GRCBASE_TDIF = 0x310000, - GRCBASE_CDU = 0x580000, - GRCBASE_CCFC = 0x2e0000, - GRCBASE_TCFC = 0x2d0000, - GRCBASE_IGU = 0x180000, - GRCBASE_CAU = 0x1c0000, - GRCBASE_RGFS = 0xf00000, - GRCBASE_RGSRC = 0x320000, - GRCBASE_TGFS = 0xd00000, - GRCBASE_TGSRC = 0x322000, - GRCBASE_UMAC = 0x51000, - GRCBASE_XMAC = 0x210000, - GRCBASE_DBG = 0x10000, - GRCBASE_NIG = 0x500000, - GRCBASE_WOL = 0x600000, - GRCBASE_BMBN = 0x610000, - GRCBASE_IPC = 0x20000, - GRCBASE_NWM = 0x800000, - GRCBASE_NWS = 0x700000, - GRCBASE_MS = 0x6a0000, - GRCBASE_PHY_PCIE = 0x620000, - GRCBASE_LED = 0x6b8000, - GRCBASE_AVS_WRAP = 0x6b0000, - GRCBASE_PXPREQBUS = 0x56000, - GRCBASE_MISC_AEU = 0x8000, - GRCBASE_BAR0_MAP = 
0x1c00000, - MAX_BLOCK_ADDR -}; - enum block_id { BLOCK_GRC, BLOCK_MISCS, @@ -2006,8 +1914,6 @@ enum block_id { BLOCK_MULD, BLOCK_YULD, BLOCK_XYLD, - BLOCK_PTLD, - BLOCK_YPLD, BLOCK_PRM, BLOCK_PBF_PB1, BLOCK_PBF_PB2, @@ -2021,12 +1927,9 @@ enum block_id { BLOCK_TCFC, BLOCK_IGU, BLOCK_CAU, - BLOCK_RGFS, - BLOCK_RGSRC, - BLOCK_TGFS, - BLOCK_TGSRC, BLOCK_UMAC, BLOCK_XMAC, + BLOCK_MSTAT, BLOCK_DBG, BLOCK_NIG, BLOCK_WOL, @@ -2039,8 +1942,17 @@ enum block_id { BLOCK_LED, BLOCK_AVS_WRAP, BLOCK_PXPREQBUS, - BLOCK_MISC_AEU, BLOCK_BAR0_MAP, + BLOCK_MCP_FIO, + BLOCK_LAST_INIT, + BLOCK_PRS_FC, + BLOCK_PBF_FC, + BLOCK_NIG_LB_FC, + BLOCK_NIG_LB_FC_PLLH, + BLOCK_NIG_TX_FC_PLLH, + BLOCK_NIG_TX_FC, + BLOCK_NIG_RX_FC_PLLH, + BLOCK_NIG_RX_FC, MAX_BLOCK_ID }; @@ -2057,10 +1969,13 @@ enum bin_dbg_buffer_type { BIN_BUF_DBG_ATTN_REGS, BIN_BUF_DBG_ATTN_INDEXES, BIN_BUF_DBG_ATTN_NAME_OFFSETS, - BIN_BUF_DBG_BUS_BLOCKS, + BIN_BUF_DBG_BLOCKS, + BIN_BUF_DBG_BLOCKS_CHIP_DATA, BIN_BUF_DBG_BUS_LINES, - BIN_BUF_DBG_BUS_BLOCKS_USER_DATA, + BIN_BUF_DBG_BLOCKS_USER_DATA, + BIN_BUF_DBG_BLOCKS_CHIP_USER_DATA, BIN_BUF_DBG_BUS_LINE_NAME_OFFSETS, + BIN_BUF_DBG_RESET_REGS, BIN_BUF_DBG_PARSING_STRINGS, MAX_BIN_DBG_BUFFER_TYPE }; @@ -2144,20 +2059,54 @@ enum dbg_attn_type { MAX_DBG_ATTN_TYPE }; -/* Debug Bus block data */ -struct dbg_bus_block { - u8 num_of_lines; - u8 has_latency_events; - u16 lines_offset; +/* Block debug data */ +struct dbg_block { + u8 name[15]; + u8 associated_storm_letter; }; -/* Debug Bus block user data */ -struct dbg_bus_block_user_data { - u8 num_of_lines; +/* Chip-specific block debug data */ +struct dbg_block_chip { + u8 flags; +#define DBG_BLOCK_CHIP_IS_REMOVED_MASK 0x1 +#define DBG_BLOCK_CHIP_IS_REMOVED_SHIFT 0 +#define DBG_BLOCK_CHIP_HAS_RESET_REG_MASK 0x1 +#define DBG_BLOCK_CHIP_HAS_RESET_REG_SHIFT 1 +#define DBG_BLOCK_CHIP_UNRESET_BEFORE_DUMP_MASK 0x1 +#define DBG_BLOCK_CHIP_UNRESET_BEFORE_DUMP_SHIFT 2 +#define DBG_BLOCK_CHIP_HAS_DBG_BUS_MASK 0x1 +#define DBG_BLOCK_CHIP_HAS_DBG_BUS_SHIFT 3 +#define DBG_BLOCK_CHIP_HAS_LATENCY_EVENTS_MASK 0x1 +#define DBG_BLOCK_CHIP_HAS_LATENCY_EVENTS_SHIFT 4 +#define DBG_BLOCK_CHIP_RESERVED0_MASK 0x7 +#define DBG_BLOCK_CHIP_RESERVED0_SHIFT 5 + u8 dbg_client_id; + u8 reset_reg_id; + u8 reset_reg_bit_offset; + struct dbg_mode_hdr dbg_bus_mode; + u16 reserved1; + u8 reserved2; + u8 num_of_dbg_bus_lines; + u16 dbg_bus_lines_offset; + u32 dbg_select_reg_addr; + u32 dbg_dword_enable_reg_addr; + u32 dbg_shift_reg_addr; + u32 dbg_force_valid_reg_addr; + u32 dbg_force_frame_reg_addr; +}; + +/* Chip-specific block user debug data */ +struct dbg_block_chip_user { + u8 num_of_dbg_bus_lines; u8 has_latency_events; u16 names_offset; }; +/* Block user debug data */ +struct dbg_block_user { + u8 name[16]; +}; + /* Block Debug line data */ struct dbg_bus_line { u8 data; @@ -2310,22 +2259,33 @@ enum dbg_idle_chk_severity_types { MAX_DBG_IDLE_CHK_SEVERITY_TYPES }; +/* Reset register */ +struct dbg_reset_reg { + u32 data; +#define DBG_RESET_REG_ADDR_MASK 0xFFFFFF +#define DBG_RESET_REG_ADDR_SHIFT 0 +#define DBG_RESET_REG_IS_REMOVED_MASK 0x1 +#define DBG_RESET_REG_IS_REMOVED_SHIFT 24 +#define DBG_RESET_REG_RESERVED_MASK 0x7F +#define DBG_RESET_REG_RESERVED_SHIFT 25 +}; + /* Debug Bus block data */ struct dbg_bus_block_data { - u16 data; -#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_MASK 0xF -#define DBG_BUS_BLOCK_DATA_ENABLE_MASK_SHIFT 0 -#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_MASK 0xF -#define DBG_BUS_BLOCK_DATA_RIGHT_SHIFT_SHIFT 4 -#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_MASK 0xF 
-#define DBG_BUS_BLOCK_DATA_FORCE_VALID_MASK_SHIFT 8 -#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_MASK 0xF -#define DBG_BUS_BLOCK_DATA_FORCE_FRAME_MASK_SHIFT 12 + u8 enable_mask; + u8 right_shift; + u8 force_valid_mask; + u8 force_frame_mask; + u8 dword_mask; u8 line_num; u8 hw_id; + u8 flags; +#define DBG_BUS_BLOCK_DATA_IS_256B_LINE_MASK 0x1 +#define DBG_BUS_BLOCK_DATA_IS_256B_LINE_SHIFT 0 +#define DBG_BUS_BLOCK_DATA_RESERVED_MASK 0x7F +#define DBG_BUS_BLOCK_DATA_RESERVED_SHIFT 1 }; -/* Debug Bus Clients */ enum dbg_bus_clients { DBG_BUS_CLIENT_RBCN, DBG_BUS_CLIENT_RBCP, @@ -2366,11 +2326,10 @@ enum dbg_bus_constraint_ops { /* Debug Bus trigger state data */ struct dbg_bus_trigger_state_data { - u8 data; -#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_MASK 0xF -#define DBG_BUS_TRIGGER_STATE_DATA_BLOCK_SHIFTED_ENABLE_MASK_SHIFT 0 -#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_MASK 0xF -#define DBG_BUS_TRIGGER_STATE_DATA_CONSTRAINT_DWORD_MASK_SHIFT 4 + u8 msg_len; + u8 constraint_dword_mask; + u8 storm_id; + u8 reserved; }; /* Debug Bus memory address */ @@ -2420,8 +2379,7 @@ struct dbg_bus_storm_data { struct dbg_bus_data { u32 app_version; u8 state; - u8 hw_dwords; - u16 hw_id_mask; + u8 mode_256b_en; u8 num_enabled_blocks; u8 num_enabled_storms; u8 target; @@ -2432,67 +2390,21 @@ struct dbg_bus_data { u8 adding_filter; u8 filter_pre_trigger; u8 filter_post_trigger; - u16 reserved; u8 trigger_en; - struct dbg_bus_trigger_state_data trigger_states[3]; + u8 filter_constraint_dword_mask; u8 next_trigger_state; u8 next_constraint_id; - u8 unify_inputs; + struct dbg_bus_trigger_state_data trigger_states[3]; + u8 filter_msg_len; u8 rcv_from_other_engine; + u8 blocks_dword_mask; + u8 blocks_dword_overlap; + u32 hw_id_mask; struct dbg_bus_pci_buf_data pci_buf; - struct dbg_bus_block_data blocks[88]; + struct dbg_bus_block_data blocks[132]; struct dbg_bus_storm_data storms[6]; }; -/* Debug bus filter types */ -enum dbg_bus_filter_types { - DBG_BUS_FILTER_TYPE_OFF, - DBG_BUS_FILTER_TYPE_PRE, - DBG_BUS_FILTER_TYPE_POST, - DBG_BUS_FILTER_TYPE_ON, - MAX_DBG_BUS_FILTER_TYPES -}; - -/* Debug bus frame modes */ -enum dbg_bus_frame_modes { - DBG_BUS_FRAME_MODE_0HW_4ST = 0, /* 0 HW dwords, 4 Storm dwords */ - DBG_BUS_FRAME_MODE_4HW_0ST = 3, /* 4 HW dwords, 0 Storm dwords */ - DBG_BUS_FRAME_MODE_8HW_0ST = 4, /* 8 HW dwords, 0 Storm dwords */ - MAX_DBG_BUS_FRAME_MODES -}; - -/* Debug bus other engine mode */ -enum dbg_bus_other_engine_modes { - DBG_BUS_OTHER_ENGINE_MODE_NONE, - DBG_BUS_OTHER_ENGINE_MODE_DOUBLE_BW_TX, - DBG_BUS_OTHER_ENGINE_MODE_DOUBLE_BW_RX, - DBG_BUS_OTHER_ENGINE_MODE_CROSS_ENGINE_TX, - DBG_BUS_OTHER_ENGINE_MODE_CROSS_ENGINE_RX, - MAX_DBG_BUS_OTHER_ENGINE_MODES -}; - -/* Debug bus post-trigger recording types */ -enum dbg_bus_post_trigger_types { - DBG_BUS_POST_TRIGGER_RECORD, - DBG_BUS_POST_TRIGGER_DROP, - MAX_DBG_BUS_POST_TRIGGER_TYPES -}; - -/* Debug bus pre-trigger recording types */ -enum dbg_bus_pre_trigger_types { - DBG_BUS_PRE_TRIGGER_START_FROM_ZERO, - DBG_BUS_PRE_TRIGGER_NUM_CHUNKS, - DBG_BUS_PRE_TRIGGER_DROP, - MAX_DBG_BUS_PRE_TRIGGER_TYPES -}; - -/* Debug bus SEMI frame modes */ -enum dbg_bus_semi_frame_modes { - DBG_BUS_SEMI_FRAME_MODE_0SLOW_4FAST = 0, - DBG_BUS_SEMI_FRAME_MODE_4SLOW_0FAST = 3, - MAX_DBG_BUS_SEMI_FRAME_MODES -}; - /* Debug bus states */ enum dbg_bus_states { DBG_BUS_STATE_IDLE, @@ -2510,7 +2422,9 @@ enum dbg_bus_storm_modes { DBG_BUS_STORM_MODE_DRA_W, DBG_BUS_STORM_MODE_LD_ST_ADDR, DBG_BUS_STORM_MODE_DRA_FSM, + 
DBG_BUS_STORM_MODE_FAST_DBGMUX, DBG_BUS_STORM_MODE_RH, + DBG_BUS_STORM_MODE_RH_WITH_STORE, DBG_BUS_STORM_MODE_FOC, DBG_BUS_STORM_MODE_EXT_STORE, MAX_DBG_BUS_STORM_MODES @@ -2551,13 +2465,13 @@ enum dbg_grc_params { DBG_GRC_PARAM_DUMP_CAU, DBG_GRC_PARAM_DUMP_QM, DBG_GRC_PARAM_DUMP_MCP, - DBG_GRC_PARAM_MCP_TRACE_META_SIZE, + DBG_GRC_PARAM_DUMP_DORQ, DBG_GRC_PARAM_DUMP_CFC, DBG_GRC_PARAM_DUMP_IGU, DBG_GRC_PARAM_DUMP_BRB, DBG_GRC_PARAM_DUMP_BTB, DBG_GRC_PARAM_DUMP_BMB, - DBG_GRC_PARAM_DUMP_NIG, + DBG_GRC_PARAM_RESERVD1, DBG_GRC_PARAM_DUMP_MULD, DBG_GRC_PARAM_DUMP_PRS, DBG_GRC_PARAM_DUMP_DMAE, @@ -2566,8 +2480,8 @@ enum dbg_grc_params { DBG_GRC_PARAM_DUMP_DIF, DBG_GRC_PARAM_DUMP_STATIC, DBG_GRC_PARAM_UNSTALL, - DBG_GRC_PARAM_NUM_LCIDS, - DBG_GRC_PARAM_NUM_LTIDS, + DBG_GRC_PARAM_RESERVED2, + DBG_GRC_PARAM_MCP_TRACE_META_SIZE, DBG_GRC_PARAM_EXCLUDE_ALL, DBG_GRC_PARAM_CRASH, DBG_GRC_PARAM_PARITY_SAFE, @@ -2583,19 +2497,6 @@ enum dbg_grc_params { MAX_DBG_GRC_PARAMS }; -/* Debug reset registers */ -enum dbg_reset_regs { - DBG_RESET_REG_MISCS_PL_UA, - DBG_RESET_REG_MISCS_PL_HV, - DBG_RESET_REG_MISCS_PL_HV_2, - DBG_RESET_REG_MISC_PL_UA, - DBG_RESET_REG_MISC_PL_HV, - DBG_RESET_REG_MISC_PL_PDA_VMAIN_1, - DBG_RESET_REG_MISC_PL_PDA_VMAIN_2, - DBG_RESET_REG_MISC_PL_PDA_VAUX, - MAX_DBG_RESET_REGS -}; - /* Debug status codes */ enum dbg_status { DBG_STATUS_OK, @@ -2607,15 +2508,15 @@ enum dbg_status { DBG_STATUS_INVALID_PCI_BUF_SIZE, DBG_STATUS_PCI_BUF_ALLOC_FAILED, DBG_STATUS_PCI_BUF_NOT_ALLOCATED, - DBG_STATUS_TOO_MANY_INPUTS, - DBG_STATUS_INPUT_OVERLAP, - DBG_STATUS_HW_ONLY_RECORDING, + DBG_STATUS_INVALID_FILTER_TRIGGER_DWORDS, + DBG_STATUS_NO_MATCHING_FRAMING_MODE, + DBG_STATUS_VFC_READ_ERROR, DBG_STATUS_STORM_ALREADY_ENABLED, DBG_STATUS_STORM_NOT_ENABLED, DBG_STATUS_BLOCK_ALREADY_ENABLED, DBG_STATUS_BLOCK_NOT_ENABLED, DBG_STATUS_NO_INPUT_ENABLED, - DBG_STATUS_NO_FILTER_TRIGGER_64B, + DBG_STATUS_NO_FILTER_TRIGGER_256B, DBG_STATUS_FILTER_ALREADY_ENABLED, DBG_STATUS_TRIGGER_ALREADY_ENABLED, DBG_STATUS_TRIGGER_NOT_ENABLED, @@ -2640,7 +2541,7 @@ enum dbg_status { DBG_STATUS_MCP_TRACE_NO_META, DBG_STATUS_MCP_COULD_NOT_HALT, DBG_STATUS_MCP_COULD_NOT_RESUME, - DBG_STATUS_RESERVED2, + DBG_STATUS_RESERVED0, DBG_STATUS_SEMI_FIFO_NOT_EMPTY, DBG_STATUS_IGU_FIFO_BAD_DATA, DBG_STATUS_MCP_COULD_NOT_MASK_PRTY, @@ -2648,10 +2549,15 @@ enum dbg_status { DBG_STATUS_REG_FIFO_BAD_DATA, DBG_STATUS_PROTECTION_OVERRIDE_BAD_DATA, DBG_STATUS_DBG_ARRAY_NOT_SET, - DBG_STATUS_FILTER_BUG, + DBG_STATUS_RESERVED1, DBG_STATUS_NON_MATCHING_LINES, - DBG_STATUS_INVALID_TRIGGER_DWORD_OFFSET, + DBG_STATUS_INSUFFICIENT_HW_IDS, DBG_STATUS_DBG_BUS_IN_USE, + DBG_STATUS_INVALID_STORM_DBG_MODE, + DBG_STATUS_OTHER_ENGINE_BB_ONLY, + DBG_STATUS_FILTER_SINGLE_HW_ID, + DBG_STATUS_TRIGGER_SINGLE_HW_ID, + DBG_STATUS_MISSING_TRIGGER_STATE_STORM, MAX_DBG_STATUS }; @@ -2687,9 +2593,9 @@ struct dbg_tools_data { struct dbg_bus_data bus; struct idle_chk_data idle_chk; u8 mode_enable[40]; - u8 block_in_reset[88]; + u8 block_in_reset[132]; u8 chip_id; - u8 platform_id; + u8 hw_type; u8 num_ports; u8 num_pfs_per_port; u8 num_vfs; @@ -2808,7 +2714,6 @@ struct init_qm_vport_params { enum chip_ids { CHIP_BB, CHIP_K2, - CHIP_RESERVED, MAX_CHIP_IDS }; @@ -3135,9 +3040,11 @@ struct iro { * @brief qed_dbg_set_bin_ptr - Sets a pointer to the binary data with debug * arrays. * + * @param p_hwfn - HW device data * @param bin_ptr - a pointer to the binary data with debug arrays. 
*/ -enum dbg_status qed_dbg_set_bin_ptr(const u8 * const bin_ptr); +enum dbg_status qed_dbg_set_bin_ptr(struct qed_hwfn *p_hwfn, + const u8 * const bin_ptr); /** * @brief qed_read_regs - Reads registers into a buffer (using GRC). @@ -3181,7 +3088,6 @@ bool qed_read_fw_info(struct qed_hwfn *p_hwfn, * - val is outside the allowed boundaries */ enum dbg_status qed_dbg_grc_config(struct qed_hwfn *p_hwfn, - struct qed_ptt *p_ptt, enum dbg_grc_params grc_param, u32 val); /** @@ -3502,20 +3408,36 @@ enum dbg_status qed_dbg_print_attn(struct qed_hwfn *p_hwfn, struct mcp_trace_format { u32 data; #define MCP_TRACE_FORMAT_MODULE_MASK 0x0000ffff -#define MCP_TRACE_FORMAT_MODULE_SHIFT 0 +#define MCP_TRACE_FORMAT_MODULE_OFFSET 0 #define MCP_TRACE_FORMAT_LEVEL_MASK 0x00030000 -#define MCP_TRACE_FORMAT_LEVEL_SHIFT 16 +#define MCP_TRACE_FORMAT_LEVEL_OFFSET 16 #define MCP_TRACE_FORMAT_P1_SIZE_MASK 0x000c0000 -#define MCP_TRACE_FORMAT_P1_SIZE_SHIFT 18 +#define MCP_TRACE_FORMAT_P1_SIZE_OFFSET 18 #define MCP_TRACE_FORMAT_P2_SIZE_MASK 0x00300000 -#define MCP_TRACE_FORMAT_P2_SIZE_SHIFT 20 +#define MCP_TRACE_FORMAT_P2_SIZE_OFFSET 20 #define MCP_TRACE_FORMAT_P3_SIZE_MASK 0x00c00000 -#define MCP_TRACE_FORMAT_P3_SIZE_SHIFT 22 +#define MCP_TRACE_FORMAT_P3_SIZE_OFFSET 22 #define MCP_TRACE_FORMAT_LEN_MASK 0xff000000 -#define MCP_TRACE_FORMAT_LEN_SHIFT 24 +#define MCP_TRACE_FORMAT_LEN_OFFSET 24 + char *format_str; }; +/* MCP Trace Meta data structure */ +struct mcp_trace_meta { + u32 modules_num; + char **modules; + u32 formats_num; + struct mcp_trace_format *formats; + bool is_allocated; +}; + +/* Debug Tools user data */ +struct dbg_tools_user_data { + struct mcp_trace_meta mcp_trace_meta; + const u32 *mcp_trace_user_meta_buf; +}; + /******************************** Constants **********************************/ #define MAX_NAME_LEN 16 @@ -3526,16 +3448,20 @@ struct mcp_trace_format { * @brief qed_dbg_user_set_bin_ptr - Sets a pointer to the binary data with * debug arrays. * + * @param p_hwfn - HW device data * @param bin_ptr - a pointer to the binary data with debug arrays. */ -enum dbg_status qed_dbg_user_set_bin_ptr(const u8 * const bin_ptr); +enum dbg_status qed_dbg_user_set_bin_ptr(struct qed_hwfn *p_hwfn, + const u8 * const bin_ptr); /** * @brief qed_dbg_alloc_user_data - Allocates user debug data. * * @param p_hwfn - HW device data + * @param user_data_ptr - OUT: a pointer to the allocated memory. */ -enum dbg_status qed_dbg_alloc_user_data(struct qed_hwfn *p_hwfn); +enum dbg_status qed_dbg_alloc_user_data(struct qed_hwfn *p_hwfn, + void **user_data_ptr); /** * @brief qed_dbg_get_status_str - Returns a string for the specified status. 
@@ -3808,271 +3734,6 @@ enum dbg_status qed_print_fw_asserts_results(struct qed_hwfn *p_hwfn, enum dbg_status qed_dbg_parse_attn(struct qed_hwfn *p_hwfn, struct dbg_attn_block_result *results); -/* Debug Bus blocks */ -static const u32 dbg_bus_blocks[] = { - 0x0000000f, /* grc, bb, 15 lines */ - 0x0000000f, /* grc, k2, 15 lines */ - 0x00000000, - 0x00000000, /* miscs, bb, 0 lines */ - 0x00000000, /* miscs, k2, 0 lines */ - 0x00000000, - 0x00000000, /* misc, bb, 0 lines */ - 0x00000000, /* misc, k2, 0 lines */ - 0x00000000, - 0x00000000, /* dbu, bb, 0 lines */ - 0x00000000, /* dbu, k2, 0 lines */ - 0x00000000, - 0x000f0127, /* pglue_b, bb, 39 lines */ - 0x0036012a, /* pglue_b, k2, 42 lines */ - 0x00000000, - 0x00000000, /* cnig, bb, 0 lines */ - 0x00120102, /* cnig, k2, 2 lines */ - 0x00000000, - 0x00000000, /* cpmu, bb, 0 lines */ - 0x00000000, /* cpmu, k2, 0 lines */ - 0x00000000, - 0x00000001, /* ncsi, bb, 1 lines */ - 0x00000001, /* ncsi, k2, 1 lines */ - 0x00000000, - 0x00000000, /* opte, bb, 0 lines */ - 0x00000000, /* opte, k2, 0 lines */ - 0x00000000, - 0x00600085, /* bmb, bb, 133 lines */ - 0x00600085, /* bmb, k2, 133 lines */ - 0x00000000, - 0x00000000, /* pcie, bb, 0 lines */ - 0x00e50033, /* pcie, k2, 51 lines */ - 0x00000000, - 0x00000000, /* mcp, bb, 0 lines */ - 0x00000000, /* mcp, k2, 0 lines */ - 0x00000000, - 0x01180009, /* mcp2, bb, 9 lines */ - 0x01180009, /* mcp2, k2, 9 lines */ - 0x00000000, - 0x01210104, /* pswhst, bb, 4 lines */ - 0x01210104, /* pswhst, k2, 4 lines */ - 0x00000000, - 0x01250103, /* pswhst2, bb, 3 lines */ - 0x01250103, /* pswhst2, k2, 3 lines */ - 0x00000000, - 0x00340101, /* pswrd, bb, 1 lines */ - 0x00340101, /* pswrd, k2, 1 lines */ - 0x00000000, - 0x01280119, /* pswrd2, bb, 25 lines */ - 0x01280119, /* pswrd2, k2, 25 lines */ - 0x00000000, - 0x01410109, /* pswwr, bb, 9 lines */ - 0x01410109, /* pswwr, k2, 9 lines */ - 0x00000000, - 0x00000000, /* pswwr2, bb, 0 lines */ - 0x00000000, /* pswwr2, k2, 0 lines */ - 0x00000000, - 0x001c0001, /* pswrq, bb, 1 lines */ - 0x001c0001, /* pswrq, k2, 1 lines */ - 0x00000000, - 0x014a0015, /* pswrq2, bb, 21 lines */ - 0x014a0015, /* pswrq2, k2, 21 lines */ - 0x00000000, - 0x00000000, /* pglcs, bb, 0 lines */ - 0x00120006, /* pglcs, k2, 6 lines */ - 0x00000000, - 0x00100001, /* dmae, bb, 1 lines */ - 0x00100001, /* dmae, k2, 1 lines */ - 0x00000000, - 0x015f0105, /* ptu, bb, 5 lines */ - 0x015f0105, /* ptu, k2, 5 lines */ - 0x00000000, - 0x01640120, /* tcm, bb, 32 lines */ - 0x01640120, /* tcm, k2, 32 lines */ - 0x00000000, - 0x01640120, /* mcm, bb, 32 lines */ - 0x01640120, /* mcm, k2, 32 lines */ - 0x00000000, - 0x01640120, /* ucm, bb, 32 lines */ - 0x01640120, /* ucm, k2, 32 lines */ - 0x00000000, - 0x01640120, /* xcm, bb, 32 lines */ - 0x01640120, /* xcm, k2, 32 lines */ - 0x00000000, - 0x01640120, /* ycm, bb, 32 lines */ - 0x01640120, /* ycm, k2, 32 lines */ - 0x00000000, - 0x01640120, /* pcm, bb, 32 lines */ - 0x01640120, /* pcm, k2, 32 lines */ - 0x00000000, - 0x01840062, /* qm, bb, 98 lines */ - 0x01840062, /* qm, k2, 98 lines */ - 0x00000000, - 0x01e60021, /* tm, bb, 33 lines */ - 0x01e60021, /* tm, k2, 33 lines */ - 0x00000000, - 0x02070107, /* dorq, bb, 7 lines */ - 0x02070107, /* dorq, k2, 7 lines */ - 0x00000000, - 0x00600185, /* brb, bb, 133 lines */ - 0x00600185, /* brb, k2, 133 lines */ - 0x00000000, - 0x020e0019, /* src, bb, 25 lines */ - 0x020c001a, /* src, k2, 26 lines */ - 0x00000000, - 0x02270104, /* prs, bb, 4 lines */ - 0x02270104, /* prs, k2, 4 lines */ - 0x00000000, - 0x022b0133, /* 
tsdm, bb, 51 lines */ - 0x022b0133, /* tsdm, k2, 51 lines */ - 0x00000000, - 0x022b0133, /* msdm, bb, 51 lines */ - 0x022b0133, /* msdm, k2, 51 lines */ - 0x00000000, - 0x022b0133, /* usdm, bb, 51 lines */ - 0x022b0133, /* usdm, k2, 51 lines */ - 0x00000000, - 0x022b0133, /* xsdm, bb, 51 lines */ - 0x022b0133, /* xsdm, k2, 51 lines */ - 0x00000000, - 0x022b0133, /* ysdm, bb, 51 lines */ - 0x022b0133, /* ysdm, k2, 51 lines */ - 0x00000000, - 0x022b0133, /* psdm, bb, 51 lines */ - 0x022b0133, /* psdm, k2, 51 lines */ - 0x00000000, - 0x025e010c, /* tsem, bb, 12 lines */ - 0x025e010c, /* tsem, k2, 12 lines */ - 0x00000000, - 0x025e010c, /* msem, bb, 12 lines */ - 0x025e010c, /* msem, k2, 12 lines */ - 0x00000000, - 0x025e010c, /* usem, bb, 12 lines */ - 0x025e010c, /* usem, k2, 12 lines */ - 0x00000000, - 0x025e010c, /* xsem, bb, 12 lines */ - 0x025e010c, /* xsem, k2, 12 lines */ - 0x00000000, - 0x025e010c, /* ysem, bb, 12 lines */ - 0x025e010c, /* ysem, k2, 12 lines */ - 0x00000000, - 0x025e010c, /* psem, bb, 12 lines */ - 0x025e010c, /* psem, k2, 12 lines */ - 0x00000000, - 0x026a000d, /* rss, bb, 13 lines */ - 0x026a000d, /* rss, k2, 13 lines */ - 0x00000000, - 0x02770106, /* tmld, bb, 6 lines */ - 0x02770106, /* tmld, k2, 6 lines */ - 0x00000000, - 0x027d0106, /* muld, bb, 6 lines */ - 0x027d0106, /* muld, k2, 6 lines */ - 0x00000000, - 0x02770005, /* yuld, bb, 5 lines */ - 0x02770005, /* yuld, k2, 5 lines */ - 0x00000000, - 0x02830107, /* xyld, bb, 7 lines */ - 0x027d0107, /* xyld, k2, 7 lines */ - 0x00000000, - 0x00000000, /* ptld, bb, 0 lines */ - 0x00000000, /* ptld, k2, 0 lines */ - 0x00000000, - 0x00000000, /* ypld, bb, 0 lines */ - 0x00000000, /* ypld, k2, 0 lines */ - 0x00000000, - 0x028a010e, /* prm, bb, 14 lines */ - 0x02980110, /* prm, k2, 16 lines */ - 0x00000000, - 0x02a8000d, /* pbf_pb1, bb, 13 lines */ - 0x02a8000d, /* pbf_pb1, k2, 13 lines */ - 0x00000000, - 0x02a8000d, /* pbf_pb2, bb, 13 lines */ - 0x02a8000d, /* pbf_pb2, k2, 13 lines */ - 0x00000000, - 0x02a8000d, /* rpb, bb, 13 lines */ - 0x02a8000d, /* rpb, k2, 13 lines */ - 0x00000000, - 0x00600185, /* btb, bb, 133 lines */ - 0x00600185, /* btb, k2, 133 lines */ - 0x00000000, - 0x02b50117, /* pbf, bb, 23 lines */ - 0x02b50117, /* pbf, k2, 23 lines */ - 0x00000000, - 0x02cc0006, /* rdif, bb, 6 lines */ - 0x02cc0006, /* rdif, k2, 6 lines */ - 0x00000000, - 0x02d20006, /* tdif, bb, 6 lines */ - 0x02d20006, /* tdif, k2, 6 lines */ - 0x00000000, - 0x02d80003, /* cdu, bb, 3 lines */ - 0x02db000e, /* cdu, k2, 14 lines */ - 0x00000000, - 0x02e9010d, /* ccfc, bb, 13 lines */ - 0x02f60117, /* ccfc, k2, 23 lines */ - 0x00000000, - 0x02e9010d, /* tcfc, bb, 13 lines */ - 0x02f60117, /* tcfc, k2, 23 lines */ - 0x00000000, - 0x030d0133, /* igu, bb, 51 lines */ - 0x030d0133, /* igu, k2, 51 lines */ - 0x00000000, - 0x03400106, /* cau, bb, 6 lines */ - 0x03400106, /* cau, k2, 6 lines */ - 0x00000000, - 0x00000000, /* rgfs, bb, 0 lines */ - 0x00000000, /* rgfs, k2, 0 lines */ - 0x00000000, - 0x00000000, /* rgsrc, bb, 0 lines */ - 0x00000000, /* rgsrc, k2, 0 lines */ - 0x00000000, - 0x00000000, /* tgfs, bb, 0 lines */ - 0x00000000, /* tgfs, k2, 0 lines */ - 0x00000000, - 0x00000000, /* tgsrc, bb, 0 lines */ - 0x00000000, /* tgsrc, k2, 0 lines */ - 0x00000000, - 0x00000000, /* umac, bb, 0 lines */ - 0x00120006, /* umac, k2, 6 lines */ - 0x00000000, - 0x00000000, /* xmac, bb, 0 lines */ - 0x00000000, /* xmac, k2, 0 lines */ - 0x00000000, - 0x00000000, /* dbg, bb, 0 lines */ - 0x00000000, /* dbg, k2, 0 lines */ - 0x00000000, - 0x0346012b, /* 
nig, bb, 43 lines */ - 0x0346011d, /* nig, k2, 29 lines */ - 0x00000000, - 0x00000000, /* wol, bb, 0 lines */ - 0x001c0002, /* wol, k2, 2 lines */ - 0x00000000, - 0x00000000, /* bmbn, bb, 0 lines */ - 0x00210008, /* bmbn, k2, 8 lines */ - 0x00000000, - 0x00000000, /* ipc, bb, 0 lines */ - 0x00000000, /* ipc, k2, 0 lines */ - 0x00000000, - 0x00000000, /* nwm, bb, 0 lines */ - 0x0371000b, /* nwm, k2, 11 lines */ - 0x00000000, - 0x00000000, /* nws, bb, 0 lines */ - 0x037c0009, /* nws, k2, 9 lines */ - 0x00000000, - 0x00000000, /* ms, bb, 0 lines */ - 0x00120004, /* ms, k2, 4 lines */ - 0x00000000, - 0x00000000, /* phy_pcie, bb, 0 lines */ - 0x00e5001a, /* phy_pcie, k2, 26 lines */ - 0x00000000, - 0x00000000, /* led, bb, 0 lines */ - 0x00000000, /* led, k2, 0 lines */ - 0x00000000, - 0x00000000, /* avs_wrap, bb, 0 lines */ - 0x00000000, /* avs_wrap, k2, 0 lines */ - 0x00000000, - 0x00000000, /* bar0_map, bb, 0 lines */ - 0x00000000, /* bar0_map, k2, 0 lines */ - 0x00000000, - 0x00000000, /* bar0_map, bb, 0 lines */ - 0x00000000, /* bar0_map, k2, 0 lines */ - 0x00000000, -}; - /* Win 2 */ #define GTT_BAR0_MAP_REG_IGU_CMD 0x00f000UL @@ -4086,22 +3747,28 @@ static const u32 dbg_bus_blocks[] = { #define GTT_BAR0_MAP_REG_MSDM_RAM_1024 0x012000UL /* Win 6 */ -#define GTT_BAR0_MAP_REG_USDM_RAM 0x013000UL +#define GTT_BAR0_MAP_REG_MSDM_RAM_2048 0x013000UL /* Win 7 */ -#define GTT_BAR0_MAP_REG_USDM_RAM_1024 0x014000UL +#define GTT_BAR0_MAP_REG_USDM_RAM 0x014000UL /* Win 8 */ -#define GTT_BAR0_MAP_REG_USDM_RAM_2048 0x015000UL +#define GTT_BAR0_MAP_REG_USDM_RAM_1024 0x015000UL /* Win 9 */ -#define GTT_BAR0_MAP_REG_XSDM_RAM 0x016000UL +#define GTT_BAR0_MAP_REG_USDM_RAM_2048 0x016000UL /* Win 10 */ -#define GTT_BAR0_MAP_REG_YSDM_RAM 0x017000UL +#define GTT_BAR0_MAP_REG_XSDM_RAM 0x017000UL /* Win 11 */ -#define GTT_BAR0_MAP_REG_PSDM_RAM 0x018000UL +#define GTT_BAR0_MAP_REG_XSDM_RAM_1024 0x018000UL + +/* Win 12 */ +#define GTT_BAR0_MAP_REG_YSDM_RAM 0x019000UL + +/* Win 13 */ +#define GTT_BAR0_MAP_REG_PSDM_RAM 0x01a000UL /** * @brief qed_qm_pf_mem_size - prepare QM ILT sizes @@ -11913,7 +11580,7 @@ struct e4_ystorm_iscsi_conn_ag_ctx { /* The trace in the buffer */ #define MFW_TRACE_EVENTID_MASK 0x00ffff #define MFW_TRACE_PRM_SIZE_MASK 0x0f0000 -#define MFW_TRACE_PRM_SIZE_SHIFT 16 +#define MFW_TRACE_PRM_SIZE_OFFSET 16 #define MFW_TRACE_ENTRY_SIZE 3 struct mcp_trace { @@ -12866,7 +12533,10 @@ struct public_drv_mb { #define DRV_MB_PARAM_DCBX_NOTIFY_SHIFT 3 #define DRV_MB_PARAM_NVM_PUT_FILE_BEGIN_MBI 0x3 +#define DRV_MB_PARAM_NVM_OFFSET_OFFSET 0 +#define DRV_MB_PARAM_NVM_OFFSET_MASK 0x00FFFFFF #define DRV_MB_PARAM_NVM_LEN_OFFSET 24 +#define DRV_MB_PARAM_NVM_LEN_MASK 0xFF000000 #define DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_SHIFT 0 #define DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_MASK 0x000000FF @@ -13627,6 +13297,21 @@ enum nvm_image_type { NVM_TYPE_FCOE_CFG = 0x1f, NVM_TYPE_ETH_PHY_FW1 = 0x20, NVM_TYPE_ETH_PHY_FW2 = 0x21, + NVM_TYPE_BDN = 0x22, + NVM_TYPE_8485X_PHY_FW = 0x23, + NVM_TYPE_PUB_KEY = 0x24, + NVM_TYPE_RECOVERY = 0x25, + NVM_TYPE_PLDM = 0x26, + NVM_TYPE_UPK1 = 0x27, + NVM_TYPE_UPK2 = 0x28, + NVM_TYPE_MASTER_KC = 0x29, + NVM_TYPE_BACKUP_KC = 0x2a, + NVM_TYPE_HW_DUMP = 0x2b, + NVM_TYPE_HW_DUMP_OUT = 0x2c, + NVM_TYPE_BIN_NVM_META = 0x30, + NVM_TYPE_ROM_TEST = 0xf0, + NVM_TYPE_88X33X0_PHY_FW = 0x31, + NVM_TYPE_88X33X0_PHY_SLAVE_FW = 0x32, NVM_TYPE_MAX, }; diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c index 38f7f40b3a4d..2c189c637cca 100644 --- 
a/drivers/net/ethernet/qlogic/qed/qed_main.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
@@ -2637,7 +2637,7 @@ static int qed_set_grc_config(struct qed_dev *cdev, u32 cfg_id, u32 val)
 	if (!ptt)
 		return -EAGAIN;
 
-	rc = qed_dbg_grc_config(hwfn, ptt, cfg_id, val);
+	rc = qed_dbg_grc_config(hwfn, cfg_id, val);
 
 	qed_ptt_release(hwfn, ptt);
 
diff --git a/include/linux/qed/qed_if.h b/include/linux/qed/qed_if.h
index 1b27c22d39af..8f29e0d8a7b3 100644
--- a/include/linux/qed/qed_if.h
+++ b/include/linux/qed/qed_if.h
@@ -1178,6 +1178,15 @@ struct qed_common_ops {
 #define GET_FIELD(value, name) \
 	(((value) >> (name ## _SHIFT)) & name ## _MASK)
 
+#define GET_MFW_FIELD(name, field) \
+	(((name) & (field ## _MASK)) >> (field ## _OFFSET))
+
+#define SET_MFW_FIELD(name, field, value) \
+	do { \
+		(name) &= ~(field ## _MASK); \
+		(name) |= (((value) << (field ## _OFFSET)) & (field ## _MASK));\
+	} while (0)
+
 #define DB_ADDR_SHIFT(addr) ((addr) << DB_PWM_ADDR_OFFSET_SHIFT)
 
 /* Debug print definitions */
-- 
cgit v1.2.3-71-gd317


From 6cd021a58c18a1731f7e47f83e172c0c302d65e5 Mon Sep 17 00:00:00 2001
From: Willem de Bruijn
Date: Mon, 27 Jan 2020 15:40:31 -0500
Subject: udp: segment looped gso packets correctly

Multicast and broadcast packets can be looped from egress to ingress
pre segmentation with dev_loopback_xmit. That function unconditionally
sets ip_summed to CHECKSUM_UNNECESSARY.

udp_rcv_segment segments gso packets in the udp rx path. Segmentation
usually executes on egress, and does not expect packets of this type.
__udp_gso_segment interprets !CHECKSUM_PARTIAL as CHECKSUM_NONE. But
the offsets are not correct for gso_make_checksum.

UDP GSO packets are of type CHECKSUM_PARTIAL, with their uh->check set
to the correct pseudo header checksum. Reset ip_summed to this type.
(CHECKSUM_PARTIAL is allowed on ingress, see comments in skbuff.h)

Reported-by: syzbot
Fixes: cf329aa42b66 ("udp: cope with UDP GRO packet misdirection")
Signed-off-by: Willem de Bruijn
Signed-off-by: David S. Miller
---
 include/net/udp.h | 3 +++
 1 file changed, 3 insertions(+)

(limited to 'include')

diff --git a/include/net/udp.h b/include/net/udp.h
index 44e0e52b585c..4a180f2a13e3 100644
--- a/include/net/udp.h
+++ b/include/net/udp.h
@@ -476,6 +476,9 @@ static inline struct sk_buff *udp_rcv_segment(struct sock *sk,
 	if (!inet_get_convert_csum(sk))
 		features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
 
+	if (skb->pkt_type == PACKET_LOOPBACK)
+		skb->ip_summed = CHECKSUM_PARTIAL;
+
 	/* the GSO CB lays after the UDP one, no need to save and restore any
 	 * CB fragment
 	 */
-- 
cgit v1.2.3-71-gd317
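
The GET_MFW_FIELD()/SET_MFW_FIELD() macros added to qed_if.h above invert
the order of operations used by the driver's existing GET_FIELD():
management-firmware (MFW) register layouts define their _MASK constants
already shifted into register position, so extraction masks first and
shifts second. Below is a minimal standalone sketch of how the qed_debug.c
hunks use them, reusing the MFW_TRACE_PRM_SIZE mask/offset values from the
qed_hsi.h hunk; the header value and the userspace main()/printf harness
are illustrative assumptions, not driver code:

/* Standalone sketch of the MFW field accessors. The userspace harness is
 * an illustrative assumption; the mask/offset values are taken from the
 * qed_hsi.h hunk above.
 */
#include <stdint.h>
#include <stdio.h>

#define MFW_TRACE_PRM_SIZE_MASK   0x0f0000
#define MFW_TRACE_PRM_SIZE_OFFSET 16

/* MFW layouts keep the mask in register position: mask first, then shift. */
#define GET_MFW_FIELD(name, field) \
	(((name) & (field ## _MASK)) >> (field ## _OFFSET))

#define SET_MFW_FIELD(name, field, value) \
	do { \
		(name) &= ~(field ## _MASK); \
		(name) |= (((value) << (field ## _OFFSET)) & \
			   (field ## _MASK)); \
	} while (0)

int main(void)
{
	uint32_t header = 0;

	/* Store a parameter size of 3, then read it back. */
	SET_MFW_FIELD(header, MFW_TRACE_PRM_SIZE, 3);
	printf("header = 0x%08x, prm_size = %u\n",
	       (unsigned int)header,
	       (unsigned int)GET_MFW_FIELD(header, MFW_TRACE_PRM_SIZE));

	return 0;
}

This prints header = 0x00030000, prm_size = 3. By contrast, GET_FIELD()
shifts first and masks second, because its _MASK constants are defined
right-aligned. The _SHIFT to _OFFSET renames in the series keep the two
conventions apart mechanically: after the rename, applying GET_FIELD() to
an MFW field no longer compiles, since no _SHIFT constant exists for it.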