Message-ID: <20231223025554.2316836-27-aleksander.lobakin@intel.com>
Date: Sat, 23 Dec 2023 03:55:46 +0100
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: "David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>,
	Paolo Abeni <pabeni@...hat.com>
Cc: Alexander Lobakin <aleksander.lobakin@...el.com>,
	Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
	Michal Kubiak <michal.kubiak@...el.com>,
	Larysa Zaremba <larysa.zaremba@...el.com>,
	Alexei Starovoitov <ast@...nel.org>,
	Daniel Borkmann <daniel@...earbox.net>,
	Willem de Bruijn <willemdebruijn.kernel@...il.com>,
	intel-wired-lan@...ts.osuosl.org,
	netdev@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: [PATCH RFC net-next 26/34] xdp: add generic XSk xdp_buff -> skb conversion

Just as with converting an &xdp_buff to an skb on Rx, the code which
allocates a new skb and copies the XSk frame into it is identical
across the drivers, so make it generic.
Note that this time, skb_record_rx_queue() is called unconditionally,
as this function is not intended to be called with a non-registered
RxQ info.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@...el.com>
---
 include/net/xdp.h | 11 ++++++++++-
 net/core/xdp.c    | 41 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 51 insertions(+), 1 deletion(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 66854b755b58..23ada4bb0e69 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -273,7 +273,16 @@ void xdp_warn(const char *msg, const char *func, const int line);
 
 struct sk_buff *__xdp_build_skb_from_buff(struct sk_buff *skb,
 					  const struct xdp_buff *xdp);
-#define xdp_build_skb_from_buff(xdp)	__xdp_build_skb_from_buff(NULL, xdp)
+struct sk_buff *xdp_build_skb_from_zc(struct napi_struct *napi,
+				      struct xdp_buff *xdp);
+
+static inline struct sk_buff *xdp_build_skb_from_buff(struct xdp_buff *xdp)
+{
+	if (xdp->rxq->mem.type == MEM_TYPE_XSK_BUFF_POOL)
+		return xdp_build_skb_from_zc(NULL, xdp);
+
+	return __xdp_build_skb_from_buff(NULL, xdp);
+}
 
 struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp);
 struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 8ef1d735a7eb..2bdb1fb8a9b8 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -21,6 +21,8 @@
 #include <trace/events/xdp.h>
 #include <net/xdp_sock_drv.h>
 
+#include "dev.h"
+
 #define REG_STATE_NEW		0x0
 #define REG_STATE_REGISTERED	0x1
 #define REG_STATE_UNREGISTERED	0x2
@@ -647,6 +649,45 @@ struct sk_buff *__xdp_build_skb_from_buff(struct sk_buff *skb,
 }
 EXPORT_SYMBOL_GPL(__xdp_build_skb_from_buff);
 
+struct sk_buff *xdp_build_skb_from_zc(struct napi_struct *napi,
+				      struct xdp_buff *xdp)
+{
+	const struct xdp_rxq_info *rxq = xdp->rxq;
+	u32 totalsize, metasize;
+	struct sk_buff *skb;
+
+	if (!napi) {
+		napi = napi_by_id(rxq->napi_id);
+		if (unlikely(!napi))
+			return NULL;
+	}
+
+	totalsize = xdp->data_end - xdp->data_meta;
+
+	skb = __napi_alloc_skb(napi, totalsize, GFP_ATOMIC | __GFP_NOWARN);
+	if (unlikely(!skb))
+		return NULL;
+
+	net_prefetch(xdp->data_meta);
+
+	memcpy(__skb_put(skb, totalsize), xdp->data_meta,
+	       ALIGN(totalsize, sizeof(long)));
+
+	metasize = xdp->data - xdp->data_meta;
+	if (metasize) {
+		skb_metadata_set(skb, metasize);
+		__skb_pull(skb, metasize);
+	}
+
+	skb_record_rx_queue(skb, rxq->queue_index);
+	skb->protocol = eth_type_trans(skb, rxq->dev);
+
+	xsk_buff_free(xdp);
+
+	return skb;
+}
+EXPORT_SYMBOL_GPL(xdp_build_skb_from_zc);
+
 struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
 					   struct sk_buff *skb,
 					   struct net_device *dev)
-- 
2.43.0
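
For context, below is a minimal sketch (not part of the patch) of how a
driver's AF_XDP zero-copy Rx path could consume the new helper after an
XDP_PASS verdict. mydrv_rx_zc_pass() and its surroundings are
hypothetical; only xdp_build_skb_from_buff(), xsk_buff_free() and
napi_gro_receive() come from the kernel / this series:

/* Hypothetical call site, assuming the xdp_rxq_info was registered
 * with MEM_TYPE_XSK_BUFF_POOL and a valid napi_id, so the inline
 * wrapper above dispatches to xdp_build_skb_from_zc().
 */
static void mydrv_rx_zc_pass(struct napi_struct *napi, struct xdp_buff *xdp)
{
	struct sk_buff *skb;

	skb = xdp_build_skb_from_buff(xdp);
	if (unlikely(!skb)) {
		/* On failure, the helper has not freed the XSk buffer;
		 * it still belongs to the driver and must be recycled.
		 */
		xsk_buff_free(xdp);
		return;
	}

	/* On success, the helper has already consumed the XSk buffer
	 * via xsk_buff_free(), so only the skb is handed onwards.
	 */
	napi_gro_receive(napi, skb);
}

Since the wrapper passes a NULL napi, xdp_build_skb_from_zc() resolves
it via napi_by_id(rxq->napi_id), which is safe from the driver's NAPI
poll context where this would run.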