Message-ID: <aHUqR5_NoU8BYbz5@mini-arch>
Date: Mon, 14 Jul 2025 09:03:19 -0700
From: Stanislav Fomichev <stfomichev@...il.com>
To: Jason Xing <kerneljasonxing@...il.com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, bjorn@...nel.org, magnus.karlsson@...el.com,
maciej.fijalkowski@...el.com, jonathan.lemon@...il.com,
sdf@...ichev.me, ast@...nel.org, daniel@...earbox.net,
hawk@...nel.org, john.fastabend@...il.com, joe@...a.to,
willemdebruijn.kernel@...il.com, bpf@...r.kernel.org,
netdev@...r.kernel.org, Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH net-next] xsk: skip validating skb list in xmit path
On 07/13, Jason Xing wrote:
> From: Jason Xing <kernelxing@...cent.com>
>
> For xsk in copy mode, there is no need to validate and check the skb in
> validate_xmit_skb_list(), because xsk_build_skb() neither prepares the
> fields those checks rely on nor needs to. Xsk is only responsible for
> delivering raw data from userspace to the driver.
So __dev_direct_xmit() was taken out of af_packet in commit 865b03f21162
("dev: packet: make packet_direct_xmit a common function"), and the call
to validate_xmit_skb_list() was added in 104ba78c9880 ("packet: on direct_xmit,
limit tso and csum to supported devices") to support TSO. Since we don't
support tso/vlan offloads in xsk_build_skb(), removing validate_xmit_skb_list()
seems fair.
Although, again, if you care about performance, why not use zerocopy
mode?
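For reference, the equivalent zero-copy run with the xdpsock sample would
look something like the following (assuming the NIC/driver supports
zero-copy; note that -z replaces -S, since zero-copy implies native mode):

  ./xdpsock -i enp2s0f0np0 -t -z -s 64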
> Skipping these checks measurably speeds up transmission, which matters
> in this extremely hot path.
>
> Performance-wise, I used './xdpsock -i enp2s0f0np0 -t -S -s 64' to verify
> this and measured on a machine with the ixgbe driver. Throughput stably
> goes up by 5.48%, as shown below:
> Before:
>  sock0@...2s0f0np0:0 txonly xdp-skb
>                    pps            pkts           1.00
> rx                 0              0
> tx                 1,187,410      3,513,536
>
> After:
>  sock0@...2s0f0np0:0 txonly xdp-skb
>                    pps            pkts           1.00
> rx                 0              0
> tx                 1,252,590      2,459,456
>
> This patch also removes a total of ~4% of CPU consumption, as observed
> with perf:
> |--2.97%--validate_xmit_skb
> | |
> | --1.76%--netif_skb_features
> | |
> | --0.65%--skb_network_protocol
> |
> |--1.06%--validate_xmit_xfrm
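(A breakdown like the above can be collected along these lines; the exact
invocation is a guess, as the patch doesn't say how it was gathered:

  perf record -a -g -- sleep 10     # while xdpsock is transmitting
  perf report --stdio --no-children
)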
It is a bit surprising that the mostly no-op validate_xmit_skb_list()
takes 4% of the cycles, with netif_skb_features() taking ~2%. Any idea
why? Is it an unoptimized kernel? Which driver is it?
> Signed-off-by: Jason Xing <kernelxing@...cent.com>
> ---
> include/linux/netdevice.h | 4 ++--
> net/core/dev.c | 10 ++++++----
> net/xdp/xsk.c | 2 +-
> 3 files changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index a80d21a14612..2df44c22406c 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -3351,7 +3351,7 @@ u16 dev_pick_tx_zero(struct net_device *dev, struct sk_buff *skb,
> struct net_device *sb_dev);
>
> int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev);
> -int __dev_direct_xmit(struct sk_buff *skb, u16 queue_id);
> +int __dev_direct_xmit(struct sk_buff *skb, u16 queue_id, bool validate);
>
> static inline int dev_queue_xmit(struct sk_buff *skb)
> {
> @@ -3368,7 +3368,7 @@ static inline int dev_direct_xmit(struct sk_buff *skb, u16 queue_id)
> {
> int ret;
>
> - ret = __dev_direct_xmit(skb, queue_id);
> + ret = __dev_direct_xmit(skb, queue_id, true);
> if (!dev_xmit_complete(ret))
> kfree_skb(skb);
> return ret;
Implementation-wise, would it be better to move the call to
validate_xmit_skb_list() from __dev_direct_xmit() into dev_direct_xmit()
(and the few other callers of __dev_direct_xmit())? That would avoid the
new flag.