Message-ID: <CANn89iJ-Xqb2uOZwyatq-6gMHPVt0xga_dypiF_X8Z_L0eao4w@mail.gmail.com>
Date: Fri, 22 Aug 2025 06:20:25 -0700
From: Eric Dumazet <edumazet@...gle.com>
To: Balazs Scheidler <bazsi77@...il.com>
Cc: netdev@...r.kernel.org, pabeni@...hat.com
Subject: Re: [RFC, RESEND] UDP receive path batching improvement
On Fri, Aug 22, 2025 at 6:10 AM Eric Dumazet <edumazet@...gle.com> wrote:
>
>
> Can you post
>
> ss -aum src :1000 <replace 1000 with your UDP source port>
>
> We will check the dXXXX output (number of drops), per socket.
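(For reference, the per-socket drop counter shows up as the `d` field inside the `skmem:(...)` section of `ss -m` output. A sketch of pulling it out with sed; the sample line below is illustrative, not taken from the machine under test:)

```shell
# Extract the per-socket drop counter (the "d" field) from an ss skmem line.
# The sample line is illustrative; run `ss -aum src :PORT` to get real ones.
line='skmem:(r0,rb212992,t0,tb212992,f0,w0,o0,bl0,d4304)'
drops=$(printf '%s\n' "$line" | sed -n 's/.*,d\([0-9]*\)).*/\1/p')
echo "$drops"
```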
Small experiment:
otrv5:/home/edumazet# ./super_netperf 10 -t UDP_STREAM -H otrv6 -l10
-- -n -P,1000 -m 1200
4304
If I remove the problematic sk_drops update:
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index efd742279289fc13aec9369d0f01a3be3aa73151..8976399d4e52f21058f74fde13d46e35c7617deb 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1575,7 +1575,8 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
atomic_sub(skb->truesize, &sk->sk_rmem_alloc);
drop:
- atomic_inc(&sk->sk_drops);
+// Find a better way to make this operation not too expensive.
+// atomic_inc(&sk->sk_drops);
busylock_release(busy);
return err;
}
otrv5:/home/edumazet# ./super_netperf 10 -t UDP_STREAM -H otrv6 -l10
-- -n -P,1000 -m 1200
6076
So there is definitely room for a big improvement here.