Message-ID: <4b38257b-c968-4128-bf4f-1a677da37972@redhat.com>
Date: Mon, 22 Sep 2025 11:27:58 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski
<kuba@...nel.org>, Simon Horman <horms@...nel.org>,
Willem de Bruijn <willemb@...gle.com>, Kuniyuki Iwashima
<kuniyu@...gle.com>, netdev@...r.kernel.org, eric.dumazet@...il.com
Subject: Re: [PATCH v3 net-next] udp: remove busylock and add per NUMA queues
On 9/22/25 10:47 AM, Eric Dumazet wrote:
> On Mon, Sep 22, 2025 at 1:37 AM Paolo Abeni <pabeni@...hat.com> wrote:
>> What if the user-space process never reads the packets (or is very
>> slow)? I'm under the impression that the maximum rcvbuf occupancy
>> would be limited only by memory accounting, and not by sk_rcvbuf?
>
> Well, as soon as sk->sk_rmem_alloc is bigger than sk_rcvbuf, all
> further incoming packets are dropped.
>
> As you said, memory accounting is there.
>
> This could matter if we had thousands of UDP sockets under flood at
> the same time, but that would require thousands of CPUs and/or NIC rx
> queues.
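The per-socket bound Eric describes can be modeled in a few lines of
plain C. The sketch below is a userspace illustration only:
sk_rmem_alloc and sk_rcvbuf are the fields named in the thread, while
struct sock_model and accept_datagram() are made-up stand-ins, not the
kernel implementation.

/*
 * Userspace model of the drop rule: once the socket's receive-memory
 * charge exceeds sk_rcvbuf, further datagrams are refused, so a reader
 * that never drains its queue cannot grow the backlog without bound.
 */
#include <stdbool.h>
#include <stdio.h>

struct sock_model {
	long sk_rmem_alloc;	/* bytes currently charged to the socket */
	long sk_rcvbuf;		/* receive buffer limit (SO_RCVBUF) */
};

static bool accept_datagram(struct sock_model *sk, long truesize)
{
	/* Refuse new packets once the existing charge exceeds the limit. */
	if (sk->sk_rmem_alloc > sk->sk_rcvbuf)
		return false;

	sk->sk_rmem_alloc += truesize;	/* charge the accepted packet */
	return true;
}

int main(void)
{
	struct sock_model sk = { .sk_rmem_alloc = 0, .sk_rcvbuf = 212992 };
	long accepted = 0;

	/* A slow (or absent) reader: keep enqueueing until the limit bites. */
	while (accept_datagram(&sk, 2048))
		accepted++;

	printf("accepted %ld datagrams, charge now %ld bytes\n",
	       accepted, sk.sk_rmem_alloc);
	return 0;
}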
Ah, I initially misread:

	rmem += atomic_read(&udp_prod_queue->rmem_alloc);

as:

	rmem = atomic_read(&udp_prod_queue->rmem_alloc);

and was confused about the overall boundary check. LGTM now, thanks!
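The boundary check this "+=" feeds into can be sketched the same way.
In the model below, udp_prod_queue and rmem_alloc follow the snippet
quoted above, while the surrounding structures, the fixed node count
and over_rcvbuf() are illustrative assumptions rather than the patch
code. The point is that bytes still parked in the per-NUMA producer
queues are added on top of sk_rmem_alloc before the comparison with
sk_rcvbuf, so the total backlog stays bounded by the receive buffer.

/*
 * Userspace model of the cumulative rcvbuf check: sum the bytes already
 * charged to the socket and the bytes still waiting in each per-NUMA
 * producer queue, then compare the total against sk_rcvbuf.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_NUMA_NODES 4	/* arbitrary for the model */

struct udp_prod_queue_model {
	long rmem_alloc;	/* bytes queued by producers, not yet spliced */
};

struct sock_model {
	long sk_rmem_alloc;	/* bytes already in the receive queue */
	long sk_rcvbuf;		/* receive buffer limit */
	struct udp_prod_queue_model prod_queues[MAX_NUMA_NODES];
};

static bool over_rcvbuf(const struct sock_model *sk)
{
	long rmem = sk->sk_rmem_alloc;
	int node;

	/* The "+=" line discussed above: accumulate, do not overwrite. */
	for (node = 0; node < MAX_NUMA_NODES; node++)
		rmem += sk->prod_queues[node].rmem_alloc;

	return rmem > sk->sk_rcvbuf;
}

int main(void)
{
	struct sock_model sk = {
		.sk_rmem_alloc = 150000,
		.sk_rcvbuf = 212992,
		.prod_queues = { { 40000 }, { 30000 }, { 0 }, { 0 } },
	};

	/* 150000 + 40000 + 30000 = 220000 > 212992, so further packets drop. */
	printf("over rcvbuf: %d\n", over_rcvbuf(&sk));
	return 0;
}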
Paolo