Message-ID: <665f9d3ba5a1a_2c0e4d29423@willemb.c.googlers.com.notmuch>
Date: Tue, 04 Jun 2024 19:03:23 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Magnus Karlsson <magnus.karlsson@...il.com>,
magnus.karlsson@...el.com,
bjorn@...nel.org,
ast@...nel.org,
daniel@...earbox.net,
netdev@...r.kernel.org,
maciej.fijalkowski@...el.com,
bpf@...r.kernel.org
Cc: Magnus Karlsson <magnus.karlsson@...il.com>,
YuvalE@...ware.com
Subject: Re: [PATCH bpf 0/2] Revert "xsk: support redirect to any socket bound
to the same umem"

Magnus Karlsson wrote:
> Revert "xsk: support redirect to any socket bound to the same umem"
>
> This patch introduced a potential kernel crash when multiple napi
> instances redirect to the same AF_XDP socket. By removing the
> queue_index check, it is possible for multiple napi instances to
> access the Rx ring at the same time, resulting in a corrupted ring
> state that can lead to a crash when flushing the rings in
> __xsk_flush(). This can happen when the linked list of sockets to
> flush gets corrupted by concurrent accesses. A quick and small fix
> is unfortunately not possible, so let us revert this for now.
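
To spell out the race being described: the Rx ring producer path is
written for a single producer, so two napi instances claiming slots
concurrently can overwrite each other and publish an inconsistent
producer index. Below is a minimal userspace sketch of that pattern;
all names and types are made up for illustration, this is not the
kernel's xsk queue code.

    #include <stdint.h>

    #define RING_SIZE 256u

    struct fake_desc { uint64_t addr; uint32_t len; };

    struct fake_rx_ring {
            uint32_t cached_prod;   /* producer-private in a single-producer design */
            uint32_t prod;          /* index shared with the consumer */
            struct fake_desc ring[RING_SIZE];
    };

    /* Correct only with one producer. If two napi contexts call this
     * concurrently, both can read the same cached_prod, overwrite each
     * other's descriptor, and leave prod out of sync with the ring
     * contents.
     */
    static void fake_rx_produce(struct fake_rx_ring *q, struct fake_desc d)
    {
            uint32_t idx = q->cached_prod++;        /* not atomic */

            q->ring[idx & (RING_SIZE - 1)] = d;
            __atomic_store_n(&q->prod, q->cached_prod, __ATOMIC_RELEASE);
    }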

This is a very useful feature: it allows using AF_XDP sockets with a
standard RSS NIC configuration.

Not all AF_XDP use cases require the absolute highest packet rate.

Can this be addressed with an optional spinlock on the RxQ, only for
this case?
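
Roughly what I have in mind, continuing the hypothetical sketch above
(pthread_mutex_t stands in for a kernel spinlock; the shared flag and
function names are invented, not existing xsk API):

    #include <pthread.h>
    #include <stdbool.h>

    struct fake_rx_ring_mp {
            struct fake_rx_ring rq;       /* single-producer ring from the sketch above */
            bool shared;                  /* set when several queues can redirect here  */
            pthread_mutex_t prod_lock;    /* stand-in for a kernel spinlock             */
    };

    static void fake_rx_produce_maybe_locked(struct fake_rx_ring_mp *q,
                                             struct fake_desc d)
    {
            /* Take the lock only on sockets reachable from more than
             * one queue/napi instance; the common single-queue path
             * stays lock-free.
             */
            if (q->shared)
                    pthread_mutex_lock(&q->prod_lock);

            fake_rx_produce(&q->rq, d);

            if (q->shared)
                    pthread_mutex_unlock(&q->prod_lock);
    }

Single-queue sockets would never set the flag, so the existing fast
path would not pay for the lock.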

If there is no sufficiently simple fix in the short term, do you plan
to reintroduce this in another form later?