Message-ID: <6660bc547b59b_35916d294d1@willemb.c.googlers.com.notmuch>
Date: Wed, 05 Jun 2024 15:28:20 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Magnus Karlsson <magnus.karlsson@...il.com>, 
 Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc: magnus.karlsson@...el.com, 
 bjorn@...nel.org, 
 ast@...nel.org, 
 daniel@...earbox.net, 
 netdev@...r.kernel.org, 
 maciej.fijalkowski@...el.com, 
 bpf@...r.kernel.org, 
 YuvalE@...ware.com
Subject: Re: [PATCH bpf 0/2] Revert "xsk: support redirect to any socket bound
 to the same umem"

Magnus Karlsson wrote:
> On Wed, 5 Jun 2024 at 01:03, Willem de Bruijn
> <willemdebruijn.kernel@...il.com> wrote:
> >
> > Magnus Karlsson wrote:
> > > Revert "xsk: support redirect to any socket bound to the same umem"
> > >
> > > This patch introduced a potential kernel crash when multiple NAPI
> > > instances redirect to the same AF_XDP socket. By removing the
> > > queue_index check, it became possible for multiple NAPI instances
> > > to access the Rx ring at the same time, resulting in a corrupted
> > > ring state that can lead to a crash when flushing the rings in
> > > __xsk_flush(). This can happen when the linked list of sockets to
> > > flush gets corrupted by concurrent accesses. A quick and small fix
> > > is unfortunately not possible, so let us revert this for now.
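
To make the race concrete: this is the classic lost-update race on a
single-producer ring. Below is a minimal userspace sketch, plain
pthreads rather than the real xsk ring code, in which two "napi"
threads race on the producer index. The data race is intentional
(formally undefined behavior); that is the point being illustrated.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SZ 4096            /* power of two, as in the xsk rings */

struct spsc_ring {
        uint32_t prod;          /* producer index, owned by ONE producer */
        uint32_t cons;
        uint64_t desc[RING_SZ];
};

static struct spsc_ring ring;

static void *napi_producer(void *arg)
{
        for (int i = 0; i < 100000; i++) {
                uint32_t p = ring.prod;  /* load */
                ring.desc[p & (RING_SZ - 1)] = (uintptr_t)arg;
                ring.prod = p + 1;       /* store: updates are lost when raced */
        }
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, napi_producer, (void *)1);
        pthread_create(&b, NULL, napi_producer, (void *)2);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        /* With a true single producer, prod would always be 200000.
         * With two racing producers it almost never is. */
        printf("prod = %u (expected 200000)\n", ring.prod);
        return 0;
}

Compile with -pthread and run a few times; prod almost never reaches
200000, and in the kernel the same class of corruption hits the ring
state and the flush list instead of a counter.
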
> >
> > This is a very useful feature: being able to use AF_XDP sockets
> > with a standard RSS NIC configuration.
> 
> I completely agree.
> 
> > Not all AF_XDP use cases require the absolute highest packet rate.
> >
> > Can this be addressed with an optional spinlock on the RxQ, only for
> > this case?
> 
> Yes, or with an MPSC ring implementation.
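
For reference, an MPSC ring could look roughly like the sketch below:
producers CAS-claim a slot, then flip a per-slot valid flag so the
single consumer never reads a slot that is still being filled. This is
a rough sketch with C11 atomics; the layout and names are hypothetical,
not the existing xsk rings.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define RING_SZ 4096

struct mpsc_ring {
        _Atomic uint32_t prod;          /* claimed via CAS by any producer */
        _Atomic uint32_t cons;          /* advanced by the one consumer */
        struct {
                uint64_t desc;
                _Atomic bool valid;     /* set once desc is fully written */
        } slot[RING_SZ];
};

static bool mpsc_produce(struct mpsc_ring *r, uint64_t d)
{
        uint32_t p = atomic_load(&r->prod);

        do {
                if (p - atomic_load(&r->cons) >= RING_SZ)
                        return false;           /* ring full */
        } while (!atomic_compare_exchange_weak(&r->prod, &p, p + 1));

        r->slot[p & (RING_SZ - 1)].desc = d;
        atomic_store(&r->slot[p & (RING_SZ - 1)].valid, true);
        return true;
}

static bool mpsc_consume(struct mpsc_ring *r, uint64_t *d)
{
        uint32_t c = atomic_load(&r->cons);
        uint32_t i = c & (RING_SZ - 1);

        if (!atomic_load(&r->slot[i].valid))
                return false;                   /* nothing ready yet */
        *d = r->slot[i].desc;
        atomic_store(&r->slot[i].valid, false);
        atomic_store(&r->cons, c + 1);
        return true;
}

The cost is an extra flag per slot and a CAS per produce, which is why
the optional spinlock is the easier first step.
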
> 
> > If there is no simple enough fix in the short term, do you plan to
> > reintroduce this in another form later?
> 
> Yuval and I are looking into a solution based on an optional
> spinlock, since it is easier to pull off than an MPSC ring. The
> discussion is ongoing on the xdp-newbies list [0], but as soon as we
> have a first patch, we will post it here for review and debate.
> 
> [0] https://lore.kernel.org/xdp-newbies/8100DBDC-0B7C-49DB-9995-6027F6E63147@radware.com/

Glad to hear that it's intended to be supported, and even being worked
on, thanks! I'll follow the conversation there.
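
For the archive, roughly what I imagine the fast path could look like
with such an optional lock, a sketch only: the assumption is that the
lock is taken solely when the socket was bound in a mode that allows
multiple producers. The xsk_sock, shared, and prod_lock names here are
hypothetical, not the real kernel structures, and an actual patch
would use the kernel's spinlock_t rather than pthreads.

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

#define RING_SZ 4096

struct rx_ring {
        uint32_t prod;
        uint32_t cons;
        uint64_t desc[RING_SZ];
};

/* Hypothetical socket state: 'shared' would be set at bind time when
 * more than one napi instance may produce into this ring. */
struct xsk_sock {
        struct rx_ring rx;
        bool shared;
        pthread_spinlock_t prod_lock;
};

/* Producer path: sockets bound to a single queue keep today's
 * lockless behavior; only the shared case pays for the lock. */
static bool xsk_rx_produce(struct xsk_sock *xs, uint64_t d)
{
        bool ok = true;

        if (xs->shared)
                pthread_spin_lock(&xs->prod_lock);

        if (xs->rx.prod - xs->rx.cons >= RING_SZ) {
                ok = false;                     /* ring full */
        } else {
                xs->rx.desc[xs->rx.prod & (RING_SZ - 1)] = d;
                xs->rx.prod++;
        }

        if (xs->shared)
                pthread_spin_unlock(&xs->prod_lock);

        return ok;
}

int main(void)
{
        struct xsk_sock xs = { .shared = true };

        pthread_spin_init(&xs.prod_lock, PTHREAD_PROCESS_PRIVATE);
        return !xsk_rx_produce(&xs, 0xdeadbeef);
}

The appeal of this shape is that the unshared case stays exactly as
fast as it is today; only sockets that opt into shared redirect take
the branch and, when shared, the lock.
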
