Message-ID: <87sfaujgvd.fsf@toke.dk>
Date: Wed, 14 Jun 2023 15:47:02 +0200
From: Toke Høiland-Jørgensen <toke@...nel.org>
To: Alexander Lobakin <aleksander.lobakin@...el.com>, Maciej Fijalkowski
 <maciej.fijalkowski@...el.com>
Cc: netdev@...r.kernel.org, anthony.l.nguyen@...el.com,
 intel-wired-lan@...ts.osuosl.org, magnus.karlsson@...el.com,
 fred@...udflare.com
Subject: Re: [Intel-wired-lan] [PATCH iwl-next] ice: allow hot-swapping XDP
 programs

Alexander Lobakin <aleksander.lobakin@...el.com> writes:

> From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> Date: Wed, 14 Jun 2023 14:50:28 +0200
>
>> On Wed, Jun 14, 2023 at 02:40:07PM +0200, Alexander Lobakin wrote:
>>> From: Toke Høiland-Jørgensen <toke@...nel.org>
>>> Date: Tue, 13 Jun 2023 19:59:37 +0200
>
> [...]
>
>>> What if a NAPI polling cycle is being run on one core while at the very
>>> same moment I'm replacing the XDP prog on another core? Not in terms of
>>> pointer tearing, I see now that this is handled correctly, but in terms
>>> of refcounts? Can't bpf_prog_put() free it while the polling is still
>>> active?
>> 
>> Hmm, you mean we should do bpf_prog_put() *after* we update the bpf_prog on
>> ice_rx_ring? I think that is a fair point, as we don't bump the refcount
>> for each Rx ring that holds a pointer to the bpf_prog; we just rely on the
>> main one from the VSI.
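
A minimal sketch of the ordering discussed here: publish the new prog on
every Rx ring first, and only then drop the old reference. The names
(ice_vsi, vsi->xdp_prog, rx_rings, num_rxq) loosely follow the ice driver
and are illustrative only, not the actual patch:

	static void xdp_swap_prog_sketch(struct ice_vsi *vsi,
					 struct bpf_prog *new_prog)
	{
		struct bpf_prog *old_prog;
		int i;

		/* publish the new prog everywhere readers look... */
		old_prog = xchg(&vsi->xdp_prog, new_prog);
		for (i = 0; i < vsi->num_rxq; i++)
			WRITE_ONCE(vsi->rx_rings[i]->xdp_prog, new_prog);

		/* ...and only then drop the reference on the old one */
		if (old_prog)
			bpf_prog_put(old_prog);
	}
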
>
> Not even after we update it there. I believe we should synchronize NAPI
> cycles with the BPF prog update (e.g. have a synchronize_rcu() before the
> put, so that the config path waits until there is no polling cycle still
> holding an on-stack pointer; would that be enough?).
>
> NAPI polling starts
> |<--- XDP prog pointer is placed on the stack and used from there
> |
> |  <--- here you do xchg() and bpf_prog_put()
> |  <--- here you update XDP progs on the rings
> |
> |<--- polling loop is still using the [now invalid] on-stack pointer
> |
> NAPI polling ends
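
Concretely, the synchronization proposed above would amount to inserting a
grace period between the pointer updates and the put (same illustrative
names as in the sketch further up):

	old_prog = xchg(&vsi->xdp_prog, new_prog);
	for (i = 0; i < vsi->num_rxq; i++)
		WRITE_ONCE(vsi->rx_rings[i]->xdp_prog, new_prog);
	synchronize_rcu();	/* wait out NAPI cycles still using old_prog */
	if (old_prog)
		bpf_prog_put(old_prog);
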

No, this is fine; bpf_prog_put() uses call_rcu() to actually free the
program, which guarantees that any ongoing RCU critical sections have
ended before the memory is released. And as explained in that other
series of mine, this includes any ongoing NAPI poll cycles.
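
For reference, a simplified sketch of the guarantee being relied on here:
the final bpf_prog_put() defers the actual free through call_rcu(), and a
NAPI poll runs in softirq context, i.e. inside an RCU read-side critical
section, so an on-stack copy of the prog pointer stays valid until the
poll cycle ends. This shows the pattern only, not the kernel's actual
implementation:

	static void prog_free_rcu_sketch(struct rcu_head *rcu)
	{
		struct bpf_prog_aux *aux =
			container_of(rcu, struct bpf_prog_aux, rcu);

		bpf_prog_free(aux->prog);
	}

	static void prog_put_sketch(struct bpf_prog *prog)
	{
		/* last reference: free only after a full RCU grace period */
		if (atomic64_dec_and_test(&prog->aux->refcnt))
			call_rcu(&prog->aux->rcu, prog_free_rcu_sketch);
	}
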

-Toke
