Date: Wed, 14 Jun 2023 14:50:28 +0200
From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
To: Alexander Lobakin <aleksander.lobakin@...el.com>
CC: Toke Høiland-Jørgensen <toke@...nel.org>,
	<netdev@...r.kernel.org>, <anthony.l.nguyen@...el.com>,
	<intel-wired-lan@...ts.osuosl.org>, <magnus.karlsson@...el.com>,
	<fred@...udflare.com>
Subject: Re: [Intel-wired-lan] [PATCH iwl-next] ice: allow hot-swapping XDP
 programs

On Wed, Jun 14, 2023 at 02:40:07PM +0200, Alexander Lobakin wrote:
> From: Toke Høiland-Jørgensen <toke@...nel.org>
> Date: Tue, 13 Jun 2023 19:59:37 +0200
> 
> > Maciej Fijalkowski <maciej.fijalkowski@...el.com> writes:
> > 
> >> On Tue, Jun 13, 2023 at 05:15:15PM +0200, Alexander Lobakin wrote:
> >>> From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> >>> Date: Tue, 13 Jun 2023 17:10:05 +0200
> 
> [...]
> 
> >> Since we removed the RCU sections from the driver side, and given the
> >> assumption that the local_bh_{dis,en}able() pair serves this purpose now,
> >> I believe this is safe. Are you aware of:
> >>
> >> https://lore.kernel.org/bpf/20210624160609.292325-1-toke@redhat.com/
> 
> Why [0] then? It was added in [1] precisely for the sake of safe XDP prog
> access and wasn't removed :s I was relying on that one in my suggestions
> and code :D
> 
> > 
> > As the author of that series, I agree that it's not necessary to add
> > additional RCU protection. ice_vsi_assign_bpf_prog() already uses xchg()
> > and WRITE_ONCE() which should protect against tearing, and the xdp_prog
> > pointer being passed to ice_run_xdp() is a copy residing on the stack,
> > so it will only be read once per NAPI cycle anyway (which is in line
> > with how most other drivers do it).
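(Side note: the pattern described above boils down to roughly the sketch
below. This is a simplified illustration, not the actual ice code;
ice_next_frame() is a made-up helper and the ice_run_xdp() arguments are
trimmed.)

static int ice_napi_poll_sketch(struct ice_rx_ring *rx_ring, int budget)
{
	/* One read of the ring's prog pointer per poll cycle; every frame
	 * processed in this cycle uses the same stack copy, so a concurrent
	 * WRITE_ONCE()/xchg() from the writer can never hand us a torn or
	 * half-updated pointer mid-cycle.
	 */
	struct bpf_prog *xdp_prog = READ_ONCE(rx_ring->xdp_prog);
	struct xdp_buff xdp;
	int done = 0;

	while (done < budget && ice_next_frame(rx_ring, &xdp)) {
		if (xdp_prog)
			ice_run_xdp(rx_ring, &xdp, xdp_prog);
		done++;
	}

	return done;
}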
> 
> What if a NAPI polling cycle is being run on one core while at the very
> same moment I'm replacing the XDP prog on another core? Not in terms of
> pointer tearing (I see now that this is handled correctly), but in terms
> of refcounts: can't bpf_prog_put() free the prog while the polling is
> still active?

Hmm, you mean we should do bpf_prog_put() *after* we update the bpf_prog on
each ice_rx_ring? I think this is a fair point, as we don't bump the
refcount for each Rx ring that holds a pointer to the bpf_prog; we just rely
on the main reference held by the VSI.
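
Roughly what I have in mind (untested sketch, just to make the ordering
explicit; ice_for_each_rxq() and the field names match the current driver,
but the function itself is made up):

static void ice_vsi_swap_xdp_prog_sketch(struct ice_vsi *vsi,
					 struct bpf_prog *prog)
{
	struct bpf_prog *old_prog;
	int i;

	/* Publish the new prog everywhere first... */
	old_prog = xchg(&vsi->xdp_prog, prog);
	ice_for_each_rxq(vsi, i)
		WRITE_ONCE(vsi->rx_rings[i]->xdp_prog, prog);

	/* ...then wait for any NAPI poll cycle that may still be using its
	 * stack copy of old_prog (BH-disabled sections count as RCU
	 * read-side critical sections nowadays)...
	 */
	synchronize_rcu();

	/* ...and only then drop the single VSI-level reference. */
	if (old_prog)
		bpf_prog_put(old_prog);
}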

> 
> > 
> > It *would* be nice to add an __rcu annotation to ice_vsi->xdp_prog and
> > ice_rx_ring->xdp_prog (and move to using rcu_dereference(),
> > rcu_assign_pointer() etc), but this is more a documentation/static
> > checker thing than it's a "correctness of the generated code" thing :)

Agreed, but I would rather address the rest of the Intel drivers in that
series.
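
For completeness, the annotated variant would look roughly like this
(sketch only, assuming the xdp_prog fields get the __rcu annotation; the
generated code stays the same):

static void ice_ring_set_xdp_prog(struct ice_rx_ring *rx_ring,
				  struct bpf_prog *prog)
{
	rcu_assign_pointer(rx_ring->xdp_prog, prog);
}

static struct bpf_prog *ice_ring_get_xdp_prog(struct ice_rx_ring *rx_ring)
{
	/* The NAPI poll runs with BH disabled, which counts as an RCU
	 * read-side critical section, so plain rcu_dereference() is
	 * enough here; sparse can then flag any unannotated access.
	 */
	return rcu_dereference(rx_ring->xdp_prog);
}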

> > 
> > -Toke
> 
> [0]
> https://elixir.bootlin.com/linux/v6.4-rc6/source/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c#L141
> [1]
> https://github.com/alobakin/linux/commit/9c25a22dfb00270372224721fed646965420323a
> 
> Thanks,
> Olek
