Message-ID: <238b1f8b-ba1f-0e8b-4fbb-66ab8639b042@intel.com>
Date: Wed, 14 Jun 2023 16:03:20 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Toke Høiland-Jørgensen <toke@...nel.org>,
	Maciej Fijalkowski <maciej.fijalkowski@...el.com>
CC: <netdev@...r.kernel.org>, <anthony.l.nguyen@...el.com>,
<intel-wired-lan@...ts.osuosl.org>, <magnus.karlsson@...el.com>,
<fred@...udflare.com>
Subject: Re: [Intel-wired-lan] [PATCH iwl-next] ice: allow hot-swapping XDP
programs
From: Toke Høiland-Jørgensen <toke@...nel.org>
Date: Wed, 14 Jun 2023 15:47:02 +0200
> Alexander Lobakin <aleksander.lobakin@...el.com> writes:
>
>> From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
>> Date: Wed, 14 Jun 2023 14:50:28 +0200
[...]
>>> Hmm you mean we should do bpf_prog_put() *after* we update bpf_prog on
>>> ice_rx_ring? I think this is a fair point as we don't bump the refcount
>>> per each Rx ring that holds the ptr to bpf_prog, we just rely on the main
>>> one from VSI.
>>
>> Not even after we update it there. I believe we should synchronize NAPI
>> cycles with BPF prog update (have synchronize_rcu() before put or so to
>> make the config path wait until there's no polling and onstack pointers,
>> would that be enough?).
>>
>> NAPI polling starts
>> |<--- XDP prog pointer is placed on the stack and used from there
>> |
>> | <--- here you do xchg() and bpf_prog_put()
>> | <--- here you update XDP progs on the rings
>> |
>> |<--- polling loop is still using the [now invalid] onstack pointer
>> |
>> NAPI polling ends
>
> No, this is fine; bpf_prog_put() uses call_rcu() to actually free the
> program, which guarantees that any ongoing RCU critical sections have
> ended before. And as explained in that other series of mine, this
> includes any ongoing NAPI poll cycles.
Breh, I forgot that bpf_prog_put() uses call_rcu() :D Thanks, now
everything is clear to me. I also now feel that updating the ring
pointers first and then the "main" pointer would be enough.
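
Just to spell out the ordering I have in mind, a rough sketch below --
the helper name is made up and the field accesses are from memory, so
treat it as an illustration rather than the actual patch:

/* Sketch only; assumes the usual ice.h / linux/bpf.h context. */
static void xdp_prog_hot_swap(struct ice_vsi *vsi, struct bpf_prog *new_prog)
{
	struct bpf_prog *old_prog;
	u16 i;

	/* Update the per-ring pointers first, so new NAPI cycles start
	 * picking up the new program from here on.
	 */
	ice_for_each_rxq(vsi, i)
		WRITE_ONCE(vsi->rx_rings[i]->xdp_prog, new_prog);

	/* The "main" VSI pointer is the one actually holding the refcount */
	old_prog = xchg(&vsi->xdp_prog, new_prog);

	/* bpf_prog_put() frees the old prog via call_rcu(), so any NAPI
	 * poll still using an onstack copy of the old pointer (it runs
	 * under the RCU read side) is guaranteed to finish before the
	 * memory goes away -- no explicit synchronize_rcu() needed on
	 * the config path.
	 */
	if (old_prog)
		bpf_prog_put(old_prog);
}

With the free deferred through call_rcu(), the config path doesn't have
to wait for anything explicitly.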
>
> -Toke
Thanks,
Olek