Message-ID: <87wnty7teq.fsf@toke.dk>
Date: Mon, 22 Mar 2021 22:47:09 +0100
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
bpf@...r.kernel.org, netdev@...r.kernel.org, daniel@...earbox.net,
ast@...nel.org
Cc: bjorn.topel@...el.com, magnus.karlsson@...el.com,
ciara.loftus@...el.com, john.fastabend@...il.com,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Subject: Re: [PATCH v3 bpf-next 06/17] libbpf: xsk: use bpf_link
Maciej Fijalkowski <maciej.fijalkowski@...el.com> writes:
> Currently, if multiple xdpsock instances are running on a single
> interface and one of them is terminated, the rest are left in an
> inoperable state because the XDP prog has been unloaded from the
> interface.
>
> Consider the scenario below:
>
> // load xdp prog and xskmap and add entry to xskmap at idx 10
> $ sudo ./xdpsock -i ens801f0 -t -q 10
>
> // add entry to xskmap at idx 11
> $ sudo ./xdpsock -i ens801f0 -t -q 11
>
> Terminate one of the processes and the other one is unable to work
> because the XDP prog was unloaded from the interface.
>
> To address that, step away from setting the bpf prog directly in
> favour of bpf_link. This means that refcounting of BPF resources will
> be done automatically by bpf_link itself.
>
> Provide backward compatibility by checking whether the underlying
> system is bpf_link capable. Do this by looking up/creating a bpf_link
> on the loopback device. If that fails in any way, stick with the
> netlink-based XDP prog attachment. Otherwise, use the bpf_link-based
> logic.
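
For reference, the bpf_link-based attach being switched to looks
roughly like the sketch below. It uses libbpf's low-level
bpf_link_create(), where target_fd carries the ifindex for BPF_XDP
links. This is an illustration of the mechanism only, not the patch's
actual code:

#include <bpf/bpf.h>        /* bpf_link_create() */
#include <net/if.h>         /* if_nametoindex() */

/* Sketch only: attach prog_fd to an interface via a BPF link.
 * On success the returned link fd pins the attachment; the kernel
 * refcounts it and detaches the prog when the last fd is released.
 */
static int xdp_attach_link(int prog_fd, const char *ifname)
{
	int ifindex = if_nametoindex(ifname);

	if (!ifindex)
		return -1;

	/* For BPF_XDP links, target_fd is the ifindex */
	return bpf_link_create(prog_fd, ifindex, BPF_XDP, NULL);
}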
So how is the caller supposed to know which of the cases happened?
Presumably they need to do their own cleanup in that case? AFAICT you're
changing the code to always clobber the existing XDP program on detach
in the fallback case, which seems like a bit of an aggressive change? :)
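
To make the concern concrete: in the netlink fallback, detach amounts
to something like the sketch below (assumed to be roughly what the
fallback path does; bpf_set_link_xdp_fd() with fd == -1 is libbpf's
netlink-based detach), which removes whatever XDP program happens to
be attached at that point:

#include <bpf/libbpf.h>     /* bpf_set_link_xdp_fd() */

/* Sketch only: netlink-based detach. Passing fd == -1 removes the
 * currently attached XDP program unconditionally, even if another
 * process installed it - i.e., the clobbering described above.
 */
static void xdp_detach_netlink(int ifindex)
{
	bpf_set_link_xdp_fd(ifindex, -1, 0);
}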
-Toke