Message-ID: <55542209-03d7-590f-9ab1-bbbf924d033c@redhat.com>
Date: Wed, 5 Oct 2022 16:19:30 +0200
From: Jesper Dangaard Brouer <jbrouer@...hat.com>
To: Stanislav Fomichev <sdf@...gle.com>,
Jakub Kicinski <kuba@...nel.org>
Cc: brouer@...hat.com, Martin KaFai Lau <martin.lau@...ux.dev>,
Jesper Dangaard Brouer <jbrouer@...hat.com>,
bpf@...r.kernel.org, netdev@...r.kernel.org,
xdp-hints@...-project.net, larysa.zaremba@...el.com,
memxor@...il.com, Lorenzo Bianconi <lorenzo@...nel.org>,
mtahhan@...hat.com,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Daniel Borkmann <borkmann@...earbox.net>,
Andrii Nakryiko <andrii.nakryiko@...il.com>,
dave@...cker.co.uk, Magnus Karlsson <magnus.karlsson@...el.com>,
bjorn@...nel.org
Subject: Re: [PATCH RFCv2 bpf-next 00/18] XDP-hints: XDP gaining access to HW
offload hints via BTF
On 05/10/2022 03.02, Stanislav Fomichev wrote:
> On Tue, Oct 4, 2022 at 5:59 PM Jakub Kicinski <kuba@...nel.org> wrote:
>>
>> On Tue, 4 Oct 2022 17:25:51 -0700 Martin KaFai Lau wrote:
>>> An intentionally wild question: what does it take for the driver to return
>>> the hints? Are the rx_desc and rx_queue enough? When the xdp prog is calling
>>> a kfunc/bpf-helper, like 'hwtstamp = bpf_xdp_get_hwtstamp()', can the driver
>>> replace it with some inline bpf code (like how the inline code is generated
>>> for the map_lookup helper)? The xdp prog can then store the hwtstamp in the
>>> meta area in any layout it wants.
>>
>> Since you mentioned it... FWIW that was always my preference rather than
>> the BTF magic :) The jited image would have to be per-driver like we
>> do for BPF offload but that's easy to do from the technical
>> perspective (I doubt many deployments bind the same prog to multiple
>> HW devices).
On the technical side we do have the ifindex that can be passed along,
which is currently used for getting XDP hardware offloading to work.
But last time I tried this, I failed due to BPF tail call maps.
(It's not going to fly for other reasons anyway; see the redirect
discussion below.)
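
For reference, the ifindex is passed along at program load time for HW
offload today. A minimal sketch using the current libbpf API:

int load_offloaded_xdp(const struct bpf_insn *insns, size_t insn_cnt,
		       int ifindex)
{
	/* prog_ifindex binds the program to this netdev at load time,
	 * which is what BPF offload uses to produce a per-device image.
	 */
	LIBBPF_OPTS(bpf_prog_load_opts, opts,
		.prog_ifindex = ifindex,
	);

	return bpf_prog_load(BPF_PROG_TYPE_XDP, "xdp_hints", "GPL",
			     insns, insn_cnt, &opts);
}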
>
> +1, sounds like a good alternative (got your reply while typing)
> I'm not too versed in the rx_desc/rx_queue area, but it seems like, worst
> case, bpf_xdp_get_hwtstamp can probably receive an xdp_md ctx and parse it
> out from the pre-populated metadata?
>
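
For illustration, that kfunc flow could look something like the sketch
below, seen from the BPF prog side. Note that bpf_xdp_get_hwtstamp()
doesn't exist; it is a hypothetical kfunc that the kernel would inline
with driver-specific rx_desc access:

/* Hypothetical kfunc, inlined per-driver by the kernel */
extern __u64 bpf_xdp_get_hwtstamp(struct xdp_md *ctx) __ksym;

struct my_meta {
	__u64 hwtstamp;
};

SEC("xdp")
int xdp_prog(struct xdp_md *ctx)
{
	__u64 ts = bpf_xdp_get_hwtstamp(ctx);
	struct my_meta *meta;

	/* Prog stores the hint in a metadata layout it controls */
	if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*meta)))
		return XDP_PASS;

	meta = (void *)(long)ctx->data_meta;
	if ((void *)(meta + 1) > (void *)(long)ctx->data)
		return XDP_PASS;

	meta->hwtstamp = ts;
	return XDP_PASS;
}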
> Btw, do we also need to think about the redirect case? What happens
> when I redirect one frame from a device A with one metadata format to
> a device B with another?
Exactly the problem. With XDP redirect the "remote" target device also
needs to interpret this metadata layout. On the RX-side we have the
immediate case of redirecting into a veth device. The future TX-side
will likely hit the same kind of issue, but I hope that if we can solve
this for the veth redirect use-case, it will keep us future proof.
For the veth use-case I hope that we can use the same trick as
bpf_core_field_exists() to do dead-code elimination, based on whether a
given device driver is loaded on the system, like this pseudo code:
if (bpf_core_type_id_kernel(struct xdp_hints_i40e_timestamp)) {
	/* check id + extract timestamp */
}

if (bpf_core_type_id_kernel(struct xdp_hints_ixgbe_timestamp)) {
	/* check id + extract timestamp */
}
If the given device driver doesn't exist on the system, I assume
bpf_core_type_id_kernel() will return 0 at libbpf relocation/load-time,
and thus this should cause dead-code elimination. Should work today AFAIK?
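
Fleshed out, the "check id + extract timestamp" step could look
something like below. This is only a sketch: it assumes the driver
writes the hints struct into the metadata area (data_meta) together
with a BTF id member, and the member names (btf_id, rx_timestamp) are
illustrative, with the struct coming from vmlinux.h so member access is
CO-RE relocated:

__u64 ts = 0;
__u32 id = bpf_core_type_id_kernel(struct xdp_hints_i40e_timestamp);

if (id) { /* branch dead-code eliminated if driver BTF is absent */
	struct xdp_hints_i40e_timestamp *hints;
	void *data = (void *)(long)ctx->data;

	hints = (void *)(long)ctx->data_meta;
	if ((void *)(hints + 1) <= data && hints->btf_id == id)
		ts = hints->rx_timestamp;
}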
--Jesper