Message-ID: <ZvqQOpqnK9hBmXNn@lore-desk>
Date: Mon, 30 Sep 2024 13:49:14 +0200
From: Lorenzo Bianconi <lorenzo@...nel.org>
To: Toke Høiland-Jørgensen <toke@...hat.com>
Cc: Arthur Fabre <afabre@...udflare.com>,
Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Jakub Sitnicki <jakub@...udflare.com>,
Alexander Lobakin <aleksander.lobakin@...el.com>,
bpf@...r.kernel.org, netdev@...r.kernel.org, ast@...nel.org,
daniel@...earbox.net, davem@...emloft.net, kuba@...nel.org,
john.fastabend@...il.com, edumazet@...gle.com, pabeni@...hat.com,
sdf@...ichev.me, tariqt@...dia.com, saeedm@...dia.com,
anthony.l.nguyen@...el.com, przemyslaw.kitszel@...el.com,
intel-wired-lan@...ts.osuosl.org, mst@...hat.com,
jasowang@...hat.com, mcoquelin.stm32@...il.com,
alexandre.torgue@...s.st.com,
kernel-team <kernel-team@...udflare.com>,
Yan Zhai <yan@...udflare.com>
Subject: Re: [RFC bpf-next 0/4] Add XDP rx hw hints support performing
XDP_REDIRECT
> Lorenzo Bianconi <lorenzo@...nel.org> writes:
>
> >> > We could combine such a registration API with your header format, so
> >> > that the registration just becomes a way of allocating one of the keys
> >> > from 0-63 (and the registry just becomes a global copy of the header).
> >> > This would basically amount to moving the "service config file" into the
> >> > kernel, since that seems to be the only common denominator we can rely
> >> > on between BPF applications (as all attempts to write a common daemon
> >> > for BPF management have shown).
> >>
> >> That sounds reasonable. And I guess we'd have set() check the global
> >> registry to enforce that the key has been registered beforehand?
> >>
> >> >
> >> > -Toke
> >>
> >> Thanks for all the feedback!
> >
> > I like this 'fast' KV approach but I guess we should really evaluate its
> > impact on performance (especially for XDP) since, based on the kfunc call
> > order in the ebpf program, we can have one or multiple memmove()/memcpy()
> > calls for each packet, right?
>
> Yes, with Arthur's scheme, performance will be ordering dependent. Using
> a global registry for offsets would sidestep this, but have the
> synchronisation issues we discussed up-thread. So on balance, I think
> the memmove() suggestion will probably lead to the least pain.
>
> For the HW metadata we could sidestep this by always having a fixed
> struct for it (but using the same set/get() API with reserved keys). The
> only drawback of doing that is that we statically reserve a bit of
> space, but I'm not sure that is such a big issue in practice (at least
> not until this becomes so popular that the space starts to be contended;
> but surely 256 bytes ought to be enough for everybody, right? :)).
I am fine with the proposed approach, but I think we need to verify what the
impact on performance is in the worst case.
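E.g., something like the silly userspace model below is what I have in mind
when I say worst case (all names are invented and there is no bounds
checking, it is just a sketch): if the ebpf program sets a low key after the
higher ones, set() has to memmove() all the values already stored.

#include <stdint.h>
#include <string.h>

struct meta_area {
	uint64_t keys_set;	/* bitmap of keys 0-63 present in data[] */
	uint8_t lens[64];	/* value length per key, 0 if unset */
	uint8_t data[256];	/* values, stored in key order */
};

static uint64_t registered_keys;	/* filled by the registration API */

static int val_offset(const struct meta_area *m, int key)
{
	int i, off = 0;

	for (i = 0; i < key; i++)
		off += m->lens[i];
	return off;
}

/* First insertion only (no resize-on-update), to keep the sketch short. */
static int meta_set(struct meta_area *m, int key, const void *val,
		    uint8_t len)
{
	int off, used;

	/* the set() check against the global registry discussed above */
	if (!(registered_keys & (1ULL << key)))
		return -1;

	off = val_offset(m, key);
	used = val_offset(m, 64);

	/* worst case: key 0 set last -> shift every value already stored */
	memmove(&m->data[off + len], &m->data[off], used - off);
	memcpy(&m->data[off], val, len);
	m->lens[key] = len;
	m->keys_set |= 1ULL << key;
	return 0;
}

With 64 possible keys and a couple of hundred bytes of values, that can be a
fair amount of byte shuffling per packet if the kfunc call order is unlucky,
which is why I would like to see numbers for it.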
>
> > Moreover, I still think the metadata area in the xdp_frame/xdp_buff is not
> > really suitable for NIC hw metadata since:
> > - it grows backward
> > - it is probably in a different cacheline from xdp_frame
> > - NIC hw metadata will not start at a fixed and immutable address; it
> > depends on the running ebpf program
> >
> > What about having something like:
> > - fixed hw NIC metadata: just after the xdp_frame struct (or, if you want,
> > at the end of the metadata area :)). Here we can reuse the same KV approach
> > if it is fast
> > - user defined metadata: in the metadata area of the xdp_frame/xdp_buff
>
> AFAIU, none of this will live in the (current) XDP metadata area. It
> will all live just after the xdp_frame struct (so sharing the space with
> the metadata area in the sense that adding more metadata kv fields will
> decrease the amount of space that is usable by the current XDP metadata
> APIs).
>
> -Toke
>
Ah, ok. I was thinking the proposed approach was to put them in the current
metadata area.
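Just to check I understood the layout correctly now, something like this
(rough sketch)?

 +-----------+------------------+- - - - - -+------------+--------+
 | xdp_frame | KV metadata -->  |   free    | <-- meta   | packet |
 +-----------+------------------+- - - - - -+------------+--------+

i.e. the KV area grows forward from the end of xdp_frame while the current
metadata area keeps growing backward from the packet data, so the two eat
into the same headroom from opposite ends.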
Regards,
Lorenzo