Message-ID: <87r0yaxw5s.fsf@toke.dk>
Date: Thu, 10 Nov 2022 23:49:19 +0100
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: John Fastabend <john.fastabend@...il.com>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Stanislav Fomichev <sdf@...gle.com>
Cc: ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
song@...nel.org, yhs@...com, john.fastabend@...il.com,
kpsingh@...nel.org, haoluo@...gle.com, jolsa@...nel.org,
David Ahern <dsahern@...il.com>,
Jakub Kicinski <kuba@...nel.org>,
Willem de Bruijn <willemb@...gle.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Anatoly Burakov <anatoly.burakov@...el.com>,
Alexander Lobakin <alexandr.lobakin@...el.com>,
Magnus Karlsson <magnus.karlsson@...il.com>,
Maryam Tahhan <mtahhan@...hat.com>, xdp-hints@...-project.net,
netdev@...r.kernel.org, bpf@...r.kernel.org
Subject: Re: [xdp-hints] Re: [RFC bpf-next v2 06/14] xdp: Carry over xdp
metadata into skb context

John Fastabend <john.fastabend@...il.com> writes:
> Toke Høiland-Jørgensen wrote:
>> John Fastabend <john.fastabend@...il.com> writes:
>>
>> > Toke Høiland-Jørgensen wrote:
>> >> Snipping a bit of context to reply to this bit:
>> >>
>> >> >>>> Can the xdp prog still change the metadata through xdp->data_meta? tbh, I am not
>> >> >>>> sure it is solid enough by asking the xdp prog not to use the same random number
>> >> >>>> in its own metadata + not to change the metadata through xdp->data_meta after
>> >> >>>> calling bpf_xdp_metadata_export_to_skb().
>> >> >>>
>> >> >>> What do you think the usecase here might be? Or are you suggesting we
>> >> >>> reject further access to data_meta after
>> >> >>> bpf_xdp_metadata_export_to_skb somehow?
>> >> >>>
>> >> >>> If we want to let the programs override some of this
>> >> >>> bpf_xdp_metadata_export_to_skb() metadata, it feels like we can add
>> >> >>> more kfuncs instead of exposing the layout?
>> >> >>>
>> >> >>> bpf_xdp_metadata_export_to_skb(ctx);
>> >> >>> bpf_xdp_metadata_export_skb_hash(ctx, 1234);
>> >>
>> >
>> > Hi Toke,
>> >
>> > Trying not to bifurcate your thread. Can I start a new one here to
>> > elaborate on these use cases? I'm still a bit lost on any use case
>> > for this that makes sense to actually deploy on a network.
>> >
>> >> There are several use cases for needing to access the metadata after
>> >> calling bpf_xdp_metadata_export_to_skb():
>> >>
>> >> - Accessing the metadata after redirect (in a cpumap or devmap program,
>> >> or on a veth device)
>> >
>> > I think for devmap there are still lots of open questions about
>> > how/where the skb is even built.
>>
>> For veth it's pretty clear; i.e., when redirecting into containers.
>
> Ah but I think XDP on veth is a bit questionable in general. The use
> case is NFV I guess, but it's not how I would build out NFV. I've never
> seen it actually deployed other than in CI. Anyways, not necessary to
> drop into that debate here. It exists, so OK.
>
>>
>> > For cpumap I'm a bit unsure what the use case is. For ice, mlx and
>> > such you should use the hardware RSS if performance is top of mind.
>>
>> Hardware RSS works fine if your hardware supports the hashing you want;
>> many do not. As an example, Jesper wrote this application that uses
>> cpumap to divide out ISP customer traffic among different CPUs (solving
>> an HTB scaling problem):
>>
>> https://github.com/xdp-project/xdp-cpumap-tc
>
> I'm going to argue hw should still be able to do this and we should
> fix the hw, but that's maybe not easily doable without convincing
> hardware folks to talk to us.

Sure, in the ideal world the hardware should just be able to do this.
Unfortunately, we don't live in that ideal world :)

> Also not obvious to me how the linked code works without more studying;
> is it ingress HTB? So you would push the rxhash and timestamp into
> cpumap and then build the skb there with the correct skb->timestamp?

No, the HTB tree is on egress. The use case is an ISP middlebox that
shapes (say) 1000 customers to their subscribed rate, using a big HTB
tree. If you just do this with a single HTB instance on the egress NIC
you run into the global qdisc lock and you can't scale beyond a pretty
measly bandwidth. Whereas if you use multiple HW TXQs and the mq qdisc,
you can partition the HTB tree so you only have a subset of customers on
each HWQ/HTB instance. But for this to work, and still guarantee each
customer gets shaped to the right rate, you need to ensure that all that
customer's traffic hits the same HWQ. The xdp-cpumap-tc tool does this
by configuring the TXQs to correspond to individual CPUs, and then runs
an XDP program that matches traffic to customers and redirects them to
the right CPU (using an LPM map).
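
For reference, the core of that pattern looks roughly like this (a
minimal sketch of the approach, not the actual xdp-cpumap-tc code; the
map sizes and the v4-only parsing are illustrative):

/* Minimal sketch: steer each customer's traffic to its assigned CPU */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct ip4_lpm_key {
	__u32 prefixlen;
	__u32 addr;
};

struct {
	__uint(type, BPF_MAP_TYPE_LPM_TRIE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__uint(max_entries, 100000);
	__type(key, struct ip4_lpm_key);
	__type(value, __u32);		/* CPU serving this customer */
} customer_cpu SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_CPUMAP);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, struct bpf_cpumap_val);
} cpu_map SEC(".maps");

SEC("xdp")
int steer_to_customer_cpu(struct xdp_md *ctx)
{
	void *data_end = (void *)(long)ctx->data_end;
	void *data = (void *)(long)ctx->data;
	struct ethhdr *eth = data;
	struct iphdr *iph = data + sizeof(*eth);
	struct ip4_lpm_key key = { .prefixlen = 32 };
	__u32 *cpu;

	if ((void *)(iph + 1) > data_end)
		return XDP_PASS;
	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;

	/* Match the customer by destination IP (downstream direction) */
	key.addr = iph->daddr;
	cpu = bpf_map_lookup_elem(&customer_cpu, &key);
	if (!cpu)
		return XDP_PASS;

	/* Build the skb (and run the per-TXQ HTB) on the customer's CPU */
	return bpf_redirect_map(&cpu_map, *cpu, 0);
}

char _license[] SEC("license") = "GPL";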

This solution runs in production in quite a number of smallish WISPs,
BTW, with quite nice results. The software to set it all up is also
open-sourced: https://libreqos.io/

Coming back to HW metadata, the LibreQoS system could benefit from the
hardware flow hash in particular, since that would save a hash operation
when enqueueing the packet into sch_cake.
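
With the kfunc from this series, carrying the hash along becomes one
extra call in the program above, right before the redirect. Sketch only;
the extern/__ksym declaration style follows the series' selftests, and
the exact kfunc signature is whatever the series ends up with:

/* Sketch: extending steer_to_customer_cpu() above (signature not final) */
extern void bpf_xdp_metadata_export_to_skb(struct xdp_md *ctx) __ksym;

	/* ...right before the redirect: */
	bpf_xdp_metadata_export_to_skb(ctx);
	return bpf_redirect_map(&cpu_map, *cpu, 0);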

> OK even if I can't exactly find the use case for cpumap, if I had
> a use case I can see how passing metadata through is useful.

Great!

>> > And then for specific devices on cpumap (maybe realtime or ptp
>> > things?) could we just throw it through the xdp_frame?
>>
>> Not sure what you mean here? Throw what through the xdp_frame?
>
> Doesn't matter; I reread the patches and figured it out, I was slightly
> confused.

Right, OK.

>>
>> >> - Transferring the packet+metadata to AF_XDP
>> >
>> > In this case we have the metadata and AF_XDP program and XDP program
>> > simply need to agree on metadata format. No need to have some magic
>> > numbers and driver specific kfuncs.
>>
>> See my other reply to Martin: Yeah, for AF_XDP users that write their
>> own kernel XDP programs, they can just do whatever they want. But many
>> users just rely on the default program in libxdp, so having a standard
>> format to include with that is useful.
>>
>
> I don't think your AF_XDP program is any different from other AF_XDP
> programs. Your lib can create a standard format if it wants, but the
> kernel doesn't need to enforce it anyway.

Yeah, we totally could. But since we're defining a "standard" format for
kernel (skb) consumption anyway, making this available to AF_XDP is
kinda convenient so we don't have to :)
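
To illustrate the convenience: with one fixed format, an AF_XDP consumer
can find the metadata right in front of the packet data without any
per-driver knowledge. Rough userspace sketch; the field layout below is
a stand-in, the real struct from this series is identified by a
boot-time magic value:

/* Userspace sketch: locate the shared metadata in front of an AF_XDP
 * frame. Illustrative layout only; not the series' actual struct. */
#include <stdint.h>

struct xdp_to_skb_metadata {		/* stand-in layout */
	uint32_t magic;			/* randomized at boot */
	uint32_t rx_hash;
	uint64_t rx_timestamp;
};

static const struct xdp_to_skb_metadata *
frame_meta(const void *pkt_start, uint32_t boot_magic)
{
	/* The metadata area ends where the packet data begins */
	const struct xdp_to_skb_metadata *meta =
		(const void *)((const char *)pkt_start - sizeof(*meta));

	return meta->magic == boot_magic ? meta : NULL;
}
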
>> >> - Returning XDP_PASS, but accessing some of the metadata first (whether
>> >> to read or change it)
>> >>
>> >
>> > I don't get this case? XDP_PASS should go to the stack normally
>> > through the drivers' build_skb routines, which will populate the
>> > timestamp normally. My guess is a simple descriptor->skb load/store
>> > is cheaper than carrying around this metadata and doing the call on
>> > the BPF side. Anyways, you just built an entire skb and hit the
>> > stack; I don't think you will notice this noise in any benchmark.
>>
>> If you modify the packet before calling XDP_PASS you may want to update
>> the metadata as well (for instance the RX hash, or in the future the
>> metadata could also carry transport header offsets).
>
> OK. So when you modify the pkt, fixing up the rxhash makes sense. Thanks
> for the response, I can see the argument.

Great! You're welcome :)
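
For completeness, a sketch of what that fix-up could look like with
setter-style kfuncs along the lines of Stanislav's example further up
the thread. Both bpf_xdp_metadata_export_skb_hash() and
recompute_flow_hash() are hypothetical here, not part of the series as
posted:

/* Hypothetical sketch: after rewriting headers, refresh the exported
 * metadata so the skb built on XDP_PASS gets a correct hash. Neither
 * kfunc signature is final; recompute_flow_hash() stands in for a
 * program-local hash over the new flow tuple. */
extern void bpf_xdp_metadata_export_to_skb(struct xdp_md *ctx) __ksym;
extern void bpf_xdp_metadata_export_skb_hash(struct xdp_md *ctx,
					     __u32 hash) __ksym;

	/* ...after rewriting the packet headers: */
	bpf_xdp_metadata_export_to_skb(ctx);
	bpf_xdp_metadata_export_skb_hash(ctx, recompute_flow_hash(ctx));
	return XDP_PASS;
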
-Toke