Message-ID: <CAADnVQLCRrPtQMPBuYiKv44SLDiYwz69KZ=0e0HxJdPQz4x2HQ@mail.gmail.com>
Date: Tue, 11 Jul 2023 19:50:04 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Stanislav Fomichev <sdf@...gle.com>
Cc: bpf <bpf@...r.kernel.org>, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>, Song Liu <song@...nel.org>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>, KP Singh <kpsingh@...nel.org>,
Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>, Jakub Kicinski <kuba@...nel.org>,
Toke Høiland-Jørgensen <toke@...nel.org>,
Willem de Bruijn <willemb@...gle.com>, David Ahern <dsahern@...nel.org>,
"Karlsson, Magnus" <magnus.karlsson@...el.com>, Björn Töpel <bjorn@...nel.org>,
"Fijalkowski, Maciej" <maciej.fijalkowski@...el.com>, Jesper Dangaard Brouer <hawk@...nel.org>,
Network Development <netdev@...r.kernel.org>, xdp-hints@...-project.net
Subject: Re: [RFC bpf-next v3 09/14] net/mlx5e: Implement devtx kfuncs
On Tue, Jul 11, 2023 at 5:15 PM Stanislav Fomichev <sdf@...gle.com> wrote:
>
> On Tue, Jul 11, 2023 at 4:45 PM Alexei Starovoitov
> <alexei.starovoitov@...il.com> wrote:
> >
> > On Tue, Jul 11, 2023 at 4:25 PM Stanislav Fomichev <sdf@...gle.com> wrote:
> > >
> > > On Tue, Jul 11, 2023 at 3:57 PM Alexei Starovoitov
> > > <alexei.starovoitov@...il.com> wrote:
> > > >
> > > > On Fri, Jul 07, 2023 at 12:30:01PM -0700, Stanislav Fomichev wrote:
> > > > > +
> > > > > +static int mlx5e_devtx_request_l4_checksum(const struct devtx_ctx *_ctx,
> > > > > +					   u16 csum_start, u16 csum_offset)
> > > > > +{
> > > > > +	const struct mlx5e_devtx_ctx *ctx = (void *)_ctx;
> > > > > +	struct mlx5_wqe_eth_seg *eseg;
> > > > > +
> > > > > +	if (unlikely(!ctx->wqe))
> > > > > +		return -ENODATA;
> > > > > +
> > > > > +	eseg = &ctx->wqe->eth;
> > > > > +
> > > > > +	switch (csum_offset) {
> > > > > +	case sizeof(struct ethhdr) + sizeof(struct iphdr) + offsetof(struct udphdr, check):
> > > > > +	case sizeof(struct ethhdr) + sizeof(struct ipv6hdr) + offsetof(struct udphdr, check):
> > > > > +		/* Looks like HW/FW is doing parsing, so offsets are largely ignored. */
> > > > > +		eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM | MLX5_ETH_WQE_L4_CSUM;
> > > > > +		break;
> > > > > +	default:
> > > > > +		return -EINVAL;
> > > > > +	}
> > > >
> > > > I think this proves my point: csum is not generalizable even across veth and mlx5.
> > > > The above is a square peg that tries to fit the csum_start/offset api (which makes
> > > > sense from a SW pov) into HW that has different ideas about csum-ing.
> > > >
> > > > Here is what mlx5 does:
> > > > mlx5e_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb,
> > > >                             struct mlx5e_accel_tx_state *accel,
> > > >                             struct mlx5_wqe_eth_seg *eseg)
> > > > {
> > > > 	if (unlikely(mlx5e_ipsec_txwqe_build_eseg_csum(sq, skb, eseg)))
> > > > 		return;
> > > >
> > > > 	if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
> > > > 		eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM;
> > > > 		if (skb->encapsulation) {
> > > > 			eseg->cs_flags |= MLX5_ETH_WQE_L3_INNER_CSUM |
> > > > 					  MLX5_ETH_WQE_L4_INNER_CSUM;
> > > > 			sq->stats->csum_partial_inner++;
> > > > 		} else {
> > > > 			eseg->cs_flags |= MLX5_ETH_WQE_L4_CSUM;
> > > > 			sq->stats->csum_partial++;
> > > > 		}
> > > >
> > > > How would you generalize that into a csum api that will work across NICs?
> > > >
> > > > My answer stands: you cannot.
> > > >
> > > > My proposal again:
> > > > add driver specifc kfuncs and hooks for things like csum.
> > >
> > > I do see your point, but to also give you my perspective: I have no
> > > clue what those _CSUM tx bits do (as a non-mlx employee), or what
> > > kind of packets they support (the initial patch doesn't give any info).
> > > We can definitely expose an mlx5-specific request_l4_checksum(bool encap)
> > > which does something similar to mlx5e_txwqe_build_eseg_csum, but then,
> > > what does it _actually_ do? It obviously can't checksum arbitrary
> > > packet formats (because it has this inner/outer selection bit), so
> > > there is really no way for me to provide a per-driver kfunc api. Maybe
> > > the vendors can?
> > >
> > > So having a csum_start/csum_offset abstraction which works with fixed
> > > offsets at least seems to set the right expectation for BPF
> > > program writers.
> > > The vendors are already supposed to conform to this start/offset API for skb.
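For reference, a minimal userspace sketch of that skb start/offset contract (illustrative only; the helper names below are made up and this is not the RFC's kfunc API): for CHECKSUM_PARTIAL the stack seeds the checksum field with the pseudo-header sum, and whoever finishes the job, HW or a SW fallback, folds a ones'-complement sum over everything from csum_start to the end of the packet and stores the result at csum_start + csum_offset.

#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <arpa/inet.h>

/* Fold a 32-bit ones'-complement accumulator down to the final 16-bit csum. */
static uint16_t csum_fold(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}

/* Ones'-complement sum over [data, data + len), like csum_partial(). */
static uint32_t csum_add(const uint8_t *data, size_t len, uint32_t sum)
{
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += ((uint32_t)data[i] << 8) | data[i + 1];
	if (len & 1)
		sum += (uint32_t)data[len - 1] << 8;
	return sum;
}

/*
 * What the device (or a SW fallback) is expected to do for CHECKSUM_PARTIAL:
 * checksum from csum_start to the end of the packet and store the folded
 * result at csum_start + csum_offset, where the pseudo-header sum already sits.
 */
static void finish_l4_csum(uint8_t *pkt, size_t pkt_len,
			   uint16_t csum_start, uint16_t csum_offset)
{
	uint16_t csum = csum_fold(csum_add(pkt + csum_start,
					   pkt_len - csum_start, 0));
	uint16_t wire = htons(csum);

	memcpy(pkt + csum_start + csum_offset, &wire, sizeof(wire));
}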
> > >
> > > But back to your point: should we maybe try to meet somewhere in the middle?
> > > 1. We try to provide "generic" offload kfuncs; for mlx5, we'll have
> > > this mlx5e_devtx_request_l4_checksum which works for fixed offsets
> >
> > But it doesn't.
> > Even if you add a check for csum_start (which is missing in the patch),
> > there needs to be a way to somehow figure out
> > whether skb->encapsulation is true and to set the appropriate flags.
> > Otherwise this csum request will do "something" that only the HW vendor knows.
> > That would be even harder to debug for bpf prog writers.
> >
> > So instead of helping bpf prog devs it will actively hurt them.
>
> Can we make it more robust? The device can look at the payload (via the
> descriptors or an extra payload pointer in devtx_ctx) and verify
> h_proto/nexthdr.
> It won't be perfect, I agree, but we can get it working for the common
> cases (and have device-specific kfuncs for the rest).
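A rough sketch of that validation, in plain C for illustration (the function name is made up; in practice the check would live in the driver kfunc and look at the descriptor or payload pointer): the fixed offsets in the patch above are only valid when the frame is untagged Ethernet followed directly by IPv4 without options or IPv6 without extension headers, and then UDP.

#include <stdbool.h>
#include <stddef.h>
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/udp.h>

/* True only for the layouts the fixed-offset cases in the patch can handle. */
static bool frame_is_plain_udp(const void *data, size_t len)
{
	const struct ethhdr *eth = data;

	if (len < sizeof(*eth))
		return false;

	if (eth->h_proto == htons(ETH_P_IP)) {
		const struct iphdr *iph = (const void *)(eth + 1);

		return len >= sizeof(*eth) + sizeof(*iph) + sizeof(struct udphdr) &&
		       iph->ihl == 5 && iph->protocol == IPPROTO_UDP;
	}

	if (eth->h_proto == htons(ETH_P_IPV6)) {
		const struct ipv6hdr *ip6 = (const void *)(eth + 1);

		return len >= sizeof(*eth) + sizeof(*ip6) + sizeof(struct udphdr) &&
		       ip6->nexthdr == IPPROTO_UDP;
	}

	return false;
}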
More robust with more checks?
That will slow things down, and the main advantage of XDP over the skb
layer will be lost.
Then it's better to stay at the skb layer, where csum and timestamp are already available.
> > Another example: if a bpf prog was developed and tested on veth,
> > it will work for some values of csum_offset on real HW and return -EINVAL
> > for the other values.
> > That's a horrible user experience compared to the case where
> > the user knows that each netdev is potentially different and
> > _has_ to develop and test their prog on the given HW NIC, and
> > not assume that the kernel can "do the right thing".
>
> For this, I was actually thinking that we need to provide some
> SW-based fallback mechanism.
> Because if I have a program and a nic that doesn't have an offload
> implemented at all, having a fallback might be useful:
>
> if (bpf_devtx_request_l4_csum(...)) {
> 	/* oops, hw bailed on us, fallback to sw and expose a counter */
> 	bpf_devtx_l4_csum_slowpath(csum_start, csum_offset, data, len);
> 	pkt_sw_csum++;
> }
>
> This is probably needed regardless of which way we do it?
SW fallback? We already have 'generic XDP', and people misuse it
thinking it's a layer they should be using.
It's a nice feeling to say that my XDP prog was developed
and tested on mlx5, but that I can move it to a different server
with a brand new NIC that doesn't support XDP yet and my prog will
still work because of "generic XDP".
I think such devs are playing with fire and will get burned
when the "generic XDP" NIC gets DDoSed.
Same thing here. If we do HW offload of csum, it had better be done in HW.
Devs have to be 100% certain that the HW is offloading it.
>
> Regarding veth vs non-veth: we already have similar issues with
> generic xdp vs non-generic.
> I'm not sure we can completely avoid having surprises when switching
> from sw to hw paths.
> The question is whether users will have to debug 10-20% of their program or
> have to start completely from scratch for every nic.
If a rewrite per NIC is not acceptable, then they should be using the skb layer.
>
> > This csum exercise is a clear example that the kernel is not in a position
> > to do so.
> > For timestamp it's arguable, but for csum there is no generic api that
> > the kernel can apply universally to NICs.
>
> Sure, I agree, it's a mix of both. For some offloads we can have
> something common, for some we can't.
> But I'm not sure why we have to pick one or the other. We can try to
> have common apis (maybe not ideal, yes) and we can expose
> vendor-specific ones if there is a need.
> If the generic ones go unused, we kill them in the future. If none
> of the vendors comes up with non-generic ones, the generic ones are
> good enough.
>
> I'm assuming you favor non-generic ones because they're easier to implement?
Not only.
Yes, it's easier to implement, but the expectations are also clear.
The kernel won't be trying to fall back to the slow path.
The XDP prog will tell the HW 'do csum' and the HW will do it.
For generality we have the skb layer.
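To make the driver-specific direction concrete, here is a rough sketch of what an mlx5-only kfunc could look like, reusing the devtx_ctx/mlx5e_devtx_ctx plumbing from the patch quoted above and the cs_flags logic from mlx5e_txwqe_build_eseg_csum. The kfunc name and the explicit 'encap' argument are illustrative, not part of the RFC.

static int mlx5e_devtx_request_csum(const struct devtx_ctx *_ctx, bool encap)
{
	const struct mlx5e_devtx_ctx *ctx = (void *)_ctx;
	struct mlx5_wqe_eth_seg *eseg;

	if (unlikely(!ctx->wqe))
		return -ENODATA;

	eseg = &ctx->wqe->eth;

	/* Mirror what mlx5e_txwqe_build_eseg_csum() does for CHECKSUM_PARTIAL skbs. */
	eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM;
	if (encap)
		eseg->cs_flags |= MLX5_ETH_WQE_L3_INNER_CSUM |
				  MLX5_ETH_WQE_L4_INNER_CSUM;
	else
		eseg->cs_flags |= MLX5_ETH_WQE_L4_CSUM;

	return 0;
}

With something like this, the bpf prog opts in per driver and knows exactly which HW behaviour it is requesting, rather than relying on the kernel to translate a generic start/offset pair.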