Message-ID: <5f7f247acf860_2007208c9@john-XPS-13-9370.notmuch>
Date: Thu, 08 Oct 2020 07:38:50 -0700
From: John Fastabend <john.fastabend@...il.com>
To: Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
Jesper Dangaard Brouer <brouer@...hat.com>
Cc: John Fastabend <john.fastabend@...il.com>,
Lorenzo Bianconi <lorenzo@...nel.org>, bpf@...r.kernel.org,
netdev@...r.kernel.org, davem@...emloft.net, kuba@...nel.org,
ast@...nel.org, daniel@...earbox.net, shayagr@...zon.com,
sameehj@...zon.com, dsahern@...nel.org,
Eelco Chaudron <echaudro@...hat.com>,
Tirthendu Sarkar <tirtha@...il.com>,
Toke Høiland-Jørgensen <toke@...hat.com>
Subject: Re: [PATCH v4 bpf-next 00/13] mvneta: introduce XDP multi-buffer
support
Lorenzo Bianconi wrote:
> > On Mon, 05 Oct 2020 21:29:36 -0700
> > John Fastabend <john.fastabend@...il.com> wrote:
> >
> > > Lorenzo Bianconi wrote:
> > > > [...]
> > > >
> > > > >
> > > > > In general I see no reason to populate these fields before the XDP
> > > > > program runs. Someone needs to convince me why having frag info before
> > > > > the program runs is useful. In general, headers should be preserved and the
> > > > > first frag is already included in the data pointers. If users start parsing
> > > > > further, they might need it, but this series doesn't provide a way to do that,
> > > > > so IMO without those helpers it's a bit difficult to debate.
> > > >
> > > > We need to populate the skb_shared_info before running the XDP program in order to
> > > > allow the eBPF sandbox to access this data. If we restrict access to the first
> > > > buffer only, I guess we can avoid doing that, but I think there is value in allowing
> > > > the XDP program to access this data.
> > >
> > > I agree. We could also only populate the fields if the program accesses
> > > the fields.
> >
> > Notice that a driver will not initialize/use the shared_info area unless
> > there are multiple segments. And (as we have already established) the xdp->mb
> > bit guards the BPF program from accessing the shared_info area.
> >
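As a rough illustration of the guard being described here, the sketch below is a userspace toy, not kernel code: the struct layouts and function names are simplified assumptions and do not match the real `struct xdp_buff` or `struct skb_shared_info`.

```c
#include <assert.h>

/* Simplified stand-ins for the kernel structs (assumptions for
 * illustration only; field layouts do not match the real kernel). */
struct skb_shared_info {
	unsigned int nr_frags;
};

struct xdp_buff {
	void *data;
	void *data_end;
	unsigned int mb : 1;	/* set only when the frame has extra frags */
};

/* Driver side: populate shared_info and set xdp->mb only when the
 * frame actually spans more than one buffer. */
static void driver_finish_rx(struct xdp_buff *xdp,
			     struct skb_shared_info *sinfo,
			     unsigned int nr_frags)
{
	if (nr_frags > 0) {
		sinfo->nr_frags = nr_frags;
		xdp->mb = 1;
	} else {
		xdp->mb = 0;	/* shared_info area left uninitialized */
	}
}

/* Accessor side: any code must check the mb bit before touching the
 * shared_info area, since it is garbage for single-buffer frames. */
static int xdp_get_nr_frags(const struct xdp_buff *xdp,
			    const struct skb_shared_info *sinfo)
{
	if (!xdp->mb)
		return 0;
	return (int)sinfo->nr_frags;
}
```

The point of the guard is that the single-buffer fast path never has to touch (or even initialize) the shared_info cache line at the tail of the buffer.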
> > > > A possible optimization could be to access the shared_info only once, before
> > > > running the eBPF program, by constructing the shared_info in a struct
> > > > allocated on the stack.
> > >
> > > Seems interesting, might be a good idea.
> >
> > It *might* be a good idea ("alloc" shared_info on the stack), but we should
> > benchmark this. The prefetch trick might be fast enough. But also
> > keep in mind the performance target: with large frames, the
> > packets-per-sec rate we need to handle drops dramatically.
>
> right. I guess we need to define a workload we want to run for the
> XDP multi-buff use case (e.g. if the MTU is 9K we will have ~3 frames
> per packet and the # of pps will be much lower)
Right. Or configure header split, which would give 2 buffers with a much
smaller packet size. That would give some indication of the overhead. Then
we would likely want to look at the XDP_TX and XDP_REDIRECT cases. At least
those would be my use cases.
>
> >
> >
>
> [...]
>
> >
> > I do think it makes sense to drop the helpers for now, focus on how
> > this new multi-buffer frame type is handled in the existing code, and do
> > some benchmarking on higher-speed NICs, before the BPF helpers start to
> > lock down/restrict what we can change/revert, as they define UAPI.
>
> ack, I will drop them in v5.
>
> Regards,
> Lorenzo
>
> >
> > E.g. existing code that needs to handle this is the existing helper
> > bpf_xdp_adjust_tail, which is something I have brought up before and even
> > described in [1]. Let's make sure the existing code works with the proposed
> > design before introducing new helpers (this also makes it easier to
> > revert).
> >
> > [1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org#xdp-tail-adjust
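For illustration, a userspace toy of how tail shrinking could interact with a multi-buffer frame, the open question behind bpf_xdp_adjust_tail here: shrink is taken from the last frag first, releasing frags that become empty. The struct and function names are hypothetical, not the kernel's.

```c
#include <assert.h>

/* Hypothetical simplified multi-buffer frame. */
struct mb_frame {
	unsigned int data_len;		/* payload length in the first buffer */
	unsigned int nr_frags;
	unsigned int frag_len[4];	/* lengths of the extra frags */
};

/* Shrink the frame tail by -offset bytes (offset < 0), consuming the
 * last frag(s) first, as a multi-buffer adjust_tail would need to. */
static int mb_adjust_tail(struct mb_frame *f, int offset)
{
	unsigned int shrink;

	if (offset >= 0)
		return -1;	/* growing is out of scope for this sketch */
	shrink = (unsigned int)(-offset);

	while (shrink && f->nr_frags) {
		unsigned int last = f->nr_frags - 1;

		if (f->frag_len[last] > shrink) {
			f->frag_len[last] -= shrink;
			return 0;
		}
		shrink -= f->frag_len[last];
		f->frag_len[last] = 0;
		f->nr_frags--;		/* frag fully consumed: release it */
	}
	if (shrink >= f->data_len)
		return -1;		/* refuse to empty the frame entirely */
	f->data_len -= shrink;
	return 0;
}
```

The subtlety is exactly what the link above discusses: a shrink that crosses a frag boundary has to free buffers, which the single-buffer implementation never had to do.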
> > --
> > Best regards,
> > Jesper Dangaard Brouer
> > MSc.CS, Principal Kernel Engineer at Red Hat
> > LinkedIn: http://www.linkedin.com/in/brouer
> >