Message-ID: <20201006093011.36375745@carbon>
Date: Tue, 6 Oct 2020 09:30:11 +0200
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: John Fastabend <john.fastabend@...il.com>
Cc: Lorenzo Bianconi <lorenzo@...nel.org>,
Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
bpf@...r.kernel.org, netdev@...r.kernel.org, davem@...emloft.net,
kuba@...nel.org, ast@...nel.org, daniel@...earbox.net,
shayagr@...zon.com, sameehj@...zon.com, dsahern@...nel.org,
Eelco Chaudron <echaudro@...hat.com>, brouer@...hat.com,
Tirthendu Sarkar <tirtha@...il.com>,
Toke Høiland-Jørgensen <toke@...hat.com>
Subject: Re: [PATCH v4 bpf-next 00/13] mvneta: introduce XDP multi-buffer
support
On Mon, 05 Oct 2020 21:29:36 -0700
John Fastabend <john.fastabend@...il.com> wrote:
> Lorenzo Bianconi wrote:
> > [...]
> >
> > >
> > > In general I see no reason to populate these fields before the XDP
> > > program runs. Someone needs to convince me why having frags info before
> > > program runs is useful. In general headers should be preserved and first
> > > frag already included in the data pointers. If users start parsing further
> > > they might need it, but this series doesn't provide a way to do that
> > > so IMO without those helpers it's a bit difficult to debate.
> >
> > We need to populate the skb_shared_info before running the xdp program in order to
> > allow the ebpf sandbox to access this data. If we restrict the access to the first
> > buffer only, I guess we can avoid doing that, but I think there is value in allowing
> > the xdp program to access this data.
>
> I agree. We could also only populate the fields if the program accesses
> the fields.
Notice that a driver will not initialize/use the shared_info area unless
there are more segments. And (as we have already established) the xdp->mb
bit guards the BPF-prog from accessing the shared_info area.
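To make this concrete, here is a minimal sketch (not code from any driver in
this series; the xdp->mb bit and xdp_get_shared_info_from_buff() are as
proposed here, the other names are invented) of a driver only touching the
tailroom shared_info once a second buffer actually shows up:

#include <linux/skbuff.h>
#include <net/xdp.h>

/* Illustrative only: append a paged segment to an xdp_buff.  The
 * shared_info area in the tailroom stays untouched for single-buffer
 * frames, and xdp->mb tells the BPF-prog whether it may look there.
 */
static void rx_append_frag(struct xdp_buff *xdp, struct page *page,
			   unsigned int off, unsigned int len)
{
	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
	skb_frag_t *frag;

	if (!xdp->mb) {
		sinfo->nr_frags = 0;	/* first extra segment: init the area */
		xdp->mb = 1;		/* unlock shared_info access for BPF */
	}

	frag = &sinfo->frags[sinfo->nr_frags++];
	__skb_frag_set_page(frag, page);
	skb_frag_off_set(frag, off);
	skb_frag_size_set(frag, len);
}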
> > A possible optimization could be to access the shared_info only once before running
> > the ebpf program, constructing the shared_info using a struct allocated on the
> > stack.
>
> Seems interesting, might be a good idea.
It *might* be a good idea ("alloc" shared_info on stack), but we should
benchmark this. The prefetch trick might be fast enough. But also
keep in mind the performance target: with large frames the
packets-per-sec rate we need to handle drops dramatically.
Regarding the TSO statement: I meant LRO (Large Receive Offload), but I want the
ability to XDP-redirect this frame out another netdev as TSO. This
does mean that we need more than 3 pages (2 frags slots) to store LRO
frames. Thus, if we store this shared_info on the stack it might need
to be larger than we would like.
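To put numbers on it, something like the struct below is what "shared_info on
the stack" would mean (the struct and field names are purely my own
illustration):

#include <linux/skbuff.h>

/* Sketch: a small, cache-hot copy the driver builds before running the
 * BPF-prog, synced back to the real tailroom skb_shared_info only when
 * the frame leaves XDP.  Sizing it for LRO frames (up to MAX_SKB_FRAGS
 * entries, i.e. several hundred bytes of stack) is exactly the concern
 * above.
 */
struct xdp_stack_sinfo {
	u16		nr_frags;
	unsigned int	data_len;		/* bytes carried in frags[] */
	skb_frag_t	frags[MAX_SKB_FRAGS];	/* ~17 * sizeof(skb_frag_t) */
};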
> > Moreover we can define a "xdp_shared_info" struct to alias the skb_shared_info
> > one in order to have most of the frags elements in the first "shared_info" cache line.
> >
> > >
> > > Specifically for the XDP_TX case we can just flip the descriptors from RX
> > > ring to TX ring and keep moving along. This is going to be ideal on
> > > 40/100Gbps nics.
I think both approaches will still allow us to do these page-flips.
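(For the xdp_shared_info idea quoted above, I read it as roughly the layout
below; this is only my sketch, not the definition from the series:)

#include <linux/cache.h>
#include <linux/skbuff.h>

/* Sketch: keep nr_frags plus the first frag entries within the first
 * cache line, while the struct still lives in the same tailroom spot
 * as skb_shared_info so a later SKB conversion stays cheap.
 */
struct xdp_shared_info {
	u16		nr_frags;
	u16		data_length;	/* total bytes in frags[] */
	skb_frag_t	frags[MAX_SKB_FRAGS];
} ____cacheline_aligned_in_smp;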
> > > I'm not arguing that it's likely possible to put some prefetch logic
> > > in there and keep the pipe full, but I would need to see that on
> > > a 100gbps nic to be convinced the details here are going to work. Or
> > > at minimum a 40gbps nic.
I'm looking forward to seeing how this performs on faster NICs. Once we
have a high-speed NIC driver with this support, I can also start doing testing
in my testlab.
> > [...]
> >
> > > Not against it, but these things are a bit tricky. A couple of things I still
> > > want to see/understand
> > >
> > > - Let's see a 40gbps NIC use a prefetch and verify it works in practice
> > > - Explain why we can't just do this after the XDP program runs
> >
> > how can we allow the ebpf program to access paged data if we do not do that?
>
> I don't see an easy way, but also this series doesn't have the data
> access support.
Eelco (Cc'ed) is working on patches that allow access to data in these
fragments; so far these are internal patches, which (sorry to mention) got
shut down in internal review.
> It's hard to tell until we get at least a 40gbps nic whether my concern about
> performance is real or not. Prefetching smartly could resolve some of the
> issues, I guess.
>
> If the Intel folks are working on it I think waiting would be great. Otherwise
> at minimum drop the helpers and be prepared to revert things if needed.
I do think it makes sense to drop the helpers for now, and focus on how
this new multi-buffer frame type is handled in the existing code, and do
some benchmarking on higher speed NICs, before the BPF-helpers start to
lock down/restrict what we can change/revert, as they define UAPI.
E.g. existing code that needs to handle this is the existing helper
bpf_xdp_adjust_tail, which is something I have brought up before and even
described in [1]. Let's make sure existing code works with the proposed
design, before introducing new helpers (this also makes it easier to
revert).
[1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org#xdp-tail-adjust
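To illustrate what the existing bpf_xdp_adjust_tail needs to cope with, a
conceptual sketch (not a proposed implementation; xdp_get_shared_info_from_buff()
is from this series, the rest of the names are invented): shrinking the tail of
a multi-buffer frame has to walk the frags from the end before it can touch the
linear area:

#include <linux/kernel.h>
#include <linux/skbuff.h>
#include <net/xdp.h>

/* Conceptual: a tail shrink of 'len' bytes on a multi-buffer frame
 * releases bytes from the last frags first; only the remainder comes
 * off xdp->data_end of the linear part.
 */
static void xdp_mb_shrink_tail(struct xdp_buff *xdp, unsigned int len)
{
	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
	int i;

	for (i = sinfo->nr_frags - 1; i >= 0 && len; i--) {
		skb_frag_t *frag = &sinfo->frags[i];
		unsigned int shrink = min_t(unsigned int, len,
					    skb_frag_size(frag));

		skb_frag_size_sub(frag, shrink);
		if (!skb_frag_size(frag))
			sinfo->nr_frags--;
		len -= shrink;
	}
	xdp->data_end -= len;	/* remainder comes off the linear area */
}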
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer