Message-ID: <CAJ8uoz0m8AAJFddn2LjehXtdeGS0gat7dwOLA_-_ZeOVYjBdxw@mail.gmail.com>
Date: Mon, 19 Apr 2021 08:20:14 +0200
From: Magnus Karlsson <magnus.karlsson@...il.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: Lorenzo Bianconi <lorenzo@...nel.org>, bpf <bpf@...r.kernel.org>,
Network Development <netdev@...r.kernel.org>,
lorenzo.bianconi@...hat.com,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>, shayagr@...zon.com,
sameehj@...zon.com, John Fastabend <john.fastabend@...il.com>,
David Ahern <dsahern@...nel.org>,
Eelco Chaudron <echaudro@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Alexander Duyck <alexander.duyck@...il.com>,
Saeed Mahameed <saeed@...nel.org>,
"Fijalkowski, Maciej" <maciej.fijalkowski@...el.com>,
Tirthendu <tirthendu.sarkar@...el.com>
Subject: Re: [PATCH v8 bpf-next 00/14] mvneta: introduce XDP multi-buffer support
On Sun, Apr 18, 2021 at 6:18 PM Jesper Dangaard Brouer
<brouer@...hat.com> wrote:
>
> On Fri, 16 Apr 2021 16:27:18 +0200
> Magnus Karlsson <magnus.karlsson@...il.com> wrote:
>
> > On Thu, Apr 8, 2021 at 2:51 PM Lorenzo Bianconi <lorenzo@...nel.org> wrote:
> > >
> > > This series introduces XDP multi-buffer support. The mvneta driver is
> > > the first to support these new "non-linear" xdp_{buff,frame}. Reviewers,
> > > please focus on how these new types of xdp_{buff,frame} packets
> > > traverse the different layers and on the layout design. The BPF-helpers
> > > are deliberately kept simple, as we don't want to expose the internal
> > > layout, so that it can still be changed later.
> > >
> > > For now, to keep the design simple and to maintain performance, the XDP
> > > BPF-prog (still) only has access to the first buffer. Adding payload
> > > access across multiple buffers is left for later (another patchset), and
> > > this patchset should still allow for those future extensions. The goal
> > > is to lift the MTU restriction that comes with XDP while maintaining the
> > > same performance as before.
> [...]
> > >
> > > [0] https://netdevconf.info/0x14/session.html?talk-the-path-to-tcp-4k-mtu-and-rx-zerocopy
> > > [1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
> > > [2] https://netdevconf.info/0x14/session.html?tutorial-add-XDP-support-to-a-NIC-driver (XDPmulti-buffers section)
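For anyone following along: under this design an XDP program keeps using
the usual ctx->data/ctx->data_end window, which simply covers only the
first buffer of a multi-buffer frame. A minimal sketch of such a program
(not part of this series, just the standard libbpf/XDP conventions):

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_first_buf_only(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;

	/* data..data_end covers only the first buffer, so only the
	 * headers that fit there are visible to the program.
	 */
	if ((void *)(eth + 1) > data_end)
		return XDP_DROP;

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";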
> >
> > I took your patches for a test run with the AF_XDP sample xdpsock on an
> > i40e card, and the throughput degradation is between 2 and 6%, depending
> > on the setup and on which microbenchmark within xdpsock is executed. And
> > this is without sending any multi-frame packets, just single-frame ones.
> > Tirtha made changes to the i40e driver to support this new interface, so
> > that is included in the measurements.
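(For context, the per-packet work these xdpsock microbenchmarks exercise
is roughly the receive loop sketched below, written against the libbpf
xsk API. It is a simplified illustration only: umem/socket setup, the
fill ring and error handling are omitted, and the rx_drop_batch and
umem_area names are made up here, not taken from xdpsock.)

#include <bpf/xsk.h>

static unsigned int rx_drop_batch(struct xsk_ring_cons *rx, void *umem_area)
{
	__u32 idx_rx = 0;
	unsigned int rcvd, i;

	/* Grab a batch of completed RX descriptors. */
	rcvd = xsk_ring_cons__peek(rx, 64, &idx_rx);
	if (!rcvd)
		return 0;

	for (i = 0; i < rcvd; i++) {
		const struct xdp_desc *desc =
			xsk_ring_cons__rx_desc(rx, idx_rx++);
		char *pkt = xsk_umem__get_data(umem_area, desc->addr);

		/* rxdrop discards the packet; a real application would
		 * process pkt and recycle desc->addr via the fill ring.
		 */
		(void)pkt;
	}

	xsk_ring_cons__release(rx, rcvd);
	return rcvd;
}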
>
> Could you please share Tirtha's i40e support patch with me?
We will post them on the list as an RFC. Tirtha also added AF_XDP
multi-frame support on top of Lorenzo's patches, so we will send that
one out as well. I will also rerun my experiments, document them
properly, and send them out, just to be sure that I did not make any
mistake.

Just note that I would really like the multi-frame support to get in;
I have lost count of how many people have asked for it to be added to
XDP and AF_XDP. So please check our implementation and help improve it
so we can get the overhead down to where we want it to be.
Thanks: Magnus
> I would like to reproduce these results in my test lab, in order to
> figure out where the throughput degradation comes from.
>
> > What performance do you see with the mvneta card? How much are we
> > willing to pay for this feature when it is not being used, or can we
> > selectively turn it on only when needed?
>
> Well, as Daniel says, performance-wise we require close to /zero/
> additional overhead, especially as you state that this happens when
> sending single frames, which is a base case that we must not slow down.
>
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer
>