Message-ID: <CALzJLG9VZzeY=GNWczr59dydTuDxZt0YbR7=-3Y2KT4EGXOTdg@mail.gmail.com>
Date: Mon, 10 Dec 2018 01:51:51 -0800
From: Saeed Mahameed <saeedm@....mellanox.co.il>
To: "David S. Miller" <davem@...emloft.net>
Cc: ilias.apalodimas@...aro.org, Eric Dumazet <eric.dumazet@...il.com>,
fw@...len.de, Jesper Dangaard Brouer <brouer@...hat.com>,
Linux Netdev List <netdev@...r.kernel.org>, toke@...e.dk,
ard.biesheuvel@...aro.org, jasowang@...hat.com,
bjorn.topel@...el.com, w@....eu,
Saeed Mahameed <saeedm@...lanox.com>,
mykyta.iziumtsev@...il.com,
Daniel Borkmann <borkmann@...earbox.net>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Tariq Toukan <tariqt@...lanox.com>
Subject: Re: [net-next, RFC, 4/8] net: core: add recycle capabilities on skbs via page_pool API

On Sat, Dec 8, 2018 at 12:21 PM David Miller <davem@...emloft.net> wrote:
>
> From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
> Date: Sat, 8 Dec 2018 16:57:28 +0200
>
> > The patchset speeds up the mvneta driver on the default network
> > stack. The only change that was needed was to adapt the driver to
> > using the page_pool API. The speed improvements we are seeing on
> > specific workloads (i.e. 256b < packet < 400b) are almost 3x.
> >
> > Lots of high-speed drivers are doing similar recycling tricks themselves (and
> > there's no common code; everyone is doing something similar, though). All we are
> > trying to do is provide a unified API to make that easier for the rest. Another
> > advantage is that if some drivers switch to the API, adding XDP
> > functionality to them is pretty trivial.
>
> Yeah this is a very important point moving forward.
>
> Jesse Brandeburg brought the following up to me at LPC and I'd like to
> develop it further.
>
> Right now we tell driver authors to write a new driver as SKB based,
> and once they've done all of that work we tell them to basically
> shoe-horn XDP support into that somewhat different framework.
>
> Instead, the model should be the other way around, because with a raw
> meta-data free set of data buffers we can always construct an SKB or
> pass it to XDP.
>
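To make the point above concrete -- a raw, metadata-free buffer can always
be handed to XDP as-is or turned into an skb later -- here is a minimal
sketch of the skb side, using the existing build_skb()/skb_reserve()/skb_put()
helpers. The wrapper name and the simplified truesize handling are
illustrative only, not taken from the patchset:

#include <linux/skbuff.h>

/* Illustrative only: wrap a raw, driver-owned buffer into an skb once we
 * know it is headed for the regular stack (e.g. after XDP_PASS).
 * 'buf_size' is the full buffer size build_skb() uses for truesize;
 * 'headroom' and 'len' describe where the packet data actually sits. */
static struct sk_buff *wrap_raw_buf(void *buf, unsigned int buf_size,
                                    unsigned int headroom, unsigned int len)
{
        struct sk_buff *skb = build_skb(buf, buf_size);

        if (!skb)
                return NULL;            /* caller still owns the buffer */

        skb_reserve(skb, headroom);     /* skip headroom/metadata area  */
        skb_put(skb, len);              /* expose the received payload  */
        return skb;
}
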
Yep. One concern is how to integrate the BTF approach so the stack can
translate hw-specific metadata from that raw buffer when populating
stack-generated skbs. We will need an intermediate format that both the
driver and the stack understand; userspace XDP programs will get the
format from the driver and compile against it, but we can't do that in
the kernel!
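
To sketch what that could look like (names and fields below are purely
hypothetical, not from this series): the driver defines its RX metadata
layout as a C struct, exposes its BTF description, and places the data in
the buffer's data_meta area; an XDP program, or a stack-side helper that
fills skb fields, then interprets it through that same description:

#include <linux/types.h>
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical driver-defined RX metadata layout; BTF would carry this
 * description so userspace XDP programs and the stack agree on it. */
struct hw_rx_meta {
        __u32 rx_hash;
        __u16 csum_status;
        __u16 vlan_tci;
};

SEC("xdp")
int read_hw_meta(struct xdp_md *ctx)
{
        void *data      = (void *)(long)ctx->data;
        void *data_meta = (void *)(long)ctx->data_meta;
        struct hw_rx_meta *meta = data_meta;

        /* Bounds check required by the verifier before touching metadata. */
        if ((void *)(meta + 1) > data)
                return XDP_PASS;

        /* A stack-side consumer would map these fields onto skb->hash,
         * skb->csum, etc., using the driver's BTF description. */
        return meta->rx_hash ? XDP_PASS : XDP_DROP;
}
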
> So drivers should be targeting some raw data buffer kind of interface
> which takes care of all of this stuff. If the buffers get wrapped
> into an SKB and get pushed into the traditional networking stack, the
> driver shouldn't know or care. Likewise if it ends up being processed
> with XDP, it should not need to know or care.
>
We need XDP-path-exclusive features, like the page_pool in this patch,
to give that path an advantage over the conventional one; otherwise it
will always be faster and more appealing to build skbs at the driver
level.
We also need to consider legacy hardware that might not be able to
fully support such raw buffers (for example, the ability to preserve
per-packet headroom).
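
For reference, a rough sketch of the driver-level usage being discussed,
following the page_pool API roughly as it looks around this series (helper
names and signatures have moved around between kernel versions, so treat
the details as approximate):

#include <linux/dma-direction.h>
#include <net/page_pool.h>

/* One pool per RX ring; PP_FLAG_DMA_MAP lets the pool keep pages
 * DMA-mapped, so the fast path only allocates and recycles pages
 * without remapping them. */
static struct page_pool *rx_ring_create_pool(struct device *dev, u32 ring_size)
{
        struct page_pool_params pp_params = {
                .flags     = PP_FLAG_DMA_MAP,
                .order     = 0,
                .pool_size = ring_size,
                .nid       = NUMA_NO_NODE,
                .dev       = dev,
                .dma_dir   = DMA_FROM_DEVICE,
        };

        return page_pool_create(&pp_params);    /* ERR_PTR() on failure */
}

/* RX refill: pull a (possibly recycled) page from the pool. */
static struct page *rx_ring_get_page(struct page_pool *pool)
{
        return page_pool_dev_alloc_pages(pool);
}

/* e.g. XDP_DROP fast path: return the page to the pool's lockless cache
 * instead of going through put_page(); allow_direct=true since this runs
 * in NAPI context for the RX ring that owns the pool. */
static void rx_ring_recycle(struct page_pool *pool, struct page *page)
{
        __page_pool_put_page(pool, page, true);
}
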
> All of those details should be behind a common layer. Then we can
> control:
>
> 1) Buffer handling, recycling, "fast paths"
>
> 2) Statistics
>
> 3) XDP feature sets
>
> We can consolidate behavior and semantics across all of the drivers
> if we do this. No more talk about "supporting all XDP features",
> and the inconsistencies we have because of that.
>
> The whole common statistics discussion could be resolved with this
> common layer as well.
>
> We'd be able to control and properly optimize everything.
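
To make the proposal a bit more concrete, here is one purely hypothetical
shape such a common layer's driver-facing interface could take; none of
these names exist in the kernel, they only mirror the three points quoted
above:

#include <linux/types.h>

/* Entirely hypothetical; illustrates the split being proposed, no more. */

struct rx_buf {                         /* raw, metadata-free buffer handle */
        void            *data;          /* packet start, headroom reserved  */
        unsigned int     len;
        unsigned int     headroom;
        struct page     *page;          /* backing page, owned by the layer */
};

struct rx_queue_stats {                 /* 2) one common statistics format  */
        u64 packets;
        u64 bytes;
        u64 drops;
        u64 recycled;
};

struct rx_layer_ops {
        /* 1) buffer handling, recycling, fast paths */
        struct rx_buf *(*get_buf)(void *rxq);
        void (*recycle_buf)(void *rxq, struct rx_buf *buf);

        /* 2) statistics */
        void (*get_stats)(void *rxq, struct rx_queue_stats *stats);

        /* 3) XDP feature set the queue actually supports */
        u32 (*xdp_features)(void *rxq);
};

The driver would only fill in something like rx_layer_ops; whether a given
buffer ends up wrapped in an skb or handed to an XDP program would be the
common layer's business.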