Date:   Sat, 08 Dec 2018 12:21:10 -0800 (PST)
From:   David Miller <davem@...emloft.net>
To:     ilias.apalodimas@...aro.org
Cc:     eric.dumazet@...il.com, fw@...len.de, brouer@...hat.com,
        netdev@...r.kernel.org, toke@...e.dk, ard.biesheuvel@...aro.org,
        jasowang@...hat.com, bjorn.topel@...el.com, w@....eu,
        saeedm@...lanox.com, mykyta.iziumtsev@...il.com,
        borkmann@...earbox.net, alexei.starovoitov@...il.com,
        tariqt@...lanox.com
Subject: Re: [net-next, RFC, 4/8] net: core: add recycle capabilities on
 skbs via page_pool API

From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
Date: Sat, 8 Dec 2018 16:57:28 +0200

> The patchset speeds up the mvneta driver on the default network
> stack. The only change that was needed was to adapt the driver to
> use the page_pool API. The speed improvements we are seeing on
> specific workloads (i.e. 256B < packet < 400B) are almost 3x.
> 
> Lots of high-speed drivers are doing similar recycling tricks themselves
> (there's no common code, but everyone is doing something similar). All we
> are trying to do is provide a unified API that makes this easier for the
> rest. Another advantage is that if some drivers switch to the API, adding
> XDP functionality to them is pretty trivial.
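
(For anyone not following the patches: the adaptation amounts to taking
RX buffers from a page_pool and handing them back to it, via the helpers
in include/net/page_pool.h. A minimal sketch; rx_ring_size and the
netdev pointer are placeholders, and DMA mapping and ring refill details
are elided:)

struct page_pool_params pp_params = {
	.order		= 0,
	.pool_size	= rx_ring_size,		/* placeholder */
	.nid		= NUMA_NO_NODE,
	.dev		= netdev->dev.parent,
	.dma_dir	= DMA_FROM_DEVICE,
};
struct page_pool *pool;
struct page *page;

pool = page_pool_create(&pp_params);
if (IS_ERR(pool))
	return PTR_ERR(pool);

/* RX refill: served from the pool's recycle cache when pages are
 * being returned, from the page allocator otherwise.
 */
page = page_pool_dev_alloc_pages(pool);

/* On free (e.g. a drop): give the page back for recycling instead
 * of put_page().
 */
page_pool_put_page(pool, page);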

Yeah this is a very important point moving forward.

Jesse Brandeburg brought the following up to me at LPC, and I'd like to
develop it further.

Right now we tell driver authors to write a new driver as SKB-based,
and once they've done all of that work we tell them to basically
shoehorn XDP support into that somewhat different framework.

Instead, the model should be the other way around: from a raw,
metadata-free set of data buffers we can always either construct an
SKB or pass the buffer to XDP.

So drivers should be targeting some kind of raw data buffer interface
which takes care of all of this stuff.  If the buffers get wrapped
into an SKB and pushed into the traditional networking stack, the
driver shouldn't know or care.  Likewise, if they end up being
processed with XDP, it should not need to know or care.
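
To sketch what that could look like (struct netdev_rx_buffer and
common_rx_one() are invented names, nothing like this exists today;
bpf_prog_run_xdp(), build_skb() and napi_gro_receive() are the real
helpers):

/* Hypothetical raw, metadata-free buffer the driver hands over. */
struct netdev_rx_buffer {
	struct page	*page;
	unsigned int	offset;		/* start of packet data */
	unsigned int	len;		/* packet length */
	unsigned int	frame_sz;	/* total buffer size */
};

/* Hypothetical common entry point: the driver neither knows nor
 * cares whether the buffer ends up in XDP or in an skb.
 */
static void common_rx_one(struct napi_struct *napi,
			  struct bpf_prog *xdp_prog,
			  struct netdev_rx_buffer *buf)
{
	void *data = page_address(buf->page) + buf->offset;
	struct sk_buff *skb;

	if (xdp_prog) {
		struct xdp_buff xdp;

		xdp.data_hard_start = page_address(buf->page);
		xdp.data = data;
		xdp.data_end = data + buf->len;
		/* xdp.rxq / data_meta setup elided */

		switch (bpf_prog_run_xdp(xdp_prog, &xdp)) {
		case XDP_PASS:
			break;		/* fall through to skb path */
		case XDP_DROP:
		default:		/* XDP_TX/REDIRECT elided */
			return;		/* page recycled by the pool */
		}
	}

	/* Traditional stack: wrap the same raw buffer in an skb. */
	skb = build_skb(page_address(buf->page), buf->frame_sz);
	if (!skb)
		return;
	skb_reserve(skb, buf->offset);
	skb_put(skb, buf->len);
	napi_gro_receive(napi, skb);
}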

All of those details should be behind a common layer.  Then we can
control:

1) Buffer handling, recycling, "fast paths"

2) Statistics

3) XDP feature sets
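
Just to make those three concrete (every name below is invented for
illustration):

/* Hypothetical per-ring state owned by the common layer, so each
 * of the three points lives in one place instead of being
 * reimplemented per driver.
 */
struct common_rx_ring {
	/* 1) buffer handling: the layer owns the page_pool, so
	 *    recycling and the "fast paths" are shared code.
	 */
	struct page_pool	*pool;

	/* 2) statistics: counted once, here, with one layout that
	 *    every driver reports identically.
	 */
	u64			packets;
	u64			bytes;
	u64			dropped;

	/* 3) XDP feature set: declared as data, so "supports XDP"
	 *    means the same thing everywhere. (Invented flags.)
	 */
	u32			xdp_caps;	/* COMMON_XDP_TX | ... */
};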

We can consolidate behavior and semantics across all of the drivers
if we do this.  No more talk about "supporting all XDP features",
and no more of the inconsistencies we have because of that.

The whole common statistics discussion could be resolved with this
common layer as well.

We'd be able to control and properly optimize everything.
