Message-ID: <20181208145728.GA10660@apalos>
Date:   Sat, 8 Dec 2018 16:57:28 +0200
From:   Ilias Apalodimas <ilias.apalodimas@...aro.org>
To:     Eric Dumazet <eric.dumazet@...il.com>
Cc:     Florian Westphal <fw@...len.de>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        netdev@...r.kernel.org, "David S. Miller" <davem@...emloft.net>,
        Toke Høiland-Jørgensen <toke@...e.dk>,
        ard.biesheuvel@...aro.org, Jason Wang <jasowang@...hat.com>,
        Björn Töpel <bjorn.topel@...el.com>,
        w@....eu, Saeed Mahameed <saeedm@...lanox.com>,
        mykyta.iziumtsev@...il.com,
        Daniel Borkmann <borkmann@...earbox.net>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Tariq Toukan <tariqt@...lanox.com>
Subject: Re: [net-next, RFC, 4/8] net: core: add recycle capabilities on skbs
 via page_pool API

Hi Eric, 
> >> This patch is changing struct sk_buff, and is thus per-definition
> >> controversial.
> >>
> >> Place a new member 'mem_info' of type struct xdp_mem_info, just after
> >> members (flags) head_frag and pfmemalloc, and not in between
> >> headers_start/end to ensure skb_copy() and pskb_copy() work as-is.
> >> Copying mem_info during skb_clone() is required.  This makes sure that
> >> pages are correctly freed or recycled during the altered
> >> skb_free_head() invocation.
> > 
> > I read this to mean that this 'info' isn't accessed/needed until skb
> > is freed.  Any reason it's not added at the end?
> > 
> > This would avoid moving other fields that are probably accessed
> > more frequently during processing.
> > 
> 
> But I do not get why the patch is needed.
> 
> Adding extra work to each skb destruction is costly.
> 
> I thought XDP was all about _not_ having skbs.
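
(For context, the placement described in the quoted commit message would
land roughly here in struct sk_buff; this is a sketch reconstructed from
that description, not the actual patch:)

/* mem_info sits with the other flag-like fields, before headers_start,
 * so skb_copy()/pskb_copy() keep working as-is; skb_clone() has to copy
 * it so the recycle path still sees it on clones. */
struct sk_buff {
	/* ... */
	__u8			head_frag:1;
	__u8			pfmemalloc:1;
	struct xdp_mem_info	mem_info;	/* proposed new member */
	/* ... */
	__u32			headers_start[0];
	/* fields in here are copied by skb_copy()/pskb_copy() */
	__u32			headers_end[0];
	/* ... */
};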

You hit the only point I don't personally like in the code: the XDP info in the
skb. Considering the benefits, I am convinced this is OK, and here's why:

> Please let's not slow down the non-XDP stack only to make XDP more appealing.

We are not trying to do that; on the contrary, there is nothing in the patchset
that speeds up XDP at the expense of anything else. The patchset speeds up the
mvneta driver on the default network stack, and the only change needed was to
adapt the driver to the page_pool API (sketched below). The speed improvement we
are seeing on specific workloads (i.e. 256b < packet < 400b) is almost 3x.
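
To give an idea of what adapting a driver means, here is a rough sketch of
the allocation side (hypothetical helpers, not the actual mvneta code; the
page_pool calls are real, the names around them are made up):

#include <linux/dma-mapping.h>
#include <net/page_pool.h>

/* One pool per RX queue. PP_FLAG_DMA_MAP asks the pool to DMA-map
 * pages once when they enter the pool, instead of per packet.
 * Returns ERR_PTR() on failure, so the caller checks IS_ERR(). */
static struct page_pool *rxq_pool_create(struct device *dev, int nid)
{
	struct page_pool_params pp_params = {
		.flags     = PP_FLAG_DMA_MAP,
		.order     = 0,
		.pool_size = 256,	/* typically the RX ring size */
		.nid       = nid,
		.dev       = dev,
		.dma_dir   = DMA_FROM_DEVICE,
	};

	return page_pool_create(&pp_params);
}

/* Refilling an RX descriptor becomes a single pool allocation. */
static struct page *rxq_refill(struct page_pool *pool)
{
	return page_pool_dev_alloc_pages(pool);
}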

Lots of high-speed drivers implement similar recycling tricks themselves; there
is no common code, but everyone is doing something comparable. All we are trying
to do is provide a unified API that makes this easier for the rest. Another
advantage is that once a driver has switched to the API, adding XDP
functionality to it is pretty trivial.
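
On the free side, the driver-private recycling tricks collapse into one call
(again a sketch; everything except the page_pool call is a made-up name):

/* Instead of dma_unmap + put_page plus a driver-private cache, the
 * buffer-free path hands the page back to the pool with its DMA
 * mapping intact. page_pool_recycle_direct() may only be called
 * from the NAPI context that owns the pool, since it feeds a
 * lockless per-CPU cache. */
static void rxq_free_buf(struct page_pool *pool, struct page *page)
{
	page_pool_recycle_direct(pool, page);
}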

Moreover, our tests were only performed on systems without an SMMU, or with the
SMMU disabled. On one platform I have access to, enabling the SMMU has a
noticeable performance impact. By keeping the buffers DMA-mapped we believe this
impact will become substantially smaller (I'll come back with results once I
have them).
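
The win comes from the classic map-once/sync-per-packet pattern: with
long-lived mappings the per-packet cost is a cache sync rather than a full
IOMMU map/unmap. Roughly (hypothetical helper, standard DMA API calls):

#include <linux/dma-mapping.h>

/* dma_addr comes from the mapping the pool did once at allocation
 * time; per packet we only sync the bytes the CPU will touch. */
static void rx_buf_sync_for_cpu(struct device *dev, dma_addr_t dma_addr,
				unsigned int len)
{
	dma_sync_single_for_cpu(dev, dma_addr, len, DMA_FROM_DEVICE);
}

/* ...and sync back before handing the buffer to the device again. */
static void rx_buf_sync_for_device(struct device *dev, dma_addr_t dma_addr,
				   unsigned int len)
{
	dma_sync_single_for_device(dev, dma_addr, len, DMA_FROM_DEVICE);
}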

I do understand your point, but in my head the potential advantages outweigh it
by a lot.

I have other concerns about the patchset, though, such as how much memory it is
'ok' to keep mapped, given that we are using the streaming DMA API. Are we going
to negatively affect anyone else by doing so?

Thanks
/Ilias
