Message-ID: <20181208144553.03b059c6@redhat.com>
Date: Sat, 8 Dec 2018 14:45:53 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Florian Westphal <fw@...len.de>, netdev@...r.kernel.org,
"David S. Miller" <davem@...emloft.net>,
Toke Høiland-Jørgensen
<toke@...e.dk>, ard.biesheuvel@...aro.org,
Jason Wang <jasowang@...hat.com>, ilias.apalodimas@...aro.org,
Björn Töpel <bjorn.topel@...el.com>,
w@....eu, Saeed Mahameed <saeedm@...lanox.com>,
mykyta.iziumtsev@...il.com,
Daniel Borkmann <borkmann@...earbox.net>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Tariq Toukan <tariqt@...lanox.com>, brouer@...hat.com
Subject: Re: [net-next, RFC, 4/8] net: core: add recycle capabilities on
skbs via page_pool API
On Sat, 8 Dec 2018 04:29:17 -0800
Eric Dumazet <eric.dumazet@...il.com> wrote:
> On 12/08/2018 01:57 AM, Florian Westphal wrote:
> > Jesper Dangaard Brouer <brouer@...hat.com> wrote:
> >> From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
> >>
> >> This patch changes struct sk_buff, and is thus by definition
> >> controversial.
> >>
> >> Place a new member 'mem_info' of type struct xdp_mem_info just after
> >> the members (flags) head_frag and pfmemalloc, and not in between
> >> headers_start/end, to ensure skb_copy() and pskb_copy() work as-is.
> >> Copying mem_info during skb_clone() is required. This makes sure that
> >> pages are correctly freed or recycled during the altered
> >> skb_free_head() invocation.
> >
> > I read this to mean that this 'info' isn't accessed/needed until the
> > skb is freed. Any reason it's not added at the end?
> >
> > This would avoid moving other fields that are probably accessed
> > more frequently during processing.
> >
>
> But I do not get why the patch is needed.
>
> Adding extra work to each skb destruction is costly.
>
> I thought XDP was all about _not_ having skbs.
>
> Please, let's not slow down the non-XDP stack just to make XDP more
> appealing.
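
For context, the layout change in the quoted commit message amounts to
roughly the following (a sketch of the relevant struct sk_buff fragment
with surrounding fields elided, not the exact diff):

	struct sk_buff {
		/* ... */
		__u8			head_frag:1;
		__u8			pfmemalloc:1;
		/* New member, placed next to the flags above and
		 * copied explicitly by skb_clone(), so pages are
		 * freed or recycled correctly in skb_free_head().
		 */
		struct xdp_mem_info	mem_info;
		/* ... */
		/* Fields between these two markers are bulk-copied
		 * by skb_copy()/pskb_copy(); mem_info deliberately
		 * sits outside this range so they work as-is.
		 */
		__u32			headers_start[0];
		/* ... */
		__u32			headers_end[0];
		/* ... */
	};
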
In general this work is about better cooperation between XDP and the
netstack. The patch is needed because the page_pool keeps pages DMA
mapped and therefore needs a return hook when the skb frees its pages.
I doubt that the extra compare-and-branch will affect your use-case on
super-scalar CPUs; the hook is sketched below.
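
To make the cost concrete, the return hook adds roughly one test to
the page-freeing path. A minimal sketch, assuming the mem_info member
from the quoted patch (the return helper name below is hypothetical,
not the exact RFC code):

	static void skb_free_head(struct sk_buff *skb)
	{
		unsigned char *head = skb->head;

		if (skb->head_frag) {
			/* The extra compare-and-branch: pages that
			 * came from a page_pool are returned to the
			 * pool still DMA-mapped, instead of freed.
			 */
			if (skb->mem_info.type == MEM_TYPE_PAGE_POOL) {
				/* hypothetical helper name */
				xdp_return_skb_page(head, &skb->mem_info);
				return;
			}
			skb_free_frag(head);
		} else {
			kfree(head);
		}
	}
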
We (Ilias and I) are actually testing this stuff on low-end ARM64
in-order execution CPUs, which is nice because the performance effects
of our code changes are not hidden by speculative execution units. I
insist (and Ilias agrees) that we do benchmark-driven development, and
I invite people to monitor our progress here[1].

So trust me, I am as concerned as you about any performance regression,
and I am vigilantly measuring this stuff. (That is more than can be
said about a lot of the other stuff that gets accepted on this list.)
[1] https://github.com/xdp-project/xdp-project/blob/master/areas/arm64/
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer