Message-ID: <YOfv1vHcZPBvyfaN@enceladus>
Date: Fri, 9 Jul 2021 09:42:30 +0300
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Yunsheng Lin <linyunsheng@...wei.com>
Cc: Matteo Croce <mcroce@...ux.microsoft.com>,
Marcin Wojtas <mw@...ihalf.com>,
"Russell King (Oracle)" <linux@...linux.org.uk>,
Sven Auhagen <sven.auhagen@...eatech.de>,
David Miller <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, linuxarm@...neuler.org,
yisen.zhuang@...wei.com, salil.mehta@...wei.com,
Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
John Fastabend <john.fastabend@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
Fenghua Yu <fenghua.yu@...el.com>,
Roman Gushchin <guro@...com>, Peter Xu <peterx@...hat.com>,
feng.tang@...el.com, Jason Gunthorpe <jgg@...pe.ca>,
Matteo Croce <mcroce@...rosoft.com>,
Hugh Dickins <hughd@...gle.com>,
Jonathan Lemon <jonathan.lemon@...il.com>,
Alexander Lobakin <alobakin@...me>,
Willem de Bruijn <willemb@...gle.com>,
wenxu <wenxu@...oud.cn>, Cong Wang <cong.wang@...edance.com>,
Kevin Hao <haokexin@...il.com>,
Aleksandr Nogikh <nogikh@...gle.com>,
Marco Elver <elver@...gle.com>, netdev@...r.kernel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
bpf@...r.kernel.org
Subject: Re: [PATCH net-next RFC 0/2] add elevated refcnt support for page
pool
On Fri, Jul 09, 2021 at 02:40:02PM +0800, Yunsheng Lin wrote:
> On 2021/7/9 12:15, Matteo Croce wrote:
> > On Wed, Jul 7, 2021 at 6:50 PM Marcin Wojtas <mw@...ihalf.com> wrote:
> >>
> >> Hi,
> >>
> >>
> >> On Wed, 7 Jul 2021 at 01:20, Matteo Croce <mcroce@...ux.microsoft.com> wrote:
> >>>
> >>> On Tue, Jul 6, 2021 at 5:51 PM Russell King (Oracle)
> >>> <linux@...linux.org.uk> wrote:
> >>>>
> >>>> On Fri, Jul 02, 2021 at 03:39:47PM +0200, Matteo Croce wrote:
> >>>>> On Wed, 30 Jun 2021 17:17:54 +0800
> >>>>> Yunsheng Lin <linyunsheng@...wei.com> wrote:
> >>>>>
> >>>>>> This patchset adds elevated refcnt support for page pool
> >>>>>> and enables skb's page frag recycling based on page pool
> >>>>>> in the hns3 driver.
> >>>>>>
> >>>>>> Yunsheng Lin (2):
> >>>>>> page_pool: add page recycling support based on elevated refcnt
> >>>>>> net: hns3: support skb's frag page recycling based on page pool
> >>>>>>
> >>>>>> drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 79 +++++++-
> >>>>>> drivers/net/ethernet/hisilicon/hns3/hns3_enet.h | 3 +
> >>>>>> drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c | 1 +
> >>>>>> drivers/net/ethernet/marvell/mvneta.c | 6 +-
> >>>>>> drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 2 +-
> >>>>>> include/linux/mm_types.h | 2 +-
> >>>>>> include/linux/skbuff.h | 4 +-
> >>>>>> include/net/page_pool.h | 30 ++-
> >>>>>> net/core/page_pool.c | 215 +++++++++++++++++----
> >>>>>> 9 files changed, 285 insertions(+), 57 deletions(-)
> >>>>>>
> >>>>>
> >>>>> Interesting!
> >>>>> Unfortunately I'll not have access to my macchiatobin anytime soon;
> >>>>> can someone test the impact, if any, on mvpp2?
> >>>>
> >>>> I'll try to test. Please let me know what kind of testing you're
> >>>> looking for (I haven't been following these patches, sorry.)
> >>>>
> >>>
> >>> A drop test or L2 routing test will be enough.
> >>> BTW, I should have the macchiatobin back on Friday.
> >>
> >> I have a 10G packet generator connected to 10G ports of CN913x-DB - I
> >> will stress mvpp2 in L2 forwarding early next week (I'm mostly AFK
> >> until Monday).
> >>
> >
> > I managed to do a drop test on mvpp2. There may be a slowdown, but
> > it's below the measurement uncertainty.
> >
> > Perf top before:
> >
> > Overhead  Shared Object  Symbol
> > 8.48% [kernel] [k] page_pool_put_page
> > 2.57% [kernel] [k] page_pool_refill_alloc_cache
> > 1.58% [kernel] [k] page_pool_alloc_pages
> > 0.75% [kernel] [k] page_pool_return_skb_page
> >
> > after:
> >
> > Overhead  Shared Object  Symbol
> > 8.34% [kernel] [k] page_pool_put_page
> > 4.52% [kernel] [k] page_pool_return_skb_page
> > 4.42% [kernel] [k] page_pool_sub_bias
> > 3.16% [kernel] [k] page_pool_alloc_pages
> > 2.43% [kernel] [k] page_pool_refill_alloc_cache
>
> Hi Matteo,
>
> Thanks for the testing.
> It seems you have adapted the mvpp2 driver to use the new frag
> API for page pool. There is one missing optimization for the XDP
> case: with elevated refcnt, the page is always returned to the
> pool->ring regardless of the context of page_pool_put_page().
>
> Adding back that optimization may close some of the performance
> gap above if the drop is happening in softirq context.
>
I think what Matteo did was a pure netstack test. We'll need testing on
both XDP and normal network cases to be able to figure out the exact
impact.
Thanks
/Ilias
> >
> > Regards,
> >