Message-ID: <20191224150058.4400ffab@carbon>
Date: Tue, 24 Dec 2019 15:00:58 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Ilias Apalodimas <ilias.apalodimas@...aro.org>
Cc: Matteo Croce <mcroce@...hat.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org,
Lorenzo Bianconi <lorenzo@...nel.org>,
Maxime Chevallier <maxime.chevallier@...tlin.com>,
Antoine Tenart <antoine.tenart@...tlin.com>,
Luka Perkov <luka.perkov@...tura.hr>,
Tomislav Tomasic <tomislav.tomasic@...tura.hr>,
Marcin Wojtas <mw@...ihalf.com>,
Stefan Chulski <stefanc@...vell.com>,
Nadav Haklai <nadavh@...vell.com>, brouer@...hat.com
Subject: Re: [RFC net-next 0/2] mvpp2: page_pool support
On Tue, 24 Dec 2019 11:52:29 +0200
Ilias Apalodimas <ilias.apalodimas@...aro.org> wrote:
> On Tue, Dec 24, 2019 at 02:01:01AM +0100, Matteo Croce wrote:
> > These patches change the memory allocator of mvpp2 from the frag allocator to
> > the page_pool API. This change is needed to later add XDP support to mvpp2.
> >
> > The reason I send this as an RFC is that with this changeset, mvpp2 performs
> > much slower. This is the tc drop rate measured with a single flow:
> >
> > stock net-next with frag allocator:
> > rx: 900.7 Mbps 1877 Kpps
> >
> > this patchset with page_pool:
> > rx: 423.5 Mbps 882.3 Kpps
> >
> > This is the perf top when receiving traffic:
> >
> > 27.68% [kernel] [k] __page_pool_clean_page
>
> This seems extremely high on the list.
This looks related to the cost of DMA unmap, as page_pool has
PP_FLAG_DMA_MAP set. (It is a little strange, as page_pool uses
DMA_ATTR_SKIP_CPU_SYNC, which should make the unmap less expensive.)
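For reference, a minimal sketch of what driver-side pool setup with
PP_FLAG_DMA_MAP typically looks like (the mvpp2-specific names such as
port->dev and the pool sizing are assumptions for illustration, not
taken from this patchset):

	/* Sketch: let the page_pool own the DMA mapping.  With
	 * PP_FLAG_DMA_MAP the pool dma_map_page()s every page it hands
	 * out, and __page_pool_clean_page() has to dma_unmap_page()
	 * whenever a page leaves the pool instead of being recycled,
	 * which is the cost showing up at the top of this profile.
	 */
	struct page_pool_params pp_params = {
		.order		= 0,			/* one page per rx buffer */
		.flags		= PP_FLAG_DMA_MAP,	/* pool performs dma_map/unmap */
		.pool_size	= rxq_size,		/* assumed: match RX ring size */
		.nid		= NUMA_NO_NODE,
		.dev		= port->dev->dev.parent, /* assumed DMA device */
		.dma_dir	= DMA_FROM_DEVICE,
	};
	struct page_pool *pool;

	pool = page_pool_create(&pp_params);
	if (IS_ERR(pool))
		return PTR_ERR(pool);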
> > 9.79% [kernel] [k] get_page_from_freelist
You are clearly hitting the page allocator every time, because you are
not using the page_pool recycle facility.
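To make the point concrete, here is a sketch of what using the recycle
facility looks like on a driver-internal drop path (NAPI context); the
function name is illustrative, not from the patchset:

	/* Sketch: return the page straight to the pool's lockless
	 * per-CPU cache instead of freeing it back to the page
	 * allocator (no dma_unmap, no free_unref_page()).  Only safe
	 * from NAPI/softirq context.
	 */
	static void mvpp2_rx_drop_recycle(struct page_pool *pool,
					  struct page *page)
	{
		page_pool_recycle_direct(pool, page);
	}

In this RFC the packets are dropped by tc after the SKB has been built,
so the pages take the normal SKB free path (page_frag_free() in the
profile above) and end up back in the page allocator rather than in the
pool.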
> > 7.18% [kernel] [k] free_unref_page
> > 4.64% [kernel] [k] build_skb
> > 4.63% [kernel] [k] __netif_receive_skb_core
> > 3.83% [mvpp2] [k] mvpp2_poll
> > 3.64% [kernel] [k] eth_type_trans
> > 3.61% [kernel] [k] kmem_cache_free
> > 3.03% [kernel] [k] kmem_cache_alloc
> > 2.76% [kernel] [k] dev_gro_receive
> > 2.69% [mvpp2] [k] mvpp2_bm_pool_put
> > 2.68% [kernel] [k] page_frag_free
> > 1.83% [kernel] [k] inet_gro_receive
> > 1.74% [kernel] [k] page_pool_alloc_pages
> > 1.70% [kernel] [k] __build_skb
> > 1.47% [kernel] [k] __alloc_pages_nodemask
> > 1.36% [mvpp2] [k] mvpp2_buf_alloc.isra.0
> > 1.29% [kernel] [k] tcf_action_exec
> >
> > I tried Ilias' patches for page_pool recycling, and I get an improvement
> > to ~1100 Kpps, but I'm still far from the original allocator.
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer