Message-ID: <CAGnkfhzYdqBPvRM8j98HVMzeHSbJ8RyVH+nLpoKBuz2iqErPog@mail.gmail.com>
Date: Tue, 24 Dec 2019 15:37:49 +0100
From: Matteo Croce <mcroce@...hat.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: Ilias Apalodimas <ilias.apalodimas@...aro.org>,
netdev <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Lorenzo Bianconi <lorenzo@...nel.org>,
Maxime Chevallier <maxime.chevallier@...tlin.com>,
Antoine Tenart <antoine.tenart@...tlin.com>,
Luka Perkov <luka.perkov@...tura.hr>,
Tomislav Tomasic <tomislav.tomasic@...tura.hr>,
Marcin Wojtas <mw@...ihalf.com>,
Stefan Chulski <stefanc@...vell.com>,
Nadav Haklai <nadavh@...vell.com>
Subject: Re: [RFC net-next 0/2] mvpp2: page_pool support
On Tue, Dec 24, 2019 at 3:01 PM Jesper Dangaard Brouer
<brouer@...hat.com> wrote:
>
> On Tue, 24 Dec 2019 11:52:29 +0200
> Ilias Apalodimas <ilias.apalodimas@...aro.org> wrote:
>
> > On Tue, Dec 24, 2019 at 02:01:01AM +0100, Matteo Croce wrote:
> > > This patchset changes the memory allocator of mvpp2 from the frag allocator
> > > to the page_pool API. This change is needed to later add XDP support to mvpp2.
> > >
> > > The reason I'm sending it as an RFC is that with this changeset, mvpp2 performs
> > > much slower. This is the tc drop rate measured with a single flow:
> > >
> > > stock net-next with frag allocator:
> > > rx: 900.7 Mbps 1877 Kpps
> > >
> > > this patchset with page_pool:
> > > rx: 423.5 Mbps 882.3 Kpps
> > >
> > > This is the perf top when receiving traffic:
> > >
> > > 27.68% [kernel] [k] __page_pool_clean_page
> >
> > This seems extremely high on the list.
>
> This looks related to the cost of the DMA unmap, as page_pool has
> PP_FLAG_DMA_MAP set. (It is a little strange, as page_pool uses the
> DMA_ATTR_SKIP_CPU_SYNC attribute, which should make it less expensive.)
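
For context, a minimal sketch of a pool created with PP_FLAG_DMA_MAP
(illustrative function name and sizes, not the actual mvpp2 code): with
this flag the pool owns the DMA mapping, so every page that leaves the
pool has to be unmapped in __page_pool_clean_page(), which is where the
cost above shows up.

#include <net/page_pool.h>

/* Sketch: create a page_pool that also handles DMA mapping for RX. */
static struct page_pool *mvpp2_create_pool_sketch(struct device *dev, int size)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP,	/* pool maps/unmaps pages */
		.order		= 0,			/* single pages */
		.pool_size	= size,			/* e.g. RX ring size (assumed) */
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,	/* RX only */
	};

	/* Returns an ERR_PTR() on failure. */
	return page_pool_create(&pp_params);
}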
>
>
> > > 9.79% [kernel] [k] get_page_from_freelist
>
> You are clearly hitting the page allocator every time, because you are
> not using the page_pool recycle facility.
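
Side note on the recycle path, for the case where the driver still owns
the page (e.g. a tc/XDP drop before an skb is built): returning the page
to the pool avoids the get_page_from_freelist()/free_unref_page() round
trip visible in the profile above. A minimal sketch, with illustrative
function names, not mvpp2 code:

#include <net/page_pool.h>

/* In NAPI/softirq context: put the page straight back into the pool's
 * lock-free cache; no DMA unmap, no page allocator involved.
 */
static void rx_drop_recycle(struct page_pool *pool, struct page *page)
{
	page_pool_recycle_direct(pool, page);
}

/* Allocation side: served from the pool's cache when recycling works,
 * falling back to the page allocator otherwise.
 */
static struct page *rx_refill(struct page_pool *pool)
{
	return page_pool_alloc_pages(pool, GFP_ATOMIC);
}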
>
>
> > > 7.18% [kernel] [k] free_unref_page
> > > 4.64% [kernel] [k] build_skb
> > > 4.63% [kernel] [k] __netif_receive_skb_core
> > > 3.83% [mvpp2] [k] mvpp2_poll
> > > 3.64% [kernel] [k] eth_type_trans
> > > 3.61% [kernel] [k] kmem_cache_free
> > > 3.03% [kernel] [k] kmem_cache_alloc
> > > 2.76% [kernel] [k] dev_gro_receive
> > > 2.69% [mvpp2] [k] mvpp2_bm_pool_put
> > > 2.68% [kernel] [k] page_frag_free
> > > 1.83% [kernel] [k] inet_gro_receive
> > > 1.74% [kernel] [k] page_pool_alloc_pages
> > > 1.70% [kernel] [k] __build_skb
> > > 1.47% [kernel] [k] __alloc_pages_nodemask
> > > 1.36% [mvpp2] [k] mvpp2_buf_alloc.isra.0
> > > 1.29% [kernel] [k] tcf_action_exec
> > >
> > > I tried Ilias' patches for page_pool recycling and I get an improvement
> > > to ~1100 Kpps, but I'm still far from the original allocator.
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer
>
The change I made to use the recycling is the following:
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3071,7 +3071,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 		if (pp)
-			page_pool_release_page(pp, virt_to_page(data));
+			skb_mark_for_recycle(skb, virt_to_page(data), &rxq->xdp_rxq.mem);
 		else
 			dma_unmap_single_attrs(dev->dev.parent, dma_addr,
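
This assumes the RX queue's xdp_rxq_info is registered with the
page_pool memory model, so that the &rxq->xdp_rxq.mem passed to
skb_mark_for_recycle() above can resolve back to the pool. A minimal
sketch of that registration sequence (illustrative function name, not
the actual mvpp2 code):

#include <net/xdp.h>
#include <net/page_pool.h>

static int mvpp2_rxq_pool_register_sketch(struct net_device *dev,
					  struct xdp_rxq_info *xdp_rxq,
					  struct page_pool *pp, int queue_id)
{
	int err;

	err = xdp_rxq_info_reg(xdp_rxq, dev, queue_id);
	if (err)
		return err;

	/* Tie this rxq's memory model to the page_pool allocator. */
	err = xdp_rxq_info_reg_mem_model(xdp_rxq, MEM_TYPE_PAGE_POOL, pp);
	if (err)
		xdp_rxq_info_unreg(xdp_rxq);

	return err;
}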
--
Matteo Croce
per aspera ad upstream