Message-ID: <20181103135325.01a7b5d6@redhat.com>
Date: Sat, 3 Nov 2018 13:53:25 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Aaron Lu <aaron.lu@...el.com>
Cc: Saeed Mahameed <saeedm@...lanox.com>,
"pstaszewski@...are.pl" <pstaszewski@...are.pl>,
"eric.dumazet@...il.com" <eric.dumazet@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Tariq Toukan <tariqt@...lanox.com>,
"ilias.apalodimas@...aro.org" <ilias.apalodimas@...aro.org>,
"yoel@...knet.dk" <yoel@...knet.dk>,
"mgorman@...hsingularity.net" <mgorman@...hsingularity.net>,
brouer@...hat.com
Subject: Re: Kernel 4.19 network performance - forwarding/routing normal
users traffic
On Fri, 2 Nov 2018 22:20:24 +0800 Aaron Lu <aaron.lu@...el.com> wrote:
> On Fri, Nov 02, 2018 at 12:40:37PM +0100, Jesper Dangaard Brouer wrote:
> > On Fri, 2 Nov 2018 13:23:56 +0800
> > Aaron Lu <aaron.lu@...el.com> wrote:
> >
> > > On Thu, Nov 01, 2018 at 08:23:19PM +0000, Saeed Mahameed wrote:
> > > > On Thu, 2018-11-01 at 23:27 +0800, Aaron Lu wrote:
> > > > > On Thu, Nov 01, 2018 at 10:22:13AM +0100, Jesper Dangaard Brouer
> > > > > wrote:
> > > > > ... ...
> > > > > > Section copied out:
> > > > > >
> > > > > >    mlx5e_poll_tx_cq
> > > > > >    |
> > > > > >     --16.34%--napi_consume_skb
> > > > > >               |
> > > > > >               |--12.65%--__free_pages_ok
> > > > > >               |          |
> > > > > >               |           --11.86%--free_one_page
> > > > > >               |                     |
> > > > > >               |                     |--10.10%--queued_spin_lock_slowpath
> > > > > >               |                     |
> > > > > >               |                      --0.65%--_raw_spin_lock
> > > > >
> > > > > This callchain looks like it is freeing pages of a higher order
> > > > > than 0: __free_pages_ok() is only called for pages whose order is
> > > > > bigger than 0.
> > > >
> > > > mlx5 RX uses only order-0 pages, so I don't know where these
> > > > high-order TX SKBs are coming from.
> > >
> > > Perhaps here:
> > > __netdev_alloc_skb(), __napi_alloc_skb(), __netdev_alloc_frag() and
> > > __napi_alloc_frag() will all call page_frag_alloc(), which will use
> > > __page_frag_cache_refill() to get an order-3 page if possible, falling
> > > back to an order-0 page if an order-3 page is not available.
> > >
> > > I'm not sure if your workload will use the above code path though.
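
For reference, the refill Aaron describes looks roughly like this
(simplified sketch from mm/page_alloc.c around v4.19;
PAGE_FRAG_CACHE_MAX_ORDER is 3 on common configs):

/* Simplified sketch: opportunistically try one order-3 (32KB) compound
 * page (no warn, no retry), and fall back to a plain order-0 page when
 * that fails. */
static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
                                             gfp_t gfp_mask)
{
        struct page *page = NULL;
        gfp_t gfp = gfp_mask;

#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
        gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
        page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
                                PAGE_FRAG_CACHE_MAX_ORDER);
        nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
#endif
        if (unlikely(!page))    /* fallback: order-0 */
                page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);

        nc->va = page ? page_address(page) : NULL;

        return page;
}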
> >
> > TL;DR: these are order-0 pages (code walk-through proof below)
> >
> > To Aaron, the network stack *can* call __free_pages_ok() with order-0
> > pages, via:
> >
> > static void skb_free_head(struct sk_buff *skb)
> > {
> >         unsigned char *head = skb->head;
> >
> >         if (skb->head_frag)
> >                 skb_free_frag(head);
> >         else
> >                 kfree(head);
> > }
> >
> > static inline void skb_free_frag(void *addr)
> > {
> >         page_frag_free(addr);
> > }
> >
> > /*
> >  * Frees a page fragment allocated out of either a compound or order 0 page.
> >  */
> > void page_frag_free(void *addr)
> > {
> >         struct page *page = virt_to_head_page(addr);
> >
> >         if (unlikely(put_page_testzero(page)))
> >                 __free_pages_ok(page, compound_order(page));
> > }
> > EXPORT_SYMBOL(page_frag_free);
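
To see why skbs hit the skb_free_frag() branch at all: drivers that build
RX skbs on top of a page-fragment cache get skb->head_frag set for them.
A minimal hypothetical driver path (illustrative sketch only, not mlx5's
actual code) looks like:

#include <linux/skbuff.h>

/* Hypothetical RX path: back the skb head with a page fragment, so that
 * skb->head_frag = 1 and the free path above ends in page_frag_free(). */
static struct sk_buff *rx_build_skb(void *frame, unsigned int len)
{
        unsigned int truesize = SKB_DATA_ALIGN(len) +
                        SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
        void *buf = napi_alloc_frag(truesize);  /* page_frag_alloc() inside */
        struct sk_buff *skb;

        if (!buf)
                return NULL;

        memcpy(buf, frame, len);          /* stand-in for the DMA'ed frame */
        skb = build_skb(buf, truesize);   /* sets skb->head_frag = 1 */
        if (!skb) {
                skb_free_frag(buf);
                return NULL;
        }
        skb_put(skb, len);
        return skb;
}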
>
> I think there is a problem here - order-0 pages are freed directly to
> the buddy allocator, bypassing the per-cpu-pages lists. This might be
> the reason lock contention appears on the free path.
OMG - you just found a significant issue with the network stack's
interaction with the page allocator! This explains why I could not get
the PCP (Per-Cpu-Pages) system to perform well in my networking
benchmarks: we are basically only using the alloc side of PCP, and not
the free side.
We have spent years adding different driver-level recycle tricks to
avoid this code path getting activated, exactly because it is rather
slow and hitting this zone->lock is problematic.
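
These recycle tricks all boil down to something like the toy sketch below
(illustrative only, not any driver's actual code): keep a small per-ring
stash of pages so the buddy allocator, and hence zone->lock, is rarely
touched.

#include <linux/gfp.h>
#include <linux/mm.h>

/* Toy per-ring page cache (illustrative sketch). */
struct recycle_cache {
        struct page     *pages[64];
        unsigned int    count;
};

static struct page *cache_get_page(struct recycle_cache *c)
{
        if (c->count)
                return c->pages[--c->count];    /* fast path: no locks */
        return alloc_page(GFP_ATOMIC);          /* slow path: page allocator */
}

static void cache_put_page(struct recycle_cache *c, struct page *page)
{
        /* Only recycle pages we own exclusively. */
        if (c->count < ARRAY_SIZE(c->pages) && page_ref_count(page) == 1) {
                c->pages[c->count++] = page;
                return;
        }
        put_page(page);         /* slow path: may end up in zone->lock */
}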
> Can someone apply the below diff and see if the lock contention is gone?
I have also applied and tested this patch, and yes, the lock contention
is gone. As mentioned, it is rather difficult to hit this code path, as
the driver page recycle mechanism tries to hide/avoid it, but mlx5 +
page_pool + CPU-map recycling has a known weakness that bypasses the
driver page recycle scheme (which I've not fixed yet). I observed a 7%
speedup for this micro benchmark.
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e2ef1c17942f..65c0ae13215a 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4554,8 +4554,14 @@ void page_frag_free(void *addr)
>  {
>          struct page *page = virt_to_head_page(addr);
>  
> -        if (unlikely(put_page_testzero(page)))
> -                __free_pages_ok(page, compound_order(page));
> +        if (unlikely(put_page_testzero(page))) {
> +                unsigned int order = compound_order(page);
> +
> +                if (order == 0)
> +                        free_unref_page(page);
> +                else
> +                        __free_pages_ok(page, order);
> +        }
>  }
>  EXPORT_SYMBOL(page_frag_free);
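
For anyone following along: free_unref_page() is the order-0 fast path
that puts the page back on the per-CPU list; zone->lock is only taken
when that list overflows and a batch is flushed back to the buddy
allocator. Simplified from mm/page_alloc.c (v4.19):

void free_unref_page(struct page *page)
{
        unsigned long flags;
        unsigned long pfn = page_to_pfn(page);

        if (!free_unref_page_prepare(page, pfn))
                return;

        local_irq_save(flags);
        /* Adds the page to this CPU's pcp->lists[]; when pcp->count
         * exceeds pcp->high, a batch is flushed to buddy via
         * free_pcppages_bulk(), which is where zone->lock is taken. */
        free_unref_page_commit(page, pfn);
        local_irq_restore(flags);
}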
Thank you Aaron for spotting this!!!
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer