Message-ID: <20181120014313.GA10657@intel.com>
Date: Tue, 20 Nov 2018 09:43:13 +0800
From: Aaron Lu <aaron.lu@...el.com>
To: Tariq Toukan <tariqt@...lanox.com>
Cc: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Paweł Staszewski <pstaszewski@...are.pl>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Eric Dumazet <eric.dumazet@...il.com>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Yoel Caspersen <yoel@...knet.dk>,
Mel Gorman <mgorman@...hsingularity.net>,
Saeed Mahameed <saeedm@...lanox.com>,
Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Alexander Duyck <alexander.h.duyck@...ux.intel.com>,
Ian Kumlien <ian.kumlien@...il.com>
Subject: Re: [PATCH v2 RESEND 1/2] mm/page_alloc: free order-0 pages through
PCP in page_frag_free()
On Mon, Nov 19, 2018 at 03:00:53PM +0000, Tariq Toukan wrote:
>
>
> On 19/11/2018 3:48 PM, Aaron Lu wrote:
> > page_frag_free() calls __free_pages_ok() to free the page back to
> > Buddy. This is OK for high order pages, but for order-0 pages it
> > misses the optimization opportunity of using Per-Cpu-Pages and can
> > cause zone lock contention when called frequently.
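
For context, page_frag_free() before this change looks roughly like
the sketch below (simplified from mm/page_alloc.c of this era, not
the exact source); every free goes through __free_pages_ok(),
whatever the order:

	void page_frag_free(void *addr)
	{
		struct page *page = virt_to_head_page(addr);

		/* Drop the frag's reference; free once the last user is gone. */
		if (unlikely(put_page_testzero(page)))
			/*
			 * Always hands the page straight back to Buddy, so
			 * order-0 pages bypass the per-cpu (PCP) lists.
			 */
			__free_pages_ok(page, compound_order(page));
	}
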
> >
> > Paweł Staszewski recently shared his results of 'how the Linux
> > kernel handles normal traffic'[1] and from the perf data, Jesper
> > Dangaard Brouer found that the lock contention comes from the page
> > allocator:
> >
> > mlx5e_poll_tx_cq
> > |
> > --16.34%--napi_consume_skb
> > |
> > |--12.65%--__free_pages_ok
> > | |
> > | --11.86%--free_one_page
> > | |
> > | |--10.10%--queued_spin_lock_slowpath
> > | |
> > | --0.65%--_raw_spin_lock
> > |
> > |--1.55%--page_frag_free
> > |
> > --1.44%--skb_release_data
> >
> > Jesper explained how it happened: the mlx5 driver's RX-page recycle
> > mechanism is not effective in this workload and pages have to go
> > through the page allocator. The lock contention happens during the
> > mlx5 DMA TX completion cycle, and the page allocator cannot keep
> > up at these speeds.[2]
> >
> > I thought that __free_pages_ok() was mostly freeing high order
> > pages and that this was lock contention on high order pages, but
> > Jesper explained in detail that __free_pages_ok() here is actually
> > freeing order-0 pages, because mlx5 uses order-0 pages to satisfy
> > its page pool allocation requests.[3]
> >
> > The free path as pointed out by Jesper is:
> > skb_free_head()
> > -> skb_free_frag()
> > -> page_frag_free()
> > And the pages being freed on this path are order-0 pages.
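
(The two callers on this path are thin wrappers; roughly, simplified
from net/core/skbuff.c and include/linux/skbuff.h of this era:)

	static void skb_free_head(struct sk_buff *skb)
	{
		unsigned char *head = skb->head;

		/* Page-fragment backed heads take the frag-free path. */
		if (skb->head_frag)
			skb_free_frag(head);
		else
			kfree(head);
	}

	static inline void skb_free_frag(void *addr)
	{
		/* Straight into the page allocator's frag-free helper. */
		page_frag_free(addr);
	}
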
> >
> > Fix this by doing what __page_frag_cache_drain() does: send the
> > page being freed to the PCP if it is an order-0 page, or directly
> > to Buddy if it is a high order page.
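
(A sketch of the idea, modelled on __page_frag_cache_drain(); the
function names below match mm/page_alloc.c of this era, but this is
an illustration rather than the exact diff:)

	void page_frag_free(void *addr)
	{
		struct page *page = virt_to_head_page(addr);

		if (unlikely(put_page_testzero(page))) {
			unsigned int order = compound_order(page);

			if (order == 0)
				/* order-0: batch through the per-cpu lists */
				free_unref_page(page);
			else
				/* high order: straight back to Buddy */
				__free_pages_ok(page, order);
		}
	}
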
> >
> > With this change, Paweł has not noticed lock contention in his
> > workload so far. Jesper has measured a 7% performance improvement
> > with a micro benchmark, and the lock contention is gone. Ilias'
> > test on a 'low' speed 1Gbit interface on a Cortex-A53 shows an
> > ~11% performance boost with 64-byte packets, and __free_pages_ok()
> > disappeared from perf top.
> >
> > [1]: https://www.spinics.net/lists/netdev/msg531362.html
> > [2]: https://www.spinics.net/lists/netdev/msg531421.html
> > [3]: https://www.spinics.net/lists/netdev/msg531556.html
> >
> > Reported-by: Paweł Staszewski <pstaszewski@...are.pl>
> > Analysed-by: Jesper Dangaard Brouer <brouer@...hat.com>
> > Acked-by: Vlastimil Babka <vbabka@...e.cz>
> > Acked-by: Mel Gorman <mgorman@...hsingularity.net>
> > Acked-by: Jesper Dangaard Brouer <brouer@...hat.com>
> > Acked-by: Ilias Apalodimas <ilias.apalodimas@...aro.org>
> > Tested-by: Ilias Apalodimas <ilias.apalodimas@...aro.org>
> > Acked-by: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
> > Acked-by: Tariq Toukan <tariqt@...lanox.com
>
> missing '>' sign in my email tag.

Sorry about that, will fix this and resend.

> > Signed-off-by: Aaron Lu <aaron.lu@...el.com>
> > ---