Message-ID: <20191018115849.GH4065@codeblueprint.co.uk>
Date: Fri, 18 Oct 2019 12:58:49 +0100
From: Matt Fleming <matt@...eblueprint.co.uk>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>, Linux-MM <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/3] Recalculate per-cpu page allocator batch and high
limits after deferred meminit
On Fri, 18 Oct, at 11:56:03AM, Mel Gorman wrote:
> A private report stated that system CPU usage was excessive on an AMD
> EPYC 2 machine while building kernels with much longer build times than
> expected. The issue is partially explained by high zone lock contention
> due to the per-cpu page allocator batch and high limits being calculated
> incorrectly. This series addresses a large chunk of the problem. Patch 1
> is mostly cosmetic but prepares for patch 2 which is the real fix. Patch
> 3 is definitely cosmetic but was noticed while implementing the fix. Proper
> details are in the changelog for patch 2.
>
> include/linux/mm.h | 3 ---
> mm/internal.h | 3 +++
> mm/page_alloc.c | 33 ++++++++++++++++++++-------------
> 3 files changed, 23 insertions(+), 16 deletions(-)
Just to confirm, these patches don't fix the issue we're seeing on the
EPYC 2 machines, but they do return the batch sizes to sensible values.