Message-ID: <20190108091217.GL31517@techsingularity.net>
Date: Tue, 8 Jan 2019 09:12:17 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Linux-MM <linux-mm@...ck.org>,
David Rientjes <rientjes@...gle.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Vlastimil Babka <vbabka@...e.cz>, ying.huang@...el.com,
kirill@...temov.name,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 00/25] Increase success rates and reduce latency of
compaction v2
On Mon, Jan 07, 2019 at 03:43:54PM -0800, Andrew Morton wrote:
> On Fri, 4 Jan 2019 12:49:46 +0000 Mel Gorman <mgorman@...hsingularity.net> wrote:
>
> > This series reduces scan rates and increases success rates of compaction,
> > primarily by using the free lists to shorten scans, better controlling
> > skip information and whether multiple scanners can target the same
> > block, and capturing pageblocks before they are stolen by parallel
> > requests. The series is based on the 4.21/5.0 merge window after
> > Andrew's tree had been merged. It's known to rebase cleanly.
> >
> > ...
> >
> > include/linux/compaction.h |    3 +-
> > include/linux/gfp.h        |    7 +-
> > include/linux/mmzone.h     |    2 +
> > include/linux/sched.h      |    4 +
> > kernel/sched/core.c        |    3 +
> > mm/compaction.c            | 1031 ++++++++++++++++++++++++++++++++++----------
> > mm/internal.h              |   23 +-
> > mm/migrate.c               |    2 +-
> > mm/page_alloc.c            |   70 ++-
> > 9 files changed, 908 insertions(+), 237 deletions(-)
>
> Boy that's a lot of material.
It's unfortunate, I know. It just turned out that a lot had to change
to make the most important patches in the series work without obvious
side-effects.
> I just tossed it in there unread for
> now. Do you have any suggestions as to how we can move ahead with
> getting this appropriately reviewed and tested?
>
The main workloads that should see a difference are those that use
MADV_HUGEPAGE or change /sys/kernel/mm/transparent_hugepage/defrag. I
expect MADV_HUGEPAGE to be the more common of the two in practice. With
the default settings there should be little change, as direct compaction
is not used heavily for THP. SLUB workloads might also see a difference
given a long enough uptime, but it will be relatively difficult to detect.
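For anyone wanting to exercise the affected path by hand, a minimal
userspace sketch of the MADV_HUGEPAGE case might look like the
following (the mapping size is illustrative and this is not a test
case from the series):

#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 512UL << 20;	/* 512MB, size is illustrative */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;

	/* Opt this range in to THP */
	madvise(buf, len, MADV_HUGEPAGE);

	/*
	 * Faulting the range in may attempt THP allocations and hence
	 * direct compaction on a fragmented system.
	 */
	memset(buf, 0, len);

	munmap(buf, len);
	return 0;
}

The same paths can be exercised system-wide by writing always or
defer+madvise to /sys/kernel/mm/transparent_hugepage/defrag.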
As this was partially motivated by the __GFP_THISNODE discussion, I
would like to hear from David whether this series has any impact when
starting Google workloads on a fragmented system.
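For context, the allocation shape at the heart of that discussion is
roughly the following kernel-style fragment (illustrative only, not
code from this series):

	/*
	 * Illustrative: a THP-sized allocation pinned to the local
	 * node. On a fragmented node this can trigger local compaction
	 * and reclaim even when a remote node has free contiguous
	 * memory.
	 */
	struct page *page = alloc_pages(GFP_TRANSHUGE | __GFP_THISNODE,
					HPAGE_PMD_ORDER);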
Similarly, I would be interested in hearing if Andrea's KVM startup
times see any benefit. I'm expecting less benefit there as that workload
is likely still bound by reclaim thrashing the local node. Still, a
confirmation would be nice, and any benefit is a plus even if the
workload still gets reclaimed excessively.
Local tests didn't turn up anything interesting *other* than what is
already in the changelogs, as those workloads specifically target the
affected paths. Intel LKP has not reported any regressions (functional
or performance) despite the series being on git.kernel.org for a few
weeks. However, as they test default configurations, this is not much
of a surprise.
Review is harder. Vlastimil would normally be the best fit as he has
worked on compaction, but he, like everyone else, is presumably dealing
with a backlog after the holidays. I know I still have to get to
Vlastimil's recent series on THP allocations, so I'm guilty of the same
crime with respect to review.
--
Mel Gorman
SUSE Labs