Message-Id: <20210509171748.8dbc70ceccc5cc1ae61fe41c@linux-foundation.org>
Date: Sun, 9 May 2021 17:17:48 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: chukaiping <chukaiping@...du.com>
Cc: mcgrof@...nel.org, keescook@...omium.org, yzaikin@...gle.com,
vbabka@...e.cz, nigupta@...dia.com, bhe@...hat.com,
khalid.aziz@...cle.com, iamjoonsoo.kim@....com,
mateusznosek0@...il.com, sh_def@....com,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, Mel Gorman <mgorman@...hsingularity.net>,
David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH v4] mm/compaction: let proactive compaction order
configurable

On Wed, 28 Apr 2021 10:28:21 +0800 chukaiping <chukaiping@...du.com> wrote:
> Currently the proactive compaction order is fixed to
> COMPACTION_HPAGE_ORDER (9). That is fine on most machines with plenty
> of normal 4KB memory, but it is too high for machines with little
> normal memory, for example machines where most memory is configured
> as 1GB hugetlbfs huge pages. On such machines the maximum order of
> free pages is often below 9, and it stays below 9 even with hard
> compaction, so proactive compaction is triggered very frequently.
> On these machines we only care about order 3 or 4. This patch
> exports the order to proc and makes it configurable by the user;
> the default value is still COMPACTION_HPAGE_ORDER.
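
[For context, the shape of such a tunable is roughly the sketch below.
This is purely illustrative and not taken from the patch: the variable
name, the 0..MAX_ORDER-1 bounds, and placing the entry under
/proc/sys/vm are assumptions; the ctl_table fields and
proc_dointvec_minmax are only the kernel's standard sysctl plumbing.
The proactive compaction path would then read this value in place of
the COMPACTION_HPAGE_ORDER constant when computing fragmentation
scores.

#include <linux/sysctl.h>
#include <linux/mmzone.h>	/* MAX_ORDER */

/* Default matches COMPACTION_HPAGE_ORDER (order 9) in current kernels. */
static int sysctl_proactive_compaction_order = 9;
static int max_compaction_order = MAX_ORDER - 1;

/*
 * A real patch would presumably add this entry to the existing "vm"
 * table in kernel/sysctl.c; it is shown as a standalone table here
 * only to keep the sketch self-contained (it could be registered
 * with register_sysctl("vm", ...)).
 */
static struct ctl_table vm_compaction_order_table[] = {
	{
		.procname	= "proactive_compaction_order",
		.data		= &sysctl_proactive_compaction_order,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= SYSCTL_ZERO,
		.extra2		= &max_compaction_order,
	},
	{ }
};
]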

It would be great to do this automatically. It's quite simple to see
when memory is being handed out to hugetlbfs - so can we tune
proactive_compaction_order in response to this? That would be far
better than adding a manual tunable.
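
[A very rough sketch of what that kind of auto-tuning could look like
(again purely illustrative: the helper name, the 50% threshold, the
fallback order and the call site are all invented here, not something
in the patch or in mainline):

#include <linux/hugetlb.h>	/* hugetlb_total_pages() */
#include <linux/mm.h>		/* totalram_pages() */

static int sysctl_proactive_compaction_order;	/* tunable from the sketch above */

/*
 * Recompute the proactive compaction target order from the share of
 * RAM currently sitting in hugetlb pools.  It would have to be called
 * whenever the pool size changes (e.g. on writes to nr_hugepages);
 * that wiring is omitted here.
 */
static void tune_proactive_compaction_order(void)
{
	unsigned long hugetlb_pages = hugetlb_total_pages();	/* in base pages */
	unsigned long total_pages = totalram_pages();

	/*
	 * When most memory is pinned in hugetlb pages, order-9 free
	 * blocks in what remains are unrealistic, so aim lower.
	 */
	if (hugetlb_pages > total_pages / 2)
		sysctl_proactive_compaction_order = 4;	/* arbitrary smaller target */
	else
		sysctl_proactive_compaction_order = 9;	/* COMPACTION_HPAGE_ORDER */
}
]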

But from having read Khalid's comments, that does sound quite involved.
Is there some partial solution that we can come up with that will get
most people out of trouble?

That being said, this patch is super-super-simple so perhaps we should
just merge it to get one person (and hopefully a few more) out of
trouble. But on the other hand, once we add a /proc tunable we must
maintain that tunable for ever (or at least a very long time) even if
the internal implementations change a lot.