Message-Id: <20151207170041.c470d362915ae1b42a8a4ef8@linux-foundation.org>
Date: Mon, 7 Dec 2015 17:00:41 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Joonsoo Kim <js1304@...il.com>
Cc: Vlastimil Babka <vbabka@...e.cz>, Mel Gorman <mgorman@...e.de>,
David Rientjes <rientjes@...gle.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH] mm/compaction: restore COMPACT_CLUSTER_MAX to 32
On Thu, 3 Dec 2015 13:11:40 +0900 Joonsoo Kim <js1304@...il.com> wrote:
> Until now, COMPACT_CLUSTER_MAX has been defined as SWAP_CLUSTER_MAX.
> Commit ("mm: increase SWAP_CLUSTER_MAX to batch TLB flushes")
> changed SWAP_CLUSTER_MAX from 32 to 256 to improve TLB flush
> performance, so COMPACT_CLUSTER_MAX was also changed to 256.
"mm: increase SWAP_CLUSTER_MAX to batch TLB flushes" has been in limbo
for quite a while, because it has been unclear whether the patch's
benefits exceed its costs and risks.
We should make a decision here - either do the appropriate testing or
drop the patch.
> But that change has no justification on the compaction side, and I
> think the loss outweighs the benefit.
>
> One example is that the migration scanner would isolate and migrate
> far too many pages with a COMPACT_CLUSTER_MAX of 256. It may be
> enough to migrate 4 pages in order to make an order-2 page, but, now,
> compaction will migrate 256 pages.
>
> To reduce this unneeded overhead, this patch restores
> COMPACT_CLUSTER_MAX to 32.
>
> ...
>
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -155,7 +155,7 @@ enum {
> };
>
> #define SWAP_CLUSTER_MAX 256UL
> -#define COMPACT_CLUSTER_MAX SWAP_CLUSTER_MAX
> +#define COMPACT_CLUSTER_MAX 32UL
>
> /*
> * Ratio between zone->managed_pages and the "gap" that above the per-zone