Message-ID: <BYAPR12MB3416361959DA9128870E92B7D8469@BYAPR12MB3416.namprd12.prod.outlook.com>
Date: Thu, 22 Apr 2021 16:27:00 +0000
From: Nitin Gupta <nigupta@...dia.com>
To: chukaiping <chukaiping@...du.com>,
"mcgrof@...nel.org" <mcgrof@...nel.org>,
"keescook@...omium.org" <keescook@...omium.org>,
"yzaikin@...gle.com" <yzaikin@...gle.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"vbabka@...e.cz" <vbabka@...e.cz>,
"bhe@...hat.com" <bhe@...hat.com>,
"khalid.aziz@...cle.com" <khalid.aziz@...cle.com>,
"iamjoonsoo.kim@....com" <iamjoonsoo.kim@....com>,
"mateusznosek0@...il.com" <mateusznosek0@...il.com>,
"sh_def@....com" <sh_def@....com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: RE: [PATCH v2] mm/compaction:let proactive compaction order
configurable
> -----Original Message-----
> From: chukaiping <chukaiping@...du.com>
> Sent: Wednesday, April 21, 2021 12:22 AM
> To: mcgrof@...nel.org; keescook@...omium.org; yzaikin@...gle.com;
> akpm@...ux-foundation.org; vbabka@...e.cz; Nitin Gupta
> <nigupta@...dia.com>; bhe@...hat.com; khalid.aziz@...cle.com;
> iamjoonsoo.kim@....com; mateusznosek0@...il.com; sh_def@....com
> Cc: linux-kernel@...r.kernel.org; linux-fsdevel@...r.kernel.org; linux-
> mm@...ck.org
> Subject: [PATCH v2] mm/compaction:let proactive compaction order
> configurable
>
> Currently the proactive compaction order is fixed to
> COMPACTION_HPAGE_ORDER (9). That is fine on most machines with plenty
> of normal 4KB memory, but it is too high on machines with little
> normal memory, for example machines where most memory is configured as
> 1GB hugetlbfs huge pages. On such machines the maximum order of free
> pages is often below 9, and it stays below 9 even after forced
> compaction, so proactive compaction gets triggered very frequently.
> On these machines only orders around 3 or 4 matter.
>
> This patch exports the order to proc and makes it configurable by the
> user; the default value is still COMPACTION_HPAGE_ORDER.
>
I agree with the idea of making the target order configurable as you may not
always care about the hugepage order in particular.
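A quick way to see the situation the patch describes is /proc/buddyinfo, which lists per-order free-block counts; on a machine dominated by 1GB huge pages the high-order columns tend to sit near zero. Note that vm.compaction_order below is the knob proposed by this patch, not an existing upstream sysctl:

```shell
# Per-order free block counts, left (order 0) to right (MAX_ORDER - 1);
# the high orders rarely stay populated when memory is carved into 1GB
# hugetlbfs pages.
cat /proc/buddyinfo

# With this patch applied, the proactive compaction target could then be
# lowered to match reality (hypothetical until the patch is merged):
#   sysctl -w vm.compaction_order=4
```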
> Signed-off-by: chukaiping <chukaiping@...du.com>
> Reported-by: kernel test robot <lkp@...el.com>
> ---
>
> Changes in v2:
> - fix the compile error in ia64 and powerpc
> - change the hard coded max order number from 10 to MAX_ORDER - 1
>
> include/linux/compaction.h | 1 +
> kernel/sysctl.c | 11 +++++++++++
> mm/compaction.c | 14 +++++++++++---
> 3 files changed, 23 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/compaction.h b/include/linux/compaction.h
> index ed4070e..151ccd1 100644
> --- a/include/linux/compaction.h
> +++ b/include/linux/compaction.h
> @@ -83,6 +83,7 @@ static inline unsigned long compact_gap(unsigned int order)
>  #ifdef CONFIG_COMPACTION
>  extern int sysctl_compact_memory;
>  extern unsigned int sysctl_compaction_proactiveness;
> +extern unsigned int sysctl_compaction_order;
>  extern int sysctl_compaction_handler(struct ctl_table *table, int write,
>  			void *buffer, size_t *length, loff_t *ppos);
>  extern int sysctl_extfrag_threshold;
> diff --git a/kernel/sysctl.c b/kernel/sysctl.c
> index 62fbd09..a607d4d 100644
> --- a/kernel/sysctl.c
> +++ b/kernel/sysctl.c
> @@ -195,6 +195,8 @@ enum sysctl_writes_mode {
>  #endif /* CONFIG_SMP */
>  #endif /* CONFIG_SCHED_DEBUG */
>  
> +static int max_buddy_zone = MAX_ORDER - 1;
> +
>  #ifdef CONFIG_COMPACTION
>  static int min_extfrag_threshold;
>  static int max_extfrag_threshold = 1000;
> @@ -2871,6 +2873,15 @@ int proc_do_static_key(struct ctl_table *table, int write,
>  		.extra2 = &one_hundred,
>  	},
>  	{
> +		.procname = "compaction_order",
> +		.data = &sysctl_compaction_order,
> +		.maxlen = sizeof(sysctl_compaction_order),
> +		.mode = 0644,
> +		.proc_handler = proc_dointvec_minmax,
> +		.extra1 = SYSCTL_ZERO,
This should be SYSCTL_ONE. Fragmentation wrt order 0 is always going to be 0.
Thanks,
Nitin