Message-ID: <CAB6CXpApnNdHRVAwdm2GAqoLURvQ5VLFM6bogrRFubGZUt-60Q@mail.gmail.com>
Date: Thu, 28 May 2020 14:29:19 -0700
From: Nitin Gupta <ngupta@...ingupta.dev>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Nitin Gupta <nigupta@...dia.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Michal Hocko <mhocko@...e.com>,
Matthew Wilcox <willy@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
David Rientjes <rientjes@...gle.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Linux API <linux-api@...r.kernel.org>
Subject: Re: [PATCH v5] mm: Proactive compaction
On Wed, May 27, 2020 at 3:18 AM Vlastimil Babka <vbabka@...e.cz> wrote:
>
> On 5/18/20 8:14 PM, Nitin Gupta wrote:
> > For some applications, we need to allocate almost all memory as
> > hugepages. However, on a running system, higher-order allocations can
> > fail if the memory is fragmented. The Linux kernel currently does
> > on-demand compaction as we request more hugepages, but this style of
> > compaction incurs very high latency. Experiments with one-time full
> > memory compaction (followed by hugepage allocations) show that the
> > kernel can restore a highly fragmented memory state to a fairly
> > compacted state in under 1 second on a 32G system. This suggests that
> > proactive compaction can help us allocate a large fraction of memory
> > as hugepages while keeping allocation latencies low.
> >
> > For a more proactive compaction, the approach taken here is to define
> > a new tunable called 'proactiveness' which dictates bounds for external
> > fragmentation wrt HUGETLB_PAGE_ORDER order which kcompactd tries to
>
> HPAGE_PMD_ORDER
>
Since HPAGE_PMD_ORDER is not always defined, and we may thus have to
fall back to HUGETLB_PAGE_ORDER or even PMD_ORDER, I think I should
remove references to the order from the patch description entirely.
I also need to change references from the bare 'proactiveness' name
to the 'vm.compaction_proactiveness' sysctl.

Modified description:
===
For a more proactive compaction, the approach taken here is to define
a new sysctl called 'vm.compaction_proactiveness' which sets bounds on
the external fragmentation that kcompactd tries to ...
===
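
As for the order fallback itself, I'm considering something along
these lines (just a sketch; COMPACTION_HPAGE_ORDER is a placeholder
name and the exact config guards are still to be settled — none of
this is in the posted v5):

/*
 * Order at which external fragmentation is scored: prefer the THP
 * order when available, then the hugetlb order, then a raw PMD-sized
 * order as the last resort.
 */
#if defined CONFIG_TRANSPARENT_HUGEPAGE
#define COMPACTION_HPAGE_ORDER	HPAGE_PMD_ORDER
#elif defined CONFIG_HUGETLBFS
#define COMPACTION_HPAGE_ORDER	HUGETLB_PAGE_ORDER
#else
#define COMPACTION_HPAGE_ORDER	(PMD_SHIFT - PAGE_SHIFT)
#endif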
> >
> > The tunable is exposed through sysctl:
> > /proc/sys/vm/compaction_proactiveness
> >
> > It takes a value in the range [0, 100], with a default of 20.
> >
> >
> > This patch is largely based on ideas from Michal Hocko posted here:
> > https://lore.kernel.org/linux-mm/20161230131412.GI13301@dhcp22.suse.cz/
>
> Make this link a [2] reference? I would also add: "See also the LWN article
> [3]." where [3] is https://lwn.net/Articles/817905/
>
>
Sounds good. I will turn these into [2] and [3] references.
>
> Reviewed-by: Vlastimil Babka <vbabka@...e.cz>
>
> With some smaller nitpicks below.
>
> But as we are adding a new API, I would really appreciate others'
> comments on the approach at least.
>
> > +/*
> > + * A zone's fragmentation score is the external fragmentation wrt the
> > + * HUGETLB_PAGE_ORDER scaled by the zone's size. It returns a value in the
>
> HPAGE_PMD_ORDER
>
Maybe just remove reference to the order as I mentioned above?
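To make the scaling concrete, the per-zone score would then look
roughly like this (a sketch, reusing the placeholder order name from
above; the posted v5 uses HUGETLB_PAGE_ORDER here):

static int fragmentation_score_zone(struct zone *zone)
{
	unsigned long score;

	/*
	 * Weight the zone's extfrag by its share of the node's pages,
	 * so the per-zone scores sum to a node-wide score in [0, 100].
	 */
	score = zone->present_pages *
			extfrag_for_order(zone, COMPACTION_HPAGE_ORDER);
	return div64_ul(score, zone->zone_pgdat->node_present_pages + 1);
}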
> > +/*
> > + * Tunable for proactive compaction. It determines how
> > + * aggressively the kernel should compact memory in the
> > + * background. It takes values in the range [0, 100].
> > + */
> > +int sysctl_compaction_proactiveness = 20;
>
> These are usually __read_mostly
>
Ok.
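Will change it to:

int sysctl_compaction_proactiveness __read_mostly = 20;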
> > +
> > /*
> > * This is the entry point for compacting all nodes via
> > * /proc/sys/vm/compact_memory
> > @@ -2637,6 +2769,7 @@ static int kcompactd(void *p)
> > {
> > pg_data_t *pgdat = (pg_data_t*)p;
> > struct task_struct *tsk = current;
> > + unsigned int proactive_defer = 0;
> >
> > const struct cpumask *cpumask = cpumask_of_node(pgdat->node_id);
> >
> > @@ -2652,12 +2785,34 @@ static int kcompactd(void *p)
> > unsigned long pflags;
> >
> > trace_mm_compaction_kcompactd_sleep(pgdat->node_id);
> > - wait_event_freezable(pgdat->kcompactd_wait,
> > - kcompactd_work_requested(pgdat));
> > + if (wait_event_freezable_timeout(pgdat->kcompactd_wait,
> > + kcompactd_work_requested(pgdat),
> > + msecs_to_jiffies(HPAGE_FRAG_CHECK_INTERVAL_MSEC))) {
>
> Hmm perhaps the wakeups should also backoff if there's nothing to do?
Perhaps. For now, I just wanted to keep it simple: waking a thread to
do a quick calculation didn't seem expensive to me, so I prefer this
simplistic approach.
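Note also that the proactive path doesn't run completely unthrottled:
the proactive_defer counter declared above is there to skip a number
of wakeups after a run that makes no progress, roughly along these
lines (a trimmed sketch; fragmentation_score_node() and
proactive_compact_node() are helpers from elsewhere in the patch that
aren't quoted in this mail):

if (proactive_defer) {
	proactive_defer--;
	continue;
}
prev_score = fragmentation_score_node(pgdat);
proactive_compact_node(pgdat);
score = fragmentation_score_node(pgdat);
/* Back off if the last run didn't lower the score. */
proactive_defer = score < prev_score ?
			0 : 1 << COMPACT_MAX_DEFER_SHIFT;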
> > +/*
> > + * Calculates external fragmentation within a zone wrt the given order.
> > + * It is defined as the percentage of free pages found in blocks of size
> > + * less than 1 << order. It returns values in range [0, 100].
> > + */
> > +int extfrag_for_order(struct zone *zone, unsigned int order)
> > +{
> > + struct contig_page_info info;
> > +
> > + fill_contig_page_info(zone, order, &info);
> > + if (info.free_pages == 0)
> > + return 0;
> > +
> > + return (info.free_pages - (info.free_blocks_suitable << order)) * 100
> > + / info.free_pages;
>
> I guess this should also use div_u64() like __fragmentation_index() does.
>
Ok.
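Something like this, then:

	return div_u64((info.free_pages -
			(info.free_blocks_suitable << order)) * 100,
			info.free_pages);

For example, with 1000 free pages of which 512 sit in a single block
of at least the requested order (so free_blocks_suitable << order is
512), this gives (1000 - 512) * 100 / 1000 = 48.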
> > +}
> > +
> > /* Same as __fragmentation index but allocs contig_page_info on stack */
> > int fragmentation_index(struct zone *zone, unsigned int order)
> > {
> >
>
Thanks,
Nitin