Message-ID: <4ba8a6810cb481204deae4a7171dded1d8b5e736.camel@nvidia.com>
Date: Wed, 11 Sep 2019 22:33:39 +0000
From: Nitin Gupta <nigupta@...dia.com>
To: "mhocko@...nel.org" <mhocko@...nel.org>
CC: "willy@...radead.org" <willy@...radead.org>,
"allison@...utok.net" <allison@...utok.net>,
"vbabka@...e.cz" <vbabka@...e.cz>,
"aryabinin@...tuozzo.com" <aryabinin@...tuozzo.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"rppt@...ux.vnet.ibm.com" <rppt@...ux.vnet.ibm.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"cai@....pw" <cai@....pw>,
"arunks@...eaurora.org" <arunks@...eaurora.org>,
"yuzhao@...gle.com" <yuzhao@...gle.com>,
"richard.weiyang@...il.com" <richard.weiyang@...il.com>,
"mgorman@...hsingularity.net" <mgorman@...hsingularity.net>,
"khalid.aziz@...cle.com" <khalid.aziz@...cle.com>,
"dan.j.williams@...el.com" <dan.j.williams@...el.com>
Subject: Re: [PATCH] mm: Add callback for defining compaction completion
On Wed, 2019-09-11 at 08:45 +0200, Michal Hocko wrote:
> On Tue 10-09-19 22:27:53, Nitin Gupta wrote:
> [...]
> > > On Tue 10-09-19 13:07:32, Nitin Gupta wrote:
> > > > For some applications we need to allocate almost all memory as
> > > > hugepages.
> > > > However, on a running system, higher-order allocations can fail if
> > > > memory is fragmented. The Linux kernel currently does on-demand
> > > > compaction as we request more hugepages, but this style of compaction
> > > > incurs very high latency. Experiments with one-time full memory
> > > > compaction (followed by hugepage allocations) show that the kernel is
> > > > able to restore a highly fragmented memory state to a fairly compacted
> > > > state within <1 sec for a 32G system. This data suggests that more
> > > > pro-active compaction can help us allocate a large fraction of memory
> > > > as hugepages while keeping allocation latencies low.
> > > >
> > > > In general, compaction can introduce unexpected latencies for
> > > > applications that don't even have strong requirements for contiguous
> > > > allocations.
>
> Could you expand on this a bit please? Gfp flags allow expressing how
> hard the allocator should try to compact for a high-order allocation.
> Hugetlb allocations tend to require retrying and heavy compaction to
> succeed, and the success rate tends to be pretty high in my experience.
> Why is that not so in your case?
>
Yes, I have the same observation: with `GFP_TRANSHUGE |
__GFP_RETRY_MAYFAIL` I get very good success rate (~90% of free RAM
allocated as hugepages). However, what I'm trying to point out is that this
high success rate comes with high allocation latencies (90th percentile
latency of 2206us). On the same system, the same high-order allocations
which hit the fast path have latency <5us.
> > > > It is also hard to efficiently determine if the current
> > > > system state can be easily compacted due to mixing of unmovable
> > > > memory. Due to these reasons, automatic background compaction by the
> > > > kernel itself is hard to get right in a way which does not hurt
> > > > unsuspecting
> > > applications or waste CPU cycles.
> > >
> > > We do trigger background compaction on a high order pressure from the
> > > page allocator by waking up kcompactd. Why is that not sufficient?
> > >
> >
> > Whenever kcompactd is woken up, it does just enough work to create
> > one free page of the given order (compaction_control.order) or higher.
>
> This is an implementation detail IMHO. I am pretty sure we can do a
> better auto tuning when there is an indication of a constant flow of
> high order requests. This is no different from the memory reclaim in
> principle. Just because the kswapd autotuning does not fit your
> particular workload, you wouldn't want to export direct reclaim
> functionality and call it from a random module. That is just doomed to
> fail, because different subsystems in control lead to decisions
> going against each other.
>
I don't want to go the route of adding any auto-tuning/prediction code to
control compaction in the kernel. I'm more inclined towards extending
existing interfaces to allow compaction behavior to be controlled either
from userspace or from a kernel driver. Letting a random module control
compaction and letting a root process pump new tunables through sysfs are
the same in principle.
This patch is in the spirit of a simple extension to the existing
compact_zone_order() which allows either a kernel driver or userspace
(through sysfs) to control compaction.
Also, we should avoid driving hard parallels between reclaim and
compaction: the former is often necessary for forward progress while the
latter is often an optimization. Since contiguous allocations are mostly
optimizations it's good to expose hooks from the kernel that let user
(through a driver or userspace) control it using its own heuristics.
I thought hard about what's lacking in the current userspace interface
(sysfs):
- /proc/sys/vm/compact_memory: full-system compaction is not viable as a
pro-active compaction strategy.
- possibly expose [low, high] threshold values for each node and let
kcompactd act on them. This was the approach of my original patch linked
earlier. The problem is that it introduces too many tunables.
Considering the above, I came up with this callback approach, which makes
it trivial to introduce user-specific policies for compaction. It puts the
onus of system stability and responsiveness in the hands of the user,
without burdening admins with more tunables or adding crystal balls to the
kernel.
> > Such a design causes very high latency for workloads where we want
> > to allocate lots of hugepages in short period of time. With pro-active
> > compaction we can hide much of this latency. For some more background
> > discussion and data, please see this thread:
> >
> > https://patchwork.kernel.org/patch/11098289/
>
> I am aware of that thread. And there are two things. You claim the
> allocation success rate is unnecessarily lower and that the direct
> latency is high. You simply cannot assume both low latency and high
> success rate. Compaction is not free. Somebody has to do the work.
> Hiding it into the background means that you are eating a lot of cycles
> from everybody else (think of a workload running in a restricted cpu
> controller just doing a lot of work in an unaccounted context).
>
> That being said you really have to be prepared to pay a price for
> precious resource like high order pages.
>
> On the other hand I do understand that high latency is not really
> desired for a more optimistic allocation requests with a reasonable
> fallback strategy. Those would benefit from kcompactd not giving up too
> early.
Doing pro-active compaction in the background has merit in reducing
high-order allocation latency. It's true that it would end up burning
cycles with little benefit in some cases. It's up to the user of this new
interface to back off if it detects such a case.
>
> > > > Even with these caveats, pro-active compaction can still be very
> > > > useful in certain scenarios to reduce hugepage allocation latencies.
> > > > This callback interface allows drivers to drive compaction based on
> > > > their own policies like the current level of external fragmentation
> > > > for a particular order, system load etc.
> > >
> > > So we do not trust the core MM to make a reasonable decision while we
> > > give
> > > a free ticket to modules. How does this make any sense at all? How is a
> > > random module going to make a more informed decision when it has less
> > > visibility into the overall MM situation?
> > >
> >
> > Embedding any specific policy (like: keep external fragmentation for
> > order-9
> > between 30-40%) within MM core looks like a bad idea.
>
> Agreed
>
> > As a driver, we
> > can easily measure parameters like system load, current fragmentation
> > level
> > for any order in any zone etc. to make an informed decision.
> > See the thread I referred to above for more background discussion.
>
> Do that from the userspace then. If there is an insufficient interface
> to do that then let's talk about what is missing.
>
Currently we only have a proc interface to do full-system compaction.
Here's what's missing from this interface: the ability to set per-node,
per-zone, per-order [low, high] extfrag thresholds. This is what I exposed
in my earlier patch titled 'proactive compaction'. The discussion there
made me realize these are too many tunables and any admin would always get
them wrong. Even if the intended user of these sysfs nodes is some
monitoring daemon, it's tempting to mess with them.
With a callback extension to compact_zone_order(), implementing any of the
per-node, per-zone, per-order limits is straightforward, and the driver can
expose debugfs/sysfs nodes if needed at all. (The nvcompact.c driver[1]
exposes these tunables as debugfs nodes, for example.)
[1] https://gitlab.com/nigupta/linux/snippets/1894161
> > > If you need to control compaction from the userspace you have an
> > > interface
> > > for that. It is also completely unexplained why you need a completion
> > > callback.
> > >
> >
> > /proc/sys/vm/compact_memory does whole-system compaction, which is
> > often too much as a pro-active compaction strategy. To get more control
> > over how much compaction work is done, I have added a compaction
> > callback which controls how much work is done in one compaction cycle.
>
> Why is a more fine grained control really needed? Sure compacting
> everything is heavy weight but how often do you have to do that. Your
> changelog starts with a usecase when there is a high demand for large
> pages at the startup. What prevents you do compaction at that time. If
> the workload is longterm then the initial price should just pay back,
> no?
>
Compacting all NUMA nodes is not practical on large systems in response
to, say, launching a DB process on a certain node. Also, the frequency of
hugepage allocation bursts may be completely unpredictable. That's why
background compaction should keep extfrag in check, say while the system
is lightly loaded (an ad-hoc policy), keeping high-order allocation
latencies low whenever a burst shows up.
- Nitin