Message-ID: <20190411171909.GB5136@cmpxchg.org>
Date: Thu, 11 Apr 2019 13:19:09 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Michal Hocko <mhocko@...nel.org>
Cc: Suren Baghdasaryan <surenb@...gle.com>, akpm@...ux-foundation.org,
rientjes@...gle.com, willy@...radead.org,
yuzhoujian@...ichuxing.com, jrdr.linux@...il.com, guro@...com,
penguin-kernel@...ove.sakura.ne.jp, ebiederm@...ssion.com,
shakeelb@...gle.com, christian@...uner.io, minchan@...nel.org,
timmurray@...gle.com, dancol@...gle.com, joel@...lfernandes.org,
jannh@...gle.com, linux-mm@...ck.org,
lsf-pc@...ts.linux-foundation.org, linux-kernel@...r.kernel.org,
kernel-team@...roid.com
Subject: Re: [RFC 0/2] opportunistic memory reclaim of a killed process

On Thu, Apr 11, 2019 at 12:51:11PM +0200, Michal Hocko wrote:
> I would question whether we really need this at all? Relying on the exit
> speed sounds like a fundamental design problem of anything that relies
> on it. Sure task exit might be slow, but async mm tear down is just a
> mere optimization that is not guaranteed to really help in speeding
> things up. The OOM killer uses it as a guarantee of forward progress in
> a finite time rather than as soon as possible.

I don't think it's flawed, it's just optimizing the user experience as
best as it can. You don't want to kill things prematurely, but once
there is pressure you want to rectify it quickly. That's valid.

We have a tool that does this, side effect or not, so I think it's
fair to try to make use of it when oom killing from userspace (which
we explicitly support with oom_control in cgroup1 and memory.high in
cgroup2, and it's not just an Android thing).

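For context, the cgroup2 side of that userspace contract is the
memory.high knob: the kernel throttles and reclaims above the limit
instead of invoking the OOM killer, leaving the kill decision to a
userspace monitor. A minimal sketch, assuming cgroup v2 is mounted at
/sys/fs/cgroup (the "workload" group name and 100M limit are
illustrative, not from the thread):

```shell
# Sketch: hand OOM policy to userspace via memory.high (cgroup v2).
# Assumes cgroup2 mounted at /sys/fs/cgroup; needs root. Names are
# illustrative.
mkdir -p /sys/fs/cgroup/workload
echo 100M > /sys/fs/cgroup/workload/memory.high   # throttle, don't kill
echo $$   > /sys/fs/cgroup/workload/cgroup.procs  # move this shell in

# A userspace daemon then watches memory pressure (memory.events,
# memory.pressure) and issues the kill itself when it decides to:
#   kill -KILL "$victim_pid"
```

Once the kill is userspace's job, how quickly the killed task's memory
actually comes back is exactly the latency this thread is arguing about.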
The question is how explicit a contract we want to make with
userspace, and I would much prefer not to overpromise on a best-effort
thing like this, or to make the oom reaper part of the ABI.

If unconditionally reaping killed tasks is too expensive, I'd much
prefer a simple kill hint over an explicit task reclaim interface.