Message-ID: <20170612110612.GG7476@dhcp22.suse.cz>
Date: Mon, 12 Jun 2017 13:06:13 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc: hannes@...xchg.org, akpm@...ux-foundation.org, guro@...com,
vdavydov.dev@...il.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 2/2] mm, oom: do not trigger out_of_memory from the
#PF

On Mon 12-06-17 19:48:03, Tetsuo Handa wrote:
> Michal Hocko wrote:
[...]
> > Without this patch
> > this would be impossible.
>
> What I wanted to say is that, with this patch, you are introducing possibility
> of lockup. "Retrying the whole page fault path when page fault allocations
> failed but the OOM killer does not trigger" helps nothing. It will just spin
> wasting CPU time until somebody else invokes the OOM killer.

But this is the very same thing we already do in the page allocator. We
keep retrying, relying on somebody else to make forward progress on our
behalf, for those requests which are in a weaker reclaim context. So what
would be a _new_ lockup that doesn't exist with the current code?
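
To illustrate what I mean, here is a toy user-space model (not kernel
code, all names invented) of an allocation from a weaker reclaim
context: it cannot reclaim or OOM kill on its own, so all it can do is
retry and hope that somebody else frees memory in the meantime:

#include <stdbool.h>
#include <stdio.h>

static int free_pages;                          /* shared "memory", initially empty */

static bool try_alloc(void)                     /* one allocation attempt */
{
    if (free_pages > 0) {
        free_pages--;
        return true;
    }
    return false;
}

static void somebody_else_frees_memory(void)    /* kswapd, an exiting task, ... */
{
    free_pages++;
}

/* weak-context allocation: no own reclaim, no OOM kill, just retry */
static void alloc_weak_context(void)
{
    while (!try_alloc())
        somebody_else_frees_memory();
}

int main(void)
{
    alloc_weak_context();
    puts("allocation succeeded only after somebody else made progress");
    return 0;
}

This is obviously simplified, but it is the same "rely on external
progress" pattern the allocator uses today, so retrying the fault does
not introduce a lockup source which is not already there.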

As I've already said (and I haven't heard a counter argument yet),
unwinding to the #PF has the nice advantage that the whole locking
context will be gone as well. So unlike in the page allocator we can
allow others to make forward progress. This sounds like an advantage to
me.
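
Again a toy user-space sketch (pthreads, all names invented) of that
difference: retrying inside the allocator would spin with the mmap lock
still held, while failing the allocation and refaulting drops the lock
on every retry, so a thread which needs the write lock to free memory
can actually run:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t mmap_lock = PTHREAD_RWLOCK_INITIALIZER;
static int free_pages;                  /* shared "memory", initially empty */

static bool try_alloc(void)             /* one allocation attempt */
{
    if (free_pages > 0) {
        free_pages--;
        return true;
    }
    return false;
}

/*
 * In-allocator retry: spin on the allocation with the mmap lock still
 * held.  The freeing thread below needs the write lock, so it could
 * never run.  Not called from main() because it would hang.
 */
static void retry_inside_allocator(void)
{
    pthread_rwlock_rdlock(&mmap_lock);
    while (!try_alloc())
        ;                               /* others blocked on mmap_lock */
    pthread_rwlock_unlock(&mmap_lock);
}

/*
 * Unwind-to-#PF retry: fail the allocation, drop the lock and retry the
 * whole fault.  While we wait, the locking context is gone.
 */
static void retry_by_unwinding(void)
{
    for (;;) {
        pthread_rwlock_rdlock(&mmap_lock);
        bool ok = try_alloc();
        pthread_rwlock_unlock(&mmap_lock);
        if (ok)
            return;
        usleep(1000);                   /* task gets rescheduled before refaulting */
    }
}

static void *freeing_thread(void *arg)
{
    (void)arg;
    pthread_rwlock_wrlock(&mmap_lock);  /* e.g. munmap() releasing memory */
    free_pages = 1;
    pthread_rwlock_unlock(&mmap_lock);
    return NULL;
}

int main(void)
{
    pthread_t t;

    (void)retry_inside_allocator;       /* shown for contrast only */
    pthread_create(&t, NULL, freeing_thread, NULL);
    retry_by_unwinding();               /* succeeds once the freer has run */
    pthread_join(t, NULL);
    puts("fault retried successfully with the lock dropped in between");
    return 0;
}

The real fault path is of course much more involved, but the point
stands: retrying from the top of the fault means no mmap_sem (or other
fault locking) is held while we wait for memory.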

The only possibility for a new lockup I can see is that some #PF
callpath returns VM_FAULT_OOM without doing an actual allocation (aka a
leaked VM_FAULT_OOM), and in that case it is a bug in that call path.
Why should we trigger a _global_ disruptive action when the bug is
specific to a particular process? Moreover the global OOM killer would
only stop this path from refaulting by killing the task, and that might
happen only after quite some other processes have been killed.
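
To see how disproportionate the global action is, here is one more toy
model (names and numbers invented) of badness-based victim selection:
the buggy, endlessly refaulting task is tiny, so it is only killed after
bigger, perfectly healthy tasks have already been sacrificed:

#include <stdbool.h>
#include <stdio.h>

struct task {
    const char *comm;
    unsigned long rss;                  /* stand-in for the oom badness score */
    bool buggy;                         /* the task leaking VM_FAULT_OOM */
    bool dead;
};

static struct task tasks[] = {
    { "database", 800000, false, false },
    { "browser",  400000, false, false },
    { "leaky-pf",   2000, true,  false },       /* small but buggy */
};

static struct task *select_worst(void)
{
    struct task *victim = NULL;

    for (size_t i = 0; i < sizeof(tasks) / sizeof(tasks[0]); i++)
        if (!tasks[i].dead && (!victim || tasks[i].rss > victim->rss))
            victim = &tasks[i];
    return victim;
}

int main(void)
{
    struct task *victim;

    /* keep "OOM killing" until the buggy task finally goes away */
    while ((victim = select_worst()) && !victim->buggy) {
        victim->dead = true;
        printf("killed innocent task %s\n", victim->comm);
    }
    if (victim)
        printf("the buggy task %s dies only now\n", victim->comm);
    return 0;
}
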
--
Michal Hocko
SUSE Labs