Message-ID: <20150825152650.GI6285@dhcp22.suse.cz>
Date: Tue, 25 Aug 2015 17:26:51 +0200
From: Michal Hocko <mhocko@...nel.org>
To: David Rientjes <rientjes@...gle.com>
Cc: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
akpm@...ux-foundation.org, mgorman@...e.de, hannes@...xchg.org,
oleg@...hat.com, vbabka@...e.cz, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [patch -mm] mm, oom: add global access to memory reserves on
livelock
On Mon 24-08-15 14:10:10, David Rientjes wrote:
> On Fri, 21 Aug 2015, Tetsuo Handa wrote:
>
> > Why can't we think about choosing more OOM victims instead of granting access
> > to memory reserves?
> >
>
> We have no indication of which thread is holding a mutex that would need
> to be killed, so we'd be randomly killing processes waiting for forward
> progress. A worst-case scenario would be that the thread is OOM_DISABLE and
> we kill every process on the system needlessly. This problem obviously
> occurs often enough that killing all userspace isn't going to be a viable
> solution.
>
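For illustration, a minimal sketch of what "pick another victim" would have
to look like (the helper is hypothetical, not from any posted patch; only
the task fields and iteration helpers are real):

/*
 * Hypothetical sketch, not from any posted patch: what "pick another
 * victim" would have to do.
 */
#include <linux/sched.h>
#include <linux/oom.h>

static struct task_struct *pick_next_victim(void)
{
	struct task_struct *p, *victim = NULL;

	rcu_read_lock();
	for_each_process(p) {
		/* OOM_DISABLE tasks can never be selected... */
		if (p->signal->oom_score_adj == OOM_SCORE_ADJ_MIN)
			continue;
		/*
		 * ...and nothing here can tell whether @p actually holds
		 * the lock the current victim is blocked on, so the
		 * choice is effectively random.
		 */
		victim = p;
		break;
	}
	rcu_read_unlock();

	return victim;
}

If the lock holder happens to be one of the OOM_DISABLE tasks, repeating
this until something frees memory just kills every killable process for
nothing.
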
> > Also, SysRq might not be usable under OOM because workqueues can get stuck.
> > The panic_on_oom_timeout was first proposed using a workqueue but was
> > updated to use a timer because there is no guarantee that workqueues work
> > as expected under OOM.
> >
>
> I don't know anything about a panic_on_oom_timeout,
You were CCed on that discussion:
http://lkml.kernel.org/r/20150609170310.GA8990%40dhcp22.suse.cz
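For reference, the point of using a timer there is that it fires from
interrupt context, so it does not need a worker thread that might itself be
blocked or unable to allocate memory. A rough sketch (illustrative only,
not the posted patch; the 10s timeout is arbitrary):

#include <linux/timer.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>

static void panic_on_oom_timer_fn(unsigned long unused)
{
	panic("Out of memory: no progress after the OOM killer fired");
}

static DEFINE_TIMER(panic_on_oom_timer, panic_on_oom_timer_fn, 0, 0);

/* Armed when the OOM killer selects a victim. */
static void arm_panic_on_oom_timer(void)
{
	mod_timer(&panic_on_oom_timer, jiffies + 10 * HZ);
}

/* Disarmed once the victim exits and memory is freed. */
static void disarm_panic_on_oom_timer(void)
{
	del_timer_sync(&panic_on_oom_timer);
}
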
> but panicking would
> only be a reasonable action if memory reserves were fully depleted. That
> could easily be dealt with in the page allocator so there's no timeout
> involved.
As noted in another email, depletion alone is not a good indicator. The
system can still make forward progress even when the reserves are
depleted.
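
To make this concrete, the check you seem to have in mind would look
roughly like the following (hypothetical helper, not an existing one):

#include <linux/mmzone.h>
#include <linux/vmstat.h>

static bool all_reserves_depleted(void)
{
	struct zone *zone;

	for_each_populated_zone(zone) {
		/* Any free page left means the reserves are not empty yet. */
		if (zone_page_state(zone, NR_FREE_PAGES) > 0)
			return false;
	}
	return true;
}

Even when this returns true, the current victim may still exit and release
memory, so it says nothing about whether the system is really stuck.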
--
Michal Hocko
SUSE Labs