Date:	Thu, 14 Jan 2016 17:58:50 -0500
From:	Johannes Weiner <hannes@...xchg.org>
To:	David Rientjes <rientjes@...gle.com>
Cc:	Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
	mhocko@...nel.org, Andrew Morton <akpm@...ux-foundation.org>,
	mgorman@...e.de, torvalds@...ux-foundation.org, oleg@...hat.com,
	hughd@...gle.com, andrea@...nel.org, riel@...hat.com,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm,oom: Re-enable OOM killer using timers.

On Thu, Jan 14, 2016 at 02:01:45PM -0800, David Rientjes wrote:
> On Thu, 14 Jan 2016, Tetsuo Handa wrote:
> > I know. What I'm proposing is to try to recover by killing more
> > OOM-killable tasks, because I think the impact of crashing the kernel is
> > larger than the impact of killing all OOM-killable tasks. We should at
> > least try to OOM-kill every OOM-killable process before crashing the
> > kernel. Some servers take many minutes to reboot, whereas restarting
> > OOM-killed services takes only a few seconds. Also, SysRq-i is
> > inconvenient because it kills even the OOM-unkillable ssh daemon process.
> 
> This is where you and I disagree; the goal should not be to continue to
> oom kill more and more processes, since there is no guarantee that further
> kills will result in forward progress.  These additional kills can result
> in the same livelock that is already problematic, and killing additional
> processes makes the situation worse since memory reserves become even more
> depleted.
> 
> I believe what is better is to exhaust reclaim, check if the page
> allocator is constantly looping due to waiting for the same victim to
> exit, and then allow that allocation to use memory reserves; see the
> attached patch, which I have proposed before.
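
As I read it, the idea boils down to something like the following. This is
only a rough, standalone sketch of how I understand it, not the attached
patch; all names and the retry threshold are made up for illustration:

	#include <stdbool.h>
	#include <stdio.h>

	/* stand-in for the kernel's task_struct; only identity matters here */
	struct task { int pid; };

	static const struct task *last_victim;
	static unsigned int stall_retries;
	#define OOM_STALL_RETRY_LIMIT 16	/* made-up threshold */

	/*
	 * Called from the allocator's retry loop after reclaim has been
	 * exhausted and an OOM victim has been selected.  Returns true once
	 * we have been waiting on the same victim for too many retries,
	 * i.e. the point at which the allocating task would be allowed to
	 * dip into memory reserves.
	 */
	static bool should_use_reserves(const struct task *victim)
	{
		if (victim != last_victim) {
			/* a different victim was selected: forward progress */
			last_victim = victim;
			stall_retries = 0;
			return false;
		}
		/* still looping on the same victim that has not exited yet */
		return ++stall_retries > OOM_STALL_RETRY_LIMIT;
	}

	int main(void)
	{
		struct task stuck = { .pid = 1234 };
		unsigned int i;

		for (i = 0; i < 20; i++)
			if (should_use_reserves(&stuck))
				printf("retry %u: allocation may use reserves\n", i);
		return 0;
	}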

If giving the reserves to another OOM victim is bad, how is giving
them to the *allocating* task supposed to be better? Which path is
more likely to release memory? That doesn't seem to follow.

We need to make the OOM killer conclude in a fixed amount of time, no
matter what happens. If the system is irrecoverably deadlocked on
memory, it needs to panic (and reboot) so we can get on with it. And
it's silly to panic while there are still killable tasks available.

Hence my proposal to wait a decaying amount of time after each OOM
victim before moving on, until we have killed everything in the system
and panic (and reboot). What else can we do once we are out of memory?
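
In rough pseudo-C, the decaying wait would look something like this. Again,
a standalone sketch of the idea rather than a patch; the names, the starting
timeout and the halving step are placeholders:

	#include <stdbool.h>
	#include <stdio.h>

	#define OOM_INITIAL_WAIT_SECS 10	/* made-up starting timeout */

	/* Stand-ins for victim selection/killing and waiting on its exit. */
	static bool kill_next_victim(void)
	{
		static int killable_left = 3;	/* pretend 3 killable tasks exist */
		return killable_left-- > 0;
	}

	static bool victim_exited_within(int secs)
	{
		(void)secs;
		return false;			/* model the worst case: nobody exits */
	}

	/*
	 * Give each OOM victim a (shrinking) amount of time to exit; if it
	 * does not, move on and kill the next one; once nothing killable is
	 * left, panic.
	 */
	static void oom_kill_with_decaying_timeout(void)
	{
		int wait = OOM_INITIAL_WAIT_SECS;

		while (kill_next_victim()) {
			if (victim_exited_within(wait))
				return;		/* memory was released, recovered */
			printf("victim stuck after %d seconds, moving on\n", wait);
			if (wait > 1)
				wait /= 2;	/* wait less for each further victim */
		}
		printf("panic: out of memory and nothing left to kill\n");
	}

	int main(void)
	{
		oom_kill_with_decaying_timeout();
		return 0;
	}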
