Date:	Wed, 8 Jun 2016 09:27:41 +0200
From:	Michal Hocko <mhocko@...nel.org>
To:	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc:	linux-mm@...ck.org, rientjes@...gle.com, oleg@...hat.com,
	vdavydov@...allels.com, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/10 -v3] Handle oom bypass more gracefully

On Wed 08-06-16 06:49:24, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > OK, so you are arming the timer for each mark_oom_victim regardless
> > of the oom context. This means that you have replaced one potential
> > lockup with other potential livelocks. Tasks from different oom domains
> > might interfere here...
> > 
> > Also this code doesn't even seem easier. It is surely fewer lines of
> > code, but it is really hard to tell how the timer would behave for
> > different oom contexts.
> 
> If you worry about interference, we can use a per-signal_struct timestamp.
> I used a per-task_struct timestamp in my earlier versions (where a
> per-task_struct TIF_MEMDIE check was used instead of per-signal_struct
> oom_victims).

This would allow premature selection of a new victim for very large
victims (note that exit_mmap can take a while depending on the mm size).
It also pushes the timeout heuristic on everybody, which will sooner or
later raise the question of why the timeout is $NUMBER rather than
$NUMBER+$FOO.
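
For illustration, the shape of the approach being discussed would be
something like the following. This is purely a sketch: the oom_start
field, the OOM_VICTIM_TIMEOUT constant and oom_victim_expired() are
made-up names, not taken from the actual patch.

static void mark_oom_victim(struct task_struct *tsk)
{
	struct signal_struct *sig = tsk->signal;

	/* remember when this process first got access to memory reserves */
	if (!sig->oom_start)
		sig->oom_start = jiffies;
	set_tsk_thread_flag(tsk, TIF_MEMDIE);
}

static bool oom_victim_expired(struct task_struct *tsk)
{
	unsigned long start = tsk->signal->oom_start;

	/* once the (arbitrary) timeout passes, a new victim may be picked */
	return start && time_after(jiffies, start + OOM_VICTIM_TIMEOUT);
}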

[...]
> But letting the timeout expire by sleeping inside oom_kill_process()
> prevents other OOM-killed threads from obtaining TIF_MEMDIE, because
> anybody who wants TIF_MEMDIE has to wait for oom_lock first.

True, but please note that this will happen only in the _unlikely_ case
when the mm is shared with a kthread or with init. All other cases would
rely on the oom_reaper, which has a feedback mechanism to tell the oom
killer to move on if something bad is going on.
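
Roughly, that feedback path looks like this. A simplified sketch, not
the exact mainline code: reap_victim_mm() and clear_oom_victim() are
placeholder names, and MMF_OOM_REAPED stands in for whatever per-mm flag
is actually used.

static void oom_reap_task(struct task_struct *tsk)
{
	struct mm_struct *mm = tsk->mm;	/* sketch assumes mm is still valid */

	/* tear down as much of the victim's address space as is safe */
	reap_victim_mm(mm);

	/*
	 * Feedback to the OOM killer: the victim's memory has been
	 * released, so stop waiting for it and allow a new victim to
	 * be selected if the OOM condition persists.
	 */
	set_bit(MMF_OOM_REAPED, &mm->flags);
	clear_oom_victim(tsk);
}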

> Unless you set TIF_MEMDIE on all OOM-killed threads from
> oom_kill_process(), or allow the calling context to use
> ALLOC_NO_WATERMARKS by checking whether current was already OOM-killed
> rather than checking TIF_MEMDIE, trying to let the timeout expire by
> sleeping inside oom_kill_process() is useless.

Well, this is a rather strong statement for a highly unlikely corner
case, don't you think? I do not mind fortifying this class of cases some
more if we ever find out they are a real problem, but at this stage I
would rather make sure they cannot lock up than optimize for them.
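
For concreteness, the alternative quoted above boils down to something
like the check below. Sketch only; oom_killed() is a hypothetical
predicate along the lines of "fatally signalled and part of a process
the OOM killer has already selected".

static bool may_use_oom_reserves(void)
{
	/* the thread explicitly marked by the OOM killer */
	if (test_thread_flag(TIF_MEMDIE))
		return true;

	/* any other thread of a process that has already been OOM-killed */
	return oom_killed(current);
}

That would make the ALLOC_NO_WATERMARKS decision independent of which
particular thread happened to get TIF_MEMDIE.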

To be honest, I would rather explore ways to handle the kthread case
(which is the only real one of the two, IMHO) gracefully and make it a
non-issue - e.g. enforce EFAULT on a dead mm during a kthread page
fault, or something similar.
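
In other words, something along these lines. A very rough sketch of the
direction, not a concrete proposal: mm_is_dead() and
handle_normal_fault() are made-up names for whatever check and fallback
path this would actually hook into.

static int kthread_user_fault(struct mm_struct *mm, unsigned long address,
			      unsigned int flags)
{
	/* the victim's mm has already been torn down (e.g. reaped) */
	if (mm_is_dead(mm))
		return VM_FAULT_SIGSEGV;	/* surfaces as -EFAULT to the kthread */

	/* otherwise fall back to the normal fault handling */
	return handle_normal_fault(mm, address, flags);
}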
-- 
Michal Hocko
SUSE Labs
