Date:	Wed, 5 Nov 2014 14:31:00 +0100
From:	Michal Hocko <mhocko@...e.cz>
To:	Tejun Heo <tj@...nel.org>
Cc:	"Rafael J. Wysocki" <rjw@...ysocki.net>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Cong Wang <xiyou.wangcong@...il.com>,
	David Rientjes <rientjes@...gle.com>,
	Oleg Nesterov <oleg@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
	Linux PM list <linux-pm@...r.kernel.org>
Subject: Re: [PATCH 3/4] OOM, PM: OOM killed task shouldn't escape PM suspend

On Wed 05-11-14 08:02:47, Tejun Heo wrote:
> Hello, Michal.
> 
> On Wed, Nov 05, 2014 at 01:46:20PM +0100, Michal Hocko wrote:
> > As I've said I wasn't entirely happy with this half solution but it helped
> > the current situation at the time. The full solution would require to
> 
> I don't think this helps the situation.  It just makes the bug more
> obscure, and the race window, while reduced, is still pretty big;
> there seems to be an actual, not-too-low chance of the bug triggering
> out in the wild.  How does this level of obscuring help anything?  In
> addition to making the bug more difficult to reproduce, it also adds a
> bunch of code which *pretends* to address the issue but ultimately
> just lowers visibility into what's going on and hinders tracking down
> the issue when something actually goes wrong.  This is *NOT* making
> the situation better.  The patch is net negative.

The patch was a compromise: it was needed to catch the most common
OOM conditions that occur while tasks are being frozen. The race window
between the counter increment and the check in the PM path is negligible
compared to the duration of the freezing process. And it is safe from the
OOM point of view because nothing can block the OOM killer out.

> > I think the patch below should be safe. Would you prefer this solution
> > instead? It is race free but there is the risk that exposing a lock which
> 
> Yes, this is a lot saner approach in general.
> 
> > completely blocks OOM killer from the allocation path will kick us
> > later.
> 
> Can you please spell it out?  How would it kick us?  We already have
> oom_killer_disable/enable(), how is this any different in terms of
> correctness from them? 

As I already said in the part of the email you haven't quoted:
oom_killer_disable will cause allocations to _fail_, whereas with the
lock you are _blocking_ the OOM killer completely. That is error prone
because no part of the system should be able to block the last-resort
response to memory shortage.

> Also, why isn't this part of
> oom_killer_disable/enable()?  The way they're implemented is really
> silly now.  It just sets a flag and returns whether there's a
> currently running instance or not.  How were these even useful? 
> Why can't you just make disable/enable to what they were supposed to
> do from the beginning?

Because then we would block all the potential allocators coming from
workqueues or kernel threads which are not frozen yet, rather than
failing the allocation. I am not familiar enough with the PM code and
all the paths this might be called from to tell whether failing the
allocation is a better approach than failing the suspend operation on a
timeout.
-- 
Michal Hocko
SUSE Labs
