Date:	Wed, 30 Sep 2015 14:11:36 -0700 (PDT)
From:	David Rientjes <rientjes@...gle.com>
To:	Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
cc:	mhocko@...nel.org, oleg@...hat.com, kwalker@...hat.com,
	cl@...ux.com, Andrew Morton <akpm@...ux-foundation.org>,
	hannes@...xchg.org, vdavydov@...allels.com, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, skozina@...hat.com
Subject: Re: can't oom-kill zap the victim's memory?

On Wed, 30 Sep 2015, Tetsuo Handa wrote:

> If we choose only one OOM victim, the probability of hitting this memory
> unmapping livelock is (say) 1%.  But if we choose multiple OOM victims, the
> probability becomes (almost) 0%.  And if we still hit this livelock even
> after choosing many OOM victims, it is time to call panic().
> 

Again, this is a fundamental disagreement between your approach of 
randomly killing processes in the hope of targeting one that can make a 
quick exit, and my approach of giving threads access to memory reserves 
after reclaim has failed in an oom livelock so that they can at least 
make forward progress.  We're going around in circles.
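
For context, a rough sketch of the existing reserve-access path this
refers to, loosely modelled on gfp_to_alloc_flags() in mm/page_alloc.c of
this era (simplified and from memory, not an exact quote of the source):

	/*
	 * Simplified: a task the OOM killer has marked with TIF_MEMDIE, or
	 * one already reclaiming under PF_MEMALLOC, is allowed to ignore
	 * the zone watermarks and dip into the memory reserves.
	 */
	static int gfp_to_alloc_flags(gfp_t gfp_mask)
	{
		int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;

		if (gfp_mask & __GFP_HIGH)		/* e.g. GFP_ATOMIC */
			alloc_flags |= ALLOC_HIGH;

		if (likely(!(gfp_mask & __GFP_NOMEMALLOC))) {
			if (gfp_mask & __GFP_MEMALLOC)
				alloc_flags |= ALLOC_NO_WATERMARKS;
			else if (!in_interrupt() &&
				 ((current->flags & PF_MEMALLOC) ||
				  unlikely(test_thread_flag(TIF_MEMDIE))))
				alloc_flags |= ALLOC_NO_WATERMARKS;
		}

		return alloc_flags;
	}

Roughly speaking, the approach described above would let threads stuck in
such a livelock reach the ALLOC_NO_WATERMARKS path as well, rather than
only the single chosen victim.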

> (Well, do we need to change __alloc_pages_slowpath() so that OOM victims do
> not enter direct reclaim paths, in order to avoid being blocked by unkillable
> fs locks?)
> 

OOM victims shouldn't need to enter reclaim at all, and there have been 
patches before to abort reclaim when current has a pending SIGKILL and 
access to memory reserves.  Nothing prevents the victim from already being 
in reclaim, however, at the time it is killed.
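
As an illustration of the kind of bail-out those patches added (the
function name and exact placement here are hypothetical; the actual
patches put the check at various points in the direct reclaim path):

	/*
	 * A task that has already been SIGKILLed and granted reserve access
	 * (TIF_MEMDIE) gains little from continuing to reclaim; it should
	 * exit and free its own memory instead.
	 */
	static bool reclaim_should_abort(void)
	{
		return fatal_signal_pending(current) &&
		       test_thread_flag(TIF_MEMDIE);
	}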

> > Perhaps this is an argument that we need to provide access to memory 
> > reserves for threads even for !__GFP_WAIT and !__GFP_FS in such scenarios, 
> > but I would wait to make that extension until we see it in practice.
> 
> I think that GFP_ATOMIC allocations already access memory reserves via
> ALLOC_HIGH priority.
> 

Yes, that's true.  It doesn't help for GFP_NOFS, however.  It's possible 
that the GFP_ATOMIC reserves have been depleted, or that a GFP_NOFS 
allocation gets stuck looping forever without ever being given the 
ability to allocate without watermarks.  I'd wait to see that in practice 
before making this extension, since it relies on scanning the tasklist.
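
To make the ALLOC_HIGH point concrete, here is a simplified sketch of the
watermark check, modelled on __zone_watermark_ok() in mm/page_alloc.c of
this era (the helper name and flattened parameters are mine):

	/*
	 * ALLOC_HIGH (__GFP_HIGH, i.e. GFP_ATOMIC) halves the watermark and
	 * ALLOC_HARDER shaves off another quarter, but neither bypasses it,
	 * so that slice of the reserve can still be exhausted.  A GFP_NOFS
	 * allocation with none of these flags, and without
	 * ALLOC_NO_WATERMARKS, just keeps looping in the slowpath once free
	 * memory falls below its watermark.
	 */
	static bool watermark_ok_sketch(long free_pages, long mark,
					long lowmem_reserve,
					unsigned int order, int alloc_flags)
	{
		long min = mark;

		free_pages -= (1 << order) - 1;

		if (alloc_flags & ALLOC_HIGH)
			min -= min / 2;
		if (alloc_flags & ALLOC_HARDER)
			min -= min / 4;

		return free_pages > min + lowmem_reserve;
	}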
