lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:	Thu, 27 Aug 2015 14:41:23 +0200
From:	Michal Hocko <mhocko@...nel.org>
To:	David Rientjes <rientjes@...gle.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mgorman@...e.de>,
	Johannes Weiner <hannes@...xchg.org>,
	Oleg Nesterov <oleg@...hat.com>,
	Vlastimil Babka <vbabka@...e.cz>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org,
	Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
Subject: Re: [patch -mm] mm, oom: add global access to memory reserves on
 livelock

On Wed 26-08-15 15:23:07, David Rientjes wrote:
> On Wed, 26 Aug 2015, Michal Hocko wrote:
> 
> > > Because the company I work for has far too many machines for that to be 
> > > possible.
> > 
> > OK I can see that manual intervention for hundreds of machines is not
> > practical. But not everybody is that large and there are users who might
> > want to be be able to recover.
> >  
> 
> If Andrew would prefer moving in a direction where all Linux users are 
> required to have their admin use sysrq+f to manually trigger an oom kill, 
> which may or may not resolve the livelock since there's no way to 
> determine which process is holding the common mutex (or even which 
> processes are currently allocating), in such situations, then we can carry 
> this patch internally.  I disagree with that solution for upstream Linux.

There are other possibilities than the manual sysrq intervention. E.g.
the already mentioned oom_{panic,reboot}_timeout which has a little
advantage that it allows admin to opt in into the policy rather than
having it hard coded into the kernel.
 
> > > If there is a holder of a mutex that then allocates gigabytes of memory, 
> > > no amount of memory reserves is going to assist in resolving an oom killer 
> > > livelock, whether that's partial access to memory reserves or full access 
> > > to memory reserves.
> > 
> > Sure, but do we have something like that in the kernel? I would argue it
> > would be terribly broken and a clear bug which should be fixed.
> > 
> 
> This is also why my patch dumps the stack trace of both threads: so we can 
> evaluate the memory allocation of threads holding shared mutexes.  If it 
> is excessive, we can report that and show that it is a common offender and 
> see if we can mitigate that.
> 
> The scenario described, the full or partial depletion of memory reserves, 
> does not need to be induced by a single user.  We don't control the order 
> in which the mutex is grabbed so it's multipled by the number of threads 
> that grab it, allocate memory, and drop it before the victim has a chance 
> to grab it.  In the past, the oom killer would also increase the 
> scheduling priority of a victim so it tried to resolve issues like this 
> faster.

> > > Unless the oom watermark was higher than the lowest access to memory 
> > > reserves other than ALLOC_NO_WATERMARKS, then no forward progress would be 
> > > made in this scenario.  I think it would be better to give access to that 
> > > crucial last page that may solve the livelock to make forward progress, or 
> > > panic as a result of complete depletion of memory reserves.  That panic() 
> > > is a very trivial patch that can be checked in the allocator slowpath and 
> > > addresses a problem that already exists today.
> > 
> > The panicing the system is really trivial, no question about that. The
> > question is whether that panic would be premature. And that is what
> > I've tried to tell you.
> 
> My patch has defined that by OOM_EXPIRE_MSECS.  The premise is that an oom 
> victim with full access to memory reserves should never take more than 5s 
> to exit, which I consider a very long time.  If it's increased, we see 
> userspace responsiveness issues with our processes that monitor system 
> health which timeout.

Yes but it sounds very much like a policy which should better be defined
from the userspace because different users might have different
preferences.

-- 
Michal Hocko
SUSE Labs
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ