Date:	Wed, 20 Nov 2013 09:14:51 -0800
From:	Luigi Semenzato <semenzato@...gle.com>
To:	Michal Hocko <mhocko@...e.cz>
Cc:	David Rientjes <rientjes@...gle.com>, linux-mm@...ck.org,
	Greg Thelen <gthelen@...gle.com>,
	Glauber Costa <glommer@...il.com>,
	Mel Gorman <mgorman@...e.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Johannes Weiner <hannes@...xchg.org>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Rik van Riel <riel@...hat.com>,
	Joern Engel <joern@...fs.org>, Hugh Dickins <hughd@...gle.com>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: user defined OOM policies

On Wed, Nov 20, 2013 at 7:22 AM, Michal Hocko <mhocko@...e.cz> wrote:
> On Wed 20-11-13 00:02:20, David Rientjes wrote:
>> On Tue, 19 Nov 2013, Michal Hocko wrote:
>>
>> > > We have basically ended up with 3 options AFAIR:
>> > >   1) allow the memcg approach (memcg.oom_control) at the root level,
>> > >      for both OOM notification and blocking the OOM killer, and
>> > >      handle the situation from userspace the same way we can for
>> > >      other memcgs.
>> >
>> > This looks like a straightforward approach, since a similar thing is
>> > already done at the local (memcg) level. There are several problems,
>> > though. Running userspace code from within the OOM context is terribly
>> > hard to do right.
>>
>> Not sure it's hard if you have per-memcg memory reserves which I've
>> brought up in the past with true and complete kmem accounting.  Even if
>> you don't allocate slab, it guarantees that there will be at least a
>> little excess memory available so that the userspace oom handler isn't oom
>> itself.
>> This involves treating processes waiting on memory.oom_control as a
>> special class
>
> How do you identify such a process?
>
>> so that they are allowed to allocate an
>> additional pre-configured amount of memory.  For non-root memcgs, this
>> would simply be a dummy usage charged to the memcg when the oom
>> notification is registered, actually accessible only by the oom handler
>> itself while memcg->under_oom is set.  For root memcgs, this would be a
>> PF_MEMALLOC-type behavior that dips into per-zone memory reserves.
>>
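For reference, the "waiting on memory.oom_control" part already exists
for non-root memcgs via the eventfd registration in cgroup.event_control.
A minimal sketch of such a handler, with an illustrative cgroup path:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/eventfd.h>
#include <unistd.h>

int main(void)
{
        const char *grp = "/sys/fs/cgroup/memory/mygroup"; /* illustrative */
        char path[256], line[64];
        uint64_t n;
        int efd, ofd, cfd;

        efd = eventfd(0, 0);
        snprintf(path, sizeof(path), "%s/memory.oom_control", grp);
        ofd = open(path, O_RDWR);
        snprintf(path, sizeof(path), "%s/cgroup.event_control", grp);
        cfd = open(path, O_WRONLY);
        if (efd < 0 || ofd < 0 || cfd < 0)
                return 1;

        /* register the eventfd: "<eventfd fd> <oom_control fd>" */
        snprintf(line, sizeof(line), "%d %d", efd, ofd);
        write(cfd, line, strlen(line));

        /* disable the kernel oom killer for this memcg so the
         * handler gets to act first */
        write(ofd, "1", 1);

        for (;;) {
                if (read(efd, &n, sizeof(n)) != sizeof(n))
                        break;
                /* memcg is under oom: free memory or kill something;
                 * note this code itself must not need much memory */
        }
        return 0;
}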
>> > This is true even in the memcg case and we strongly discourage
>> > users from doing that. The global case has nothing like an
>> > outside-of-OOM context, though, so any hang would block the whole
>> > machine.
>>
>> Why would there be a hang if the userspace oom handlers aren't actually
>> oom themselves as described above?
>
> Because all the reserves might be depleted.
>
>> I'd advise against the other two options because hierarchical
>> per-memcg userspace oom handlers are very powerful and can be useful
>> without actually killing anything at all, and parent oom handlers can
>> signal child oom handlers to free memory in oom conditions (in other
>> words, a parent oom condition can be deferred to a child's oom handler
>> upon notification).
>
> OK, but what about those who are not using memcg and need similar
> functionality? Are there any, btw?

Chrome OS uses a custom low-memory notification to minimize OOM kills.
When the notifier triggers, the Chrome browser tries to free memory,
including by shutting down processes, before the full OOM occurs.  But
OOM kills cannot always be avoided; it depends on the speed of
allocation and on how much CPU the freeing tasks are able to use
(certainly they could be given higher priority, but that gets complex).
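
In case it helps, the notifier loop on our side is conceptually just
the sketch below (the device name is illustrative, not a stable ABI):

#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

int main(void)
{
        /* illustrative device name for the low-memory notifier */
        int fd = open("/dev/chromeos-low-mem", O_RDONLY);
        struct pollfd pfd = { .fd = fd, .events = POLLIN };

        if (fd < 0)
                return 1;
        /* POLLIN means "memory is low, but we are not at OOM yet" */
        while (poll(&pfd, 1, -1) > 0) {
                /* discard tabs, drop caches, shut down background
                 * processes; anything to free memory before the
                 * kernel's OOM killer has to step in */
        }
        return 0;
}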

We may end up using memcg so that we can use the cgroup
memory.pressure_level file instead of our own notifier, but we have no
need for control over OOM kills any finer than the (very useful) kill
priority.  Killing one process at a time is good enough for us.
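
memory.pressure_level uses the same eventfd registration scheme as
memory.oom_control, with a level ("low", "medium", or "critical")
appended to the registration string.  A sketch, path again illustrative:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/eventfd.h>
#include <unistd.h>

int main(void)
{
        const char *grp = "/sys/fs/cgroup/memory/mygroup"; /* illustrative */
        char path[256], line[64];
        uint64_t n;
        int efd, pfd, cfd;

        efd = eventfd(0, 0);
        snprintf(path, sizeof(path), "%s/memory.pressure_level", grp);
        pfd = open(path, O_RDONLY);
        snprintf(path, sizeof(path), "%s/cgroup.event_control", grp);
        cfd = open(path, O_WRONLY);
        if (efd < 0 || pfd < 0 || cfd < 0)
                return 1;

        /* "<eventfd fd> <pressure_level fd> <level>" */
        snprintf(line, sizeof(line), "%d %d medium", efd, pfd);
        write(cfd, line, strlen(line));

        while (read(efd, &n, sizeof(n)) == sizeof(n)) {
                /* medium pressure: start reclaiming in the browser
                 * before the kernel has to OOM-kill anything */
        }
        return 0;
}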

>
>> I was planning on writing a liboom library that would lay the
>> foundation for how this is supposed to work and provide some generic
>> functions that make use of the per-memcg memory reserves.
>>
>> So my plan for the complete solution was:
>>
>>  - allow userspace notification from the root memcg on system oom
>>    conditions,
>>
>>  - implement a memory.oom_delay_millisecs timeout so that the kernel
>>    eventually intervenes if userspace fails to respond for whatever
>>    reason, including for system oom conditions; this would be set to 0
>>    when no userspace oom handler is registered for the notification
>>    (sketched after this list), and
>
> One thing I really dislike about a timeout is that there is no easy way
> to find out which value is safe. It might be easier in well-controlled
> environments where you know what the load is and how it behaves, but how
> does an ordinary user know which number to put there without risking a
> race where userspace just doesn't respond in time?
>
>>  - implement per-memcg reserves, as described above, as an upfront
>>    charge so that userspace oom handlers have access to memory even in
>>    oom conditions and retain the ability to free memory as necessary.
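
To make the delay knob in item 2 concrete: its usage would presumably
look like the sketch below.  Note that memory.oom_delay_millisecs is
only proposed here and exists in no kernel; the helper, path, and value
are all hypothetical.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* hypothetical helper for a *proposed* knob; the file below does
 * not exist in any kernel */
static int set_oom_delay(const char *memcg, const char *millisecs)
{
        char path[256];
        int fd, ret;

        snprintf(path, sizeof(path), "%s/memory.oom_delay_millisecs",
                 memcg);
        fd = open(path, O_WRONLY);
        if (fd < 0)
                return -1;
        ret = write(fd, millisecs, strlen(millisecs)) < 0 ? -1 : 0;
        close(fd);
        return ret;
}

int main(void)
{
        /* give userspace half a second before the kernel intervenes */
        return set_oom_delay("/sys/fs/cgroup/memory", "500");
}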
>
> This has a similar issue to the one above. How do you estimate the size
> of the reserve? And how do you keep such a reserve stable across kernel
> versions, where the same query might consume more memory?
>
> As I've said in my previous email, the reserves can help, but they are
> still easy to get wrong and look rather fragile for general purposes.
>
>> We already have the ability to do the actual kill from userspace: both
>> the system oom killer and the memcg oom killer grant access to memory
>> reserves to any process that needs to allocate memory while it has a
>> pending SIGKILL, which we can send from userspace.
>
> Yes, the killing part is not a problem; the selection is the hard part.
>
> --
> Michal Hocko
> SUSE Labs
>
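The killing half, plus a naive stand-in for the hard selection half,
could look like the sketch below.  It just reuses the kernel's badness
score as exported in /proc/<pid>/oom_score; a real handler would replace
pick_victim() with its own policy.

#include <ctype.h>
#include <dirent.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

/* naive policy: highest /proc/<pid>/oom_score wins */
static pid_t pick_victim(void)
{
        DIR *proc = opendir("/proc");
        struct dirent *de;
        pid_t victim = -1;
        long best = -1;

        if (!proc)
                return -1;
        while ((de = readdir(proc)) != NULL) {
                char path[64];
                long score;
                FILE *f;

                if (!isdigit((unsigned char)de->d_name[0]))
                        continue;
                snprintf(path, sizeof(path), "/proc/%s/oom_score",
                         de->d_name);
                f = fopen(path, "r");
                if (!f)
                        continue;
                if (fscanf(f, "%ld", &score) == 1 && score > best) {
                        best = score;
                        victim = (pid_t)atol(de->d_name);
                }
                fclose(f);
        }
        closedir(proc);
        return victim;
}

int main(void)
{
        pid_t victim = pick_victim();

        /* the pending SIGKILL is what grants the dying task access
         * to memory reserves, per the quoted paragraph above */
        if (victim > 1)
                kill(victim, SIGKILL);
        return 0;
}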