Message-ID: <20130604095514.GC31242@dhcp22.suse.cz>
Date: Tue, 4 Jun 2013 11:55:14 +0200
From: Michal Hocko <mhocko@...e.cz>
To: David Rientjes <rientjes@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
cgroups@...r.kernel.org
Subject: Re: [patch] mm, memcg: add oom killer delay
On Mon 03-06-13 14:17:54, David Rientjes wrote:
> On Mon, 3 Jun 2013, Michal Hocko wrote:
>
> > > What do you suggest when reading the "tasks" file returns -ENOMEM
> > > because a kmalloc() fails, since the userspace oom handler's memcg is
> > > itself oom?
> >
> > That would require tracking kernel allocations, which is currently
> > done only for explicit caches.
> >
>
> That will not always be the case, and I think this could be a prerequisite
> patch for such support that we have internally.
> I'm not sure a userspace oom notifier would want to keep a
> preallocated buffer around that is mlocked in memory for all possible
> lengths of this file.
Well, an oom handler which allocates memory under the same restricted
memcg doesn't make much sense to me. And once all kmem allocations are
tracked, it becomes almost impossible to implement a non-trivial handler
inside that group.
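For illustration, a handler can arrange to service oom notifications
without allocating at all by preallocating and pinning its working memory
up front. A minimal sketch (the buffer size and names are made up, not
anything from the patch under discussion):

```c
/* Minimal sketch: an oom handler preallocates and pins everything it
 * will need at startup so that it never allocates (or faults) while
 * the group it watches is under oom.  Size and names are illustrative. */
#include <string.h>
#include <sys/mman.h>

#define HANDLER_BUF_SIZE (1 << 20)
char task_buf[HANDLER_BUF_SIZE];	/* preallocated scratch buffer */

int handler_init(void)
{
	/* Touch the buffer so it is backed by real pages, then pin all
	 * current and future mappings so no later page fault can
	 * allocate memory while an oom is being handled. */
	memset(task_buf, 0, sizeof(task_buf));
	return mlockall(MCL_CURRENT | MCL_FUTURE);
}
```

Note that mlockall() needs CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK,
which is one more reason such a handler typically runs outside the
restricted group.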
> > > Obviously it's not a situation we want to get into, but unless you
> > > know that handler's exact memory usage across multiple versions, nothing
> > > else is sharing that memcg, and it's a perfect implementation, you can't
> > > guarantee it. We need to address real world problems that occur in
> > > practice.
> >
> > If you really need to have such a guarantee then you can have a _global_
> > watchdog observing oom_control of all the groups that place such vague
> > requirements on their oom user handlers.
> >
>
> The whole point is to allow the user to implement their own oom policy.
OK, maybe I just wasn't clear enough, or I am missing your point. Your
users _can_ implement and register their own oom handlers. But since your
requirements on the handler implementations are rather lenient, you would
have a global watchdog sitting on the oom_control of all the groups that
are allowed to have their own handlers (all of them in your case, I
guess). On each oom notification it starts a (user-defined or global)
timeout; if the group has stayed under oom for the whole timeout, it
simply re-enables the kernel oom killer via oom_control until the oom
situation settles (under_oom is 0).
Why wouldn't something like this work for your use case?
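The watchdog scheme above could look roughly like the following sketch
against the current memcg v1 eventfd interface. Paths, helper names, and
the trivial sleep-based timeout are illustrative assumptions, and error
handling is omitted:

```c
/* Rough sketch of a global userspace oom watchdog for one memcg v1
 * group.  Paths and helper names are illustrative, not an interface
 * from the patch under discussion. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* Parse the text of memory.oom_control; return 1 iff it reports
 * "under_oom 1". */
int memcg_under_oom(const char *buf)
{
	const char *p = strstr(buf, "under_oom");
	return p != NULL && atoi(p + strlen("under_oom")) == 1;
}

/* Watch one group: register an eventfd on memory.oom_control through
 * cgroup.event_control; on each oom notification wait timeout_sec and,
 * if the group is still under oom, write "0" to memory.oom_control to
 * re-enable the kernel oom killer for it. */
void watch_group(const char *cgdir, int timeout_sec)
{
	char path[512], buf[256];
	int efd = eventfd(0, 0);

	snprintf(path, sizeof(path), "%s/memory.oom_control", cgdir);
	int ocfd = open(path, O_RDWR);
	snprintf(path, sizeof(path), "%s/cgroup.event_control", cgdir);
	int ecfd = open(path, O_WRONLY);
	snprintf(buf, sizeof(buf), "%d %d", efd, ocfd);
	write(ecfd, buf, strlen(buf));	/* arm the oom notification */

	for (;;) {
		uint64_t n;
		read(efd, &n, sizeof(n));	/* blocks until an oom event */
		sleep(timeout_sec);		/* grace period for the handler */
		ssize_t len = pread(ocfd, buf, sizeof(buf) - 1, 0);
		if (len > 0) {
			buf[len] = '\0';
			if (memcg_under_oom(buf))	/* no progress was made */
				write(ocfd, "0", 1);	/* re-enable kernel oom killer */
		}
	}
}
```

The watchdog itself lives outside the watched groups, so it is not
subject to their limits while the timeout runs.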
> If the policy were completely encapsulated in kernel code, we would never
> need to disable the oom killer, even with memory.oom_control. Users may
> choose to kill the largest process, the newest process, the oldest
> process, sacrifice children instead of parents, prevent forkbombs,
> implement their own priority scoring (which is what we do), kill the
> allocating task, etc.
>
> To not merge this patch, I'd ask that you show an alternative that allows
> users to implement their own userspace oom handlers and not require admin
> intervention when things go wrong.
Hohmm, so you are insisting on putting something into the kernel that can
be implemented in userspace, just because that is more convenient for you
and your use case. That is not a sound basis for accepting a feature.
To make this absolutely clear: I do understand your requirements, but you
haven't shown any _argument_ for why the timeout you are proposing cannot
be implemented in userspace. I will not ack this without that reasoning.
And yes, we should make memcg oom handling less deadlock prone; Johannes'
work in this thread is a good step in that direction.
--
Michal Hocko
SUSE Labs