Date:   Wed, 27 Sep 2017 09:37:44 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     David Rientjes <rientjes@...gle.com>
Cc:     Johannes Weiner <hannes@...xchg.org>, Roman Gushchin <guro@...com>,
        Tejun Heo <tj@...nel.org>, kernel-team@...com,
        linux-mm@...ck.org, Vladimir Davydov <vdavydov.dev@...il.com>,
        Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
        Andrew Morton <akpm@...ux-foundation.org>,
        cgroups@...r.kernel.org, linux-doc@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [v8 0/4] cgroup-aware OOM killer

On Tue 26-09-17 14:04:41, David Rientjes wrote:
> On Tue, 26 Sep 2017, Michal Hocko wrote:
> 
> > > No, I agree that we shouldn't compare sibling memory cgroups based on 
> > > different criteria depending on whether group_oom is set or not.
> > > 
> > > I think it would be better to compare siblings based on the same criteria 
> > > independent of group_oom if the user has mounted the hierarchy with the 
> > > new mode (I think we all agree that the mount option is needed).  It's 
> > > very easy to describe to the user and the selection is simple to 
> > > understand. 
> > 
> > I disagree. Just take the most simplistic example when cgroups reflect
> > some other higher level organization - e.g. school with teachers,
> > students and admins as the top level cgroups to control the proper cpu
> > share load. Now you want to have a fair OOM selection between different
> > entities. Do you consider selecting students all the time as an expected
> > behavior just because they are the largest group? This just doesn't
> > make any sense to me.
> > 
> 
> Are you referring to this?
> 
> 	root
>        /    \
> students    admins
> /      \    /    \
> A      B    C    D
> 
> If the cumulative usage of all students exceeds the cumulative usage of 
> all admins, yes, the choice is to kill from the /students tree.

Which is wrong IMHO, because the number of students is likely much
larger than the number of admins (or teachers), yet it might well be
one of the admins that runs away. This example simply shows how
comparing siblings depends heavily on the way you organize the
hierarchy rather than on the actual runaway memory consumer, which is
what the OOM killer is primarily there to handle.
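
To make this more concrete, here is a minimal sketch with purely made
up numbers (an illustration only, nothing from a real setup): even when
a single admin task is the actual runaway, the sibling comparison still
points at /students simply because there are more of them:

# Hypothetical numbers only; "usage" is cumulative memory in MB.
hierarchy = {
    "students": {"A": 2000, "B": 1800},   # many well behaved students
    "admins":   {"C": 3500, "D": 100},    # C is the actual runaway
}

# Step 1: compare the top-level siblings by cumulative usage.
sums = {name: sum(tasks.values()) for name, tasks in hierarchy.items()}
victim_subtree = max(sums, key=sums.get)      # "students" (3800 > 3600)

# Step 2: descend and pick the largest consumer inside that subtree.
tasks = hierarchy[victim_subtree]
victim_task = max(tasks, key=tasks.get)       # "A", even though C runs away

print(sums, victim_subtree, victim_task)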

> This has been Roman's design from the very beginning.

I suspect this was the case because deeper hierarchies for
organizational purposes hadn't been considered.

> If the preference is to kill 
> the single largest process, which may be attached to either subtree, you 
> would not have opted-in to the new heuristic.

I believe you are making a wrong assumption here. Container cleanup is
a sound reason to opt in, and deeper hierarchies are simply required in
the cgroup v2 world, where you do not have separate hierarchies.
 
> > > Then, once a cgroup has been chosen as the victim cgroup, 
> > > kill the process with the highest badness, allowing the user to influence 
> > > that with /proc/pid/oom_score_adj just as today, if group_oom is disabled; 
> > > otherwise, kill all eligible processes if enabled.
> > 
> > And now, what should be the semantic of group_oom on an intermediate
> > (non-leaf) memcg? Why should we compare it to other killable entities?
> > Roman was mentioning a setup where a _single_ workload consists of a
> > deeper hierarchy which has to be shut down at once. It absolutely makes
> > sense to consider the cumulative memory of that hierarchy when we are
> > going to kill it all.
> > 
> 
> If group_oom is enabled on an intermediate memcg, I think the intuitive 
> way to handle it would be that all descendants are also implicitly or 
> explicitly group_oom.

This is an interesting point, and I would tend to agree. If somebody
requires an all-in cleanup of a hierarchy, it feels strange that a
subtree would disagree (e.g. during a memcg OOM on that subtree). To be
honest, I can hardly see a use case that would really need a different
group_oom policy depending on where in the hierarchy the OOM happened.
Roman?
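
Just to spell out the semantics I have in mind, a rough sketch (toy
code only, not the proposed implementation): group_oom set anywhere up
the hierarchy makes the whole subtree go down as one unit:

class Memcg:
    # Toy memcg node, only to illustrate the inheritance semantics.
    def __init__(self, name, parent=None, group_oom=False):
        self.name, self.parent, self.group_oom = name, parent, group_oom

def effective_group_oom(cg):
    # group_oom on an intermediate memcg implies all descendants are
    # implicitly group_oom as well, so walk towards the root.
    while cg is not None:
        if cg.group_oom:
            return True
        cg = cg.parent
    return False

workload = Memcg("workload", group_oom=True)        # intermediate memcg
helper = Memcg("workload/helper", parent=workload)
print(effective_group_oom(helper))                  # True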

> It is compared to sibling cgroups based on 
> cumulative usage at the time of oom and the largest is chosen and 
> iterated.  The point is to separate out the selection heuristic (policy) 
> from group_oom (mechanism) so that we don't bias or prefer subtrees based 
> on group_oom, which makes this much more complex.

I disagree. group_oom determines the killable entity, and making a
decision based on non-killable entities is weird, as already pointed
out.
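
Just so we are not talking past each other, my understanding of the
proposed split, as a rough sketch only (made-up helper names, not the
actual patch), is something like:

from dataclasses import dataclass, field

@dataclass
class Memcg:
    name: str
    usage: int = 0                      # cumulative usage of the subtree
    group_oom: bool = False
    children: list = field(default_factory=list)
    processes: dict = field(default_factory=dict)   # pid -> badness

def select_victim_memcg(root):
    # Policy: repeatedly descend into the largest sibling by cumulative
    # usage; the comparison itself does not care whether a sibling is a
    # killable (group_oom) entity or just an organizational subtree.
    cg = root
    while cg.children and not cg.group_oom:
        cg = max(cg.children, key=lambda c: c.usage)
    return cg

def kill(cg):
    # Mechanism: group_oom only matters once the victim memcg is chosen.
    if cg.group_oom:
        return list(cg.processes)                    # kill all eligible tasks
    return [max(cg.processes, key=cg.processes.get)] # highest badness only

and it is exactly the selection step, which compares organizational
subtrees and killable entities on an equal footing, that I consider
problematic.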
-- 
Michal Hocko
SUSE Labs
