Message-ID: <alpine.DEB.2.10.1709251510430.15961@chino.kir.corp.google.com>
Date:   Mon, 25 Sep 2017 15:21:03 -0700 (PDT)
From:   David Rientjes <rientjes@...gle.com>
To:     Johannes Weiner <hannes@...xchg.org>
cc:     Michal Hocko <mhocko@...nel.org>, Roman Gushchin <guro@...com>,
        Tejun Heo <tj@...nel.org>, kernel-team@...com,
        linux-mm@...ck.org, Vladimir Davydov <vdavydov.dev@...il.com>,
        Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
        Andrew Morton <akpm@...ux-foundation.org>,
        cgroups@...r.kernel.org, linux-doc@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [v8 0/4] cgroup-aware OOM killer

On Mon, 25 Sep 2017, Johannes Weiner wrote:

> > True but we want to have the semantic reasonably understandable. And it
> > is quite hard to explain that the oom killer hasn't selected the largest
> > memcg just because it happened to be in a deeper hierarchy which has
> > been configured to cover a different resource.
> 
> Going back to Michal's example, say the user configured the following:
> 
>        root
>       /    \
>      A      D
>     / \
>    B   C
> 
> A global OOM event happens and we find this:
> - A > D
> - B, C, D are oomgroups
> 
> What the user is telling us is that B, C, and D are compound memory
> consumers. They cannot be divided into their task parts from a memory
> point of view.
> 
> However, the user doesn't say the same for A: the A subtree summarizes
> and controls aggregate consumption of B and C, but without groupoom
> set on A, the user says that A is in fact divisible into independent
> memory consumers B and C.
> 
> If we don't have to kill all of A, but we'd have to kill all of D,
> does it make sense to compare the two?
> 

No, I agree that we shouldn't compare sibling memory cgroups based on 
different criteria depending on whether group_oom is set or not.

I think it would be better to compare siblings based on the same criteria 
independent of group_oom if the user has mounted the hierarchy with the 
new mode (I think we all agree that the mount option is needed).  It's 
very easy to describe to the user and the selection is simple to 
understand.  Then, once a cgroup has been chosen as the victim cgroup: if 
group_oom is disabled, kill the process with the highest badness, allowing 
the user to influence that with /proc/pid/oom_score_adj just as today; if 
it is enabled, kill all eligible processes.
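As a sketch of that semantic (all names, usage numbers, and badness values
below are hypothetical illustrations, not taken from the patchset): descend
the hierarchy level by level, comparing siblings by the same aggregate-usage
criterion regardless of group_oom, and only consult group_oom once the
victim cgroup is fixed.

```python
# Hedged sketch of the proposed selection semantic. Cgroup is a toy stand-in
# for a memcg; usage numbers and task badness values are made up.

class Cgroup:
    def __init__(self, name, usage, group_oom=False, children=None, tasks=None):
        self.name = name
        self.usage = usage            # aggregate memory usage of the subtree
        self.group_oom = group_oom    # the memory.oom_group-style knob
        self.children = children or []
        self.tasks = tasks or []      # (pid, badness) pairs

def select_victim(root):
    """Descend level by level, always taking the largest sibling,
    using the same criterion whether or not group_oom is set."""
    cg = root
    while cg.children:
        cg = max(cg.children, key=lambda c: c.usage)
    return cg

def choose_kills(victim):
    """group_oom matters only after selection: kill everything if it is
    set, else only the single task with the highest badness."""
    if victim.group_oom:
        return [pid for pid, _ in victim.tasks]
    pid, _ = max(victim.tasks, key=lambda t: t[1])
    return [pid]
```

On Michal's earlier example (A > D, with B, C, D as oomgroups), this always
walks root -> A -> larger of {B, C}, and D's group_oom flag never changes
which subtree is compared against A.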

That, to me, is a very clear semantic and I believe it addresses Roman's 
use case.  My desire to have oom priorities amongst siblings is so that 
userspace can influence which cgroup is chosen, just as it can influence 
which process is chosen.

I see group_oom as a mechanism to be used when victim selection has 
already been done instead of something that should be considered in the 
policy of victim selection.

> Let's consider an extreme case of this conundrum:
> 
> 	root
>       /     \
>      A       B
>     /|\      |
>  A1-A1000    B1
> 
> Again we find:
> - A > B
> - A1 to A1000 and B1 are oomgroups
> But:
> - A1 to A1000 individually are tiny, B1 is huge
> 
> Going level by level, we'd pick A as the bigger hierarchy in the
> system, and then kill off one of the tiny groups A1 to A1000.
> 
> Conversely, going for biggest consumer regardless of hierarchy, we'd
> compare A1 to A1000 and B1, then pick B1 as the biggest single atomic
> memory consumer in the system and kill all its tasks.
> 

If we compare sibling memcgs independent of group_oom, we don't 
necessarily pick A unless it really is larger than B.
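Johannes' extreme case can be made concrete with a small sketch (the tree
shape and usage numbers are hypothetical: say A's 1000 tiny children total
1000 units while B's single child B1 holds 900). Level-by-level descent and
a flat comparison of atomic consumers then pick different victims:

```python
# Hedged sketch of the two traversal strategies from the extreme case.
# A has 1000 children of 1 unit each; B has one huge child of 900 units.
tree = {"root": ["A", "B"],
        "A": [f"A{i}" for i in range(1, 1001)],
        "B": ["B1"]}
usage = {f"A{i}": 1 for i in range(1, 1001)}
usage["B1"] = 900

def subtree_usage(node):
    """Aggregate usage of a node and everything below it."""
    return usage.get(node, 0) + sum(subtree_usage(c) for c in tree.get(node, []))

def level_by_level(node):
    """At each level take the sibling subtree with the largest aggregate."""
    while tree.get(node):
        node = max(tree[node], key=subtree_usage)
    return node            # ends at one of the tiny A1..A1000 groups

def flat_biggest_leaf(node):
    """Ignore hierarchy: compare all atomic consumers (leaves) directly."""
    leaves, stack = [], [node]
    while stack:
        n = stack.pop()
        kids = tree.get(n)
        if kids:
            stack.extend(kids)
        else:
            leaves.append(n)
    return max(leaves, key=subtree_usage)   # B1, the biggest single consumer
```

With these numbers, level_by_level() descends into A (1000 > 900) and kills
a 1-unit group, while flat_biggest_leaf() picks B1 directly.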
