Message-ID: <20180315171039.GB1853@castle.DHCP.thefacebook.com>
Date:   Thu, 15 Mar 2018 17:10:41 +0000
From:   Roman Gushchin <guro@...com>
To:     David Rientjes <rientjes@...gle.com>
CC:     Andrew Morton <akpm@...ux-foundation.org>,
        Michal Hocko <mhocko@...nel.org>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Tejun Heo <tj@...nel.org>, <cgroups@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
Subject: Re: [patch -mm v3 1/3] mm, memcg: introduce per-memcg oom policy
 tunable

Hello, David!

On Wed, Mar 14, 2018 at 01:58:59PM -0700, David Rientjes wrote:
> On Wed, 14 Mar 2018, Roman Gushchin wrote:
>  - Does not lock the entire system into a single methodology.  Users
>    working in a subtree can default to what they are used to: per-process
>    oom selection even though their subtree might be targeted by a system
>    policy level decision at the root.  This allows them flexibility to
>    organize their subtree intuitively for use with other controllers in a
>    single hierarchy.
> 
>    The real-world example is a user who currently organizes their subtree
>    for this purpose and has defined oom_score_adj appropriately and now
>    regresses if the admin mounts with the needless "groupoom" option.

I find this extremely confusing.

The problem is that an OOM policy independently defines how OOM events
within the corresponding scope are handled, not how that scope prefers
to be treated by OOM events coming from above.

As I've said, if you're inside a container, you can get OOM events of
different types, depending on settings which you don't even know about.
Sometimes oom_score_adj works, sometimes it doesn't.
Sometimes all processes are killed, sometimes not.
IMO, this adds nothing but mess.
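
Just to illustrate the per-process side of this: a workload typically
tunes its own badness with something like the sketch below, and with
mixed policies it can't tell whether that write will matter at all
(the value is an arbitrary example, not a recommendation):

/* Minimal sketch of the classic per-process knob. Whether it has any
 * effect depends on OOM policy settings above the container, which
 * the workload can't see. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/self/oom_score_adj", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "500\n");	/* volunteer as a preferred OOM victim */
	fclose(f);
	return 0;
}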

The mount option (which I'm not a big fan of either) was added only
to provide 100% backward compatibility, which was forced by Michal.
But I doubt that mixing the per-process and per-cgroup approaches
makes any sense.

> 
>  - Allows changing the oom policy at runtime without remounting the entire
>    cgroup fs.  Depending on how cgroups are going to be used, per-process 
>    vs cgroup-aware may be mandated separately.  This is a trait only of
>    the mem cgroup controller; the root-level oom policy is no different
>    from the subtree's and depends directly on how the subtree is organized.
>    If other controllers are already being used, requiring a remount to
>    change the system-wide oom policy is an unnecessary burden.
> 
>    The real-world example is systems software that either supports user
>    subtrees or strictly subtrees that it maintains itself.  While other
>    controllers are used, the mem cgroup oom policy can be changed at
>    runtime rather than requiring a remount and reorganizing other
>    controllers exactly as before.

Btw, what is the problem with remounting? You don't have to re-create
cgroups or anything like that; the operation is as trivial as adding a flag.
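
To be concrete, it should boil down to something like the sketch below
(assuming the unified hierarchy is mounted at /sys/fs/cgroup and the
option is the "groupoom" you mentioned; the shell equivalent would be
mount -o remount,groupoom):

/* Sketch only: remount the existing cgroup2 mount with an extra
 * option; no cgroups have to be re-created. The mount point and the
 * option name are assumptions. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* With MS_REMOUNT the source and fstype arguments are ignored;
	 * only the fs-specific option string matters. */
	if (mount(NULL, "/sys/fs/cgroup", NULL, MS_REMOUNT, "groupoom")) {
		perror("mount");
		return 1;
	}
	return 0;
}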

> 
>  - Can be extended to cgroup v1 if necessary.  There is no need for a
>    new cgroup v1 mount option and mem cgroup oom selection is not
>    dependent on any functionality provided by cgroup v2.  The policies
>    introduced here work exactly the same if used with cgroup v1.
> 
>    The real-world example is a cgroup configuration that hasn't had
>    the ability to move to cgroup v2 yet and still would like to use
>    cgroup-aware oom selection with a very trivial change to add the
>    memory.oom_policy file to the cgroup v1 filesystem.

I assume that the v1 interface is frozen.
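
For reference, the knob from the quoted patch would presumably be
driven per cgroup roughly as below; the cgroup path and the "cgroup"
policy string are my guesses from the patch description, not something
I'm endorsing for v1:

/* Rough sketch of poking the proposed per-memcg file; the mount path,
 * cgroup name and policy value here are assumptions, not part of any
 * merged interface. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/fs/cgroup/memory/workload/memory.oom_policy", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "cgroup\n");	/* ask for cgroup-aware victim selection */
	fclose(f);
	return 0;
}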

Thanks!
