Message-ID: <20170914134014.wqemev2kgychv7m5@dhcp22.suse.cz>
Date: Thu, 14 Sep 2017 15:40:14 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Roman Gushchin <guro@...com>
Cc: David Rientjes <rientjes@...gle.com>, linux-mm@...ck.org,
Vladimir Davydov <vdavydov.dev@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
Andrew Morton <akpm@...ux-foundation.org>,
Tejun Heo <tj@...nel.org>, kernel-team@...com,
cgroups@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [v8 0/4] cgroup-aware OOM killer
On Wed 13-09-17 14:56:07, Roman Gushchin wrote:
> On Wed, Sep 13, 2017 at 02:29:14PM +0200, Michal Hocko wrote:
[...]
> > I strongly believe that comparing only leaf memcgs
> > is more straightforward and does not lead to the unexpected results
> > mentioned before (killing a small memcg which is part of a larger
> > sub-hierarchy).
>
> One of the two main goals of this patchset is to introduce cgroup-level
> fairness: bigger cgroups should be affected more than smaller ones,
> regardless of the size of the tasks inside. I believe the same
> principle should be used for cgroups.
Yes, bigger cgroups should be preferred, but I fail to see why bigger
hierarchies should be considered as well if they are not kill-all. And
whether non-leaf memcgs should allow kill-all at all is not entirely
clear to me. What would be the usecase?
Consider that it might not be your choice (as a user) how deep your
leaf memcg sits. I can already see people complaining that their memcg
was killed just because it was one level deeper in the hierarchy...
I would really start simple: only allow kill-all on leaf memcgs and
only compare leaf memcgs & the root. If we ever need to kill whole
hierarchies, then allow kill-all on intermediate memcgs as well, and
then consider cumulative consumption only for those that have kill-all
enabled.
Or am I missing a reasonable usecase that would suffer from such
semantics?
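
For illustration only, here is how I would model that selection in a
standalone userspace sketch (this is not a patch; all struct fields,
helpers and names below are made up): leaf memcgs and the root compete
by their own usage, and an intermediate memcg competes, by cumulative
usage and as a single victim, only once kill-all has been explicitly
enabled on it.

	/*
	 * Not kernel code: a minimal userspace sketch of the semantics
	 * described above. Leaf memcgs (and the root, by its own usage)
	 * compete individually; an intermediate memcg competes -- by
	 * cumulative usage, as one victim -- only if kill-all is set.
	 */
	#include <stdio.h>
	#include <stdbool.h>

	struct memcg {
		const char *name;
		unsigned long usage;		/* pages charged directly */
		bool kill_all;			/* kill-all knob enabled? */
		struct memcg *children[4];	/* NULL terminated */
	};

	static bool is_leaf(const struct memcg *cg)
	{
		return !cg->children[0];
	}

	static unsigned long cumulative_usage(const struct memcg *cg)
	{
		unsigned long sum = cg->usage;
		int i;

		for (i = 0; i < 4 && cg->children[i]; i++)
			sum += cumulative_usage(cg->children[i]);
		return sum;
	}

	static void pick_victim(const struct memcg *cg,
				const struct memcg **victim,
				unsigned long *best)
	{
		int i;

		if (cg->kill_all) {
			/* the whole sub-tree competes and dies as one unit */
			unsigned long score = cumulative_usage(cg);

			if (score > *best) {
				*best = score;
				*victim = cg;
			}
			return;		/* do not descend any further */
		}
		if (is_leaf(cg)) {
			if (cg->usage > *best) {
				*best = cg->usage;
				*victim = cg;
			}
			return;
		}
		/* plain intermediate memcg: not a candidate itself */
		for (i = 0; i < 4 && cg->children[i]; i++)
			pick_victim(cg->children[i], victim, best);
	}

	int main(void)
	{
		struct memcg small = { .name = "deep/leaf", .usage = 100 };
		struct memcg big = { .name = "flat/leaf", .usage = 300 };
		struct memcg mid = { .name = "deep", .children = { &small } };
		struct memcg root = { .name = "/", .usage = 50,
				      .children = { &mid, &big } };
		const struct memcg *victim = NULL;
		unsigned long best = 0;

		/* the root competes by its own usage in this proposal */
		if (root.usage > best) {
			best = root.usage;
			victim = &root;
		}
		pick_victim(&root, &victim, &best);

		/* flat/leaf (300) beats deep/leaf (100); depth is irrelevant */
		printf("victim: %s (score %lu)\n", victim->name, best);
		return 0;
	}

The point of the sketch is the early return on kill-all: once an
intermediate memcg competes, its leaves no longer compete individually,
and plain intermediates never do.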
--
Michal Hocko
SUSE Labs