[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20170925202442.lmcmvqwy2jj2tr5h@dhcp22.suse.cz>
Date: Mon, 25 Sep 2017 22:25:21 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Roman Gushchin <guro@...com>
Cc: Johannes Weiner <hannes@...xchg.org>, Tejun Heo <tj@...nel.org>,
kernel-team@...com, David Rientjes <rientjes@...gle.com>,
linux-mm@...ck.org, Vladimir Davydov <vdavydov.dev@...il.com>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
Andrew Morton <akpm@...ux-foundation.org>,
cgroups@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [v8 0/4] cgroup-aware OOM killer
On Mon 25-09-17 19:15:33, Roman Gushchin wrote:
[...]
> I'm not against this model, as I've said before. It feels logical,
> and will work fine in most cases.
>
> In this case we can drop any mount/boot options, because it preserves
> the existing behavior in the default configuration. A big advantage.
I am not sure about this. We still need an opt-in, ragardless, because
selecting the largest process from the largest memcg != selecting the
largest task (just consider memcgs with many processes example).
> The only thing, I'm slightly concerned, that due to the way how we calculate
> the memory footprint for tasks and memory cgroups, we will have a number
> of weird edge cases. For instance, when putting a single process into
> the group_oom memcg will alter the oom_score significantly and result
> in significantly different chances to be killed. An obvious example will
> be a task with oom_score_adj set to any non-extreme (other than 0 and -1000)
> value, but it can also happen in case of constrained alloc, for instance.
I am not sure I understand. Are you talking about root memcg comparing
to other memcgs?
--
Michal Hocko
SUSE Labs
Powered by blists - more mailing lists