Message-ID: <20180802080041.GB10808@dhcp22.suse.cz>
Date: Thu, 2 Aug 2018 10:00:41 +0200
From: Michal Hocko <mhocko@...nel.org>
To: David Rientjes <rientjes@...gle.com>
Cc: Roman Gushchin <guro@...com>, linux-mm@...ck.org,
Johannes Weiner <hannes@...xchg.org>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
Tejun Heo <tj@...nel.org>, kernel-team@...com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/3] introduce memory.oom.group
On Wed 01-08-18 14:51:25, David Rientjes wrote:
> On Tue, 31 Jul 2018, Roman Gushchin wrote:
>
> > > What's the plan with the cgroup aware oom killer? It has been sitting in
> > > the -mm tree for ages with no clear path to being merged.
> >
> > It's because of your nack, isn't it?
> > Everybody else seems to be fine with it.
> >
>
> If they are fine with it, I'm not sure they have tested it :) Killing
> entire cgroups needlessly for mempolicy oom kills that will not free
> memory on target nodes is the first regression they may notice.
I do not remember you mentioning this previously. Anyway, the
older implementation did consider the nodemask in memcg_oom_badness.
You are right that a cpuset allocation could needlessly select a memcg
with little or no memory on the target nodemask, which is something I
could have noticed during the review. If only I didn't have to spend all
my energy going through your repetitive arguments. Anyway, this would
be quite trivial to resolve in the same function by checking
node_isset(node, current->mems_allowed).
Thanks for your productive feedback again.
Skipping the rest, which is yet again repeating the same arguments and
doesn't bring anything new to the table.
--
Michal Hocko
SUSE Labs