Message-ID: <20171005111230.i7am3patptvalcat@dhcp22.suse.cz>
Date: Thu, 5 Oct 2017 13:12:30 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Roman Gushchin <guro@...com>
Cc: Shakeel Butt <shakeelb@...gle.com>, Linux MM <linux-mm@...ck.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
David Rientjes <rientjes@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Tejun Heo <tj@...nel.org>, kernel-team@...com,
Cgroups <cgroups@...r.kernel.org>, linux-doc@...r.kernel.org,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [v10 3/6] mm, oom: cgroup-aware OOM killer
On Thu 05-10-17 11:27:07, Roman Gushchin wrote:
> On Wed, Oct 04, 2017 at 02:24:26PM -0700, Shakeel Butt wrote:
[...]
> > Sorry about the confusion. There are two things. First, should we do a
> > css_get on the newly selected memcg within the for loop when we still
> > have a reference to it?
>
> We're holding rcu_read_lock, which should be enough. We bump the css
> reference counter just before releasing the rcu lock.
yes
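
I.e. something along these lines (just a sketch of the refcounting
pattern, not the patch code; the loop shape and the 'chosen' bookkeeping
are illustrative):

	struct mem_cgroup *iter, *chosen = NULL;

	rcu_read_lock();
	for_each_mem_cgroup_tree(iter, root) {
		/* evaluate iter, remember the best candidate in 'chosen' */
	}
	if (chosen)
		css_get(&chosen->css);	/* pin it beyond rcu_read_unlock() */
	rcu_read_unlock();

The css reference keeps the selected memcg alive once the RCU read-side
critical section ends, so the actual killing can happen outside of it.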
> >
> > Second, for the OFFLINE memcg, you are right that oom_evaluate_memcg()
> > will return 0 for offlined memcgs. Maybe there is no need to call
> > oom_evaluate_memcg() for offlined memcgs at all.
>
> Sounds like a good optimization, which can be done on top of the current
> patchset.
You could achieve this by checking whether a memcg has tasks rather than
explicitly checking for children memcgs, as I've already suggested.
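
Something along these lines (a sketch only; memcg_has_tasks() is a
hypothetical placeholder for whatever populated/tasks check the final
version would use, and the call into the scoring code is elided):

	for_each_mem_cgroup_tree(iter, root) {
		if (!memcg_has_tasks(iter))	/* hypothetical helper */
			continue;		/* no victim in here, skip scoring */
		/* ... call oom_evaluate_memcg() for this memcg ... */
	}

A memcg without any tasks (offlined ones included) cannot contribute a
victim, so skipping it gives you the optimization without an explicit
children/offline check.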
--
Michal Hocko
SUSE Labs