Message-ID: <20171004195110.GA18900@castle>
Date: Wed, 4 Oct 2017 20:51:10 +0100
From: Roman Gushchin <guro@...com>
To: Johannes Weiner <hannes@...xchg.org>
CC: <linux-mm@...ck.org>, Michal Hocko <mhocko@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
David Rientjes <rientjes@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Tejun Heo <tj@...nel.org>, <kernel-team@...com>,
<cgroups@...r.kernel.org>, <linux-doc@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [v10 3/6] mm, oom: cgroup-aware OOM killer
On Wed, Oct 04, 2017 at 03:27:20PM -0400, Johannes Weiner wrote:
> On Wed, Oct 04, 2017 at 04:46:35PM +0100, Roman Gushchin wrote:
> > Traditionally, the OOM killer operates at the process level.
> > Under OOM conditions, it finds the process with the highest oom score
> > and kills it.
> >
> > This behavior doesn't suit systems with many running
> > containers well:
> >
> > 1) There is no fairness between containers. A small container with
> > a few large processes will be chosen over a large one with a huge
> > number of small processes.
> >
> > 2) Containers often do not expect that some random process inside
> > will be killed. In many cases a much safer behavior is to kill
> > all tasks in the container. Traditionally, this was implemented
> > in userspace, but doing it in the kernel has some advantages,
> > especially in the case of a system-wide OOM.
> >
> > To address these issues, the cgroup-aware OOM killer is introduced.
> >
> > Under OOM conditions, it looks for the biggest leaf memory cgroup
> > and kills the biggest task belonging to it. The following patches
> > will extend this functionality to consider non-leaf memory cgroups
> > as well, and also provide the ability to kill all tasks belonging
> > to the victim cgroup.
> >
> > The root cgroup is treated as a leaf memory cgroup, so its score
> > is compared with those of leaf memory cgroups.
> > Due to the memcg statistics implementation, a special algorithm
> > is used for estimating its oom_score: we define it as the maximum
> > oom_score of the belonging tasks.
> >
> > Signed-off-by: Roman Gushchin <guro@...com>
> > Cc: Michal Hocko <mhocko@...nel.org>
> > Cc: Vladimir Davydov <vdavydov.dev@...il.com>
> > Cc: Johannes Weiner <hannes@...xchg.org>
> > Cc: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
> > Cc: David Rientjes <rientjes@...gle.com>
> > Cc: Andrew Morton <akpm@...ux-foundation.org>
> > Cc: Tejun Heo <tj@...nel.org>
> > Cc: kernel-team@...com
> > Cc: cgroups@...r.kernel.org
> > Cc: linux-doc@...r.kernel.org
> > Cc: linux-kernel@...r.kernel.org
> > Cc: linux-mm@...ck.org
>
> This looks good to me.
>
> Acked-by: Johannes Weiner <hannes@...xchg.org>
>
> I just have one question:
>
> > @@ -828,6 +828,12 @@ static void __oom_kill_process(struct task_struct *victim)
> > struct mm_struct *mm;
> > bool can_oom_reap = true;
> >
> > + if (is_global_init(victim) || (victim->flags & PF_KTHREAD) ||
> > + victim->signal->oom_score_adj == OOM_SCORE_ADJ_MIN) {
> > + put_task_struct(victim);
> > + return;
> > + }
> > +
> > p = find_lock_task_mm(victim);
> > if (!p) {
> > put_task_struct(victim);
>
> Is this necessary? The callers of this function use oom_badness() to
> find a victim, and that filters init, kthread, OOM_SCORE_ADJ_MIN.
It is. __oom_kill_process() is also used to kill all processes belonging
to the selected memory cgroup, so we should perform these checks there
to avoid killing unkillable processes (init, kernel threads, and tasks
with oom_score_adj set to OOM_SCORE_ADJ_MIN).
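
Roughly, the memcg kill path in the follow-up patch looks like the
sketch below (names and details are illustrative, not the exact code):
every task in the victim memcg is handed to __oom_kill_process(), so
the check quoted above is what keeps init, kernel threads and
OOM_SCORE_ADJ_MIN tasks alive on that path.

	/*
	 * Illustrative sketch only: kill every task in the victim
	 * memcg by passing each one to __oom_kill_process().
	 */
	static int oom_kill_memcg_member(struct task_struct *task, void *unused)
	{
		/* __oom_kill_process() drops this reference */
		get_task_struct(task);
		__oom_kill_process(task);
		return 0;	/* continue iterating */
	}

	/* called once a victim memcg has been selected */
	static void oom_kill_memcg_tasks(struct mem_cgroup *memcg)
	{
		mem_cgroup_scan_tasks(memcg, oom_kill_memcg_member, NULL);
	}

So while the regular per-process path filters such tasks out via
oom_badness(), the memcg path reaches __oom_kill_process() directly
and relies on the check there.
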
Thanks!