Message-ID: <712a319f-c9da-230a-f2cb-af980daff704@i-love.sakura.ne.jp>
Date: Thu, 2 Aug 2018 20:53:14 +0900
From: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
To: Michal Hocko <mhocko@...nel.org>
Cc: Roman Gushchin <guro@...com>, linux-mm@...ck.org,
Johannes Weiner <hannes@...xchg.org>,
David Rientjes <rientjes@...gle.com>,
Tejun Heo <tj@...nel.org>, kernel-team@...com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/3] mm, oom: introduce memory.oom.group
On 2018/08/02 20:21, Michal Hocko wrote:
> On Thu 02-08-18 19:53:13, Tetsuo Handa wrote:
>> On 2018/08/02 9:32, Roman Gushchin wrote:
> [...]
>>> +struct mem_cgroup *mem_cgroup_get_oom_group(struct task_struct *victim,
>>> +					     struct mem_cgroup *oom_domain)
>>> +{
>>> +	struct mem_cgroup *oom_group = NULL;
>>> +	struct mem_cgroup *memcg;
>>> +
>>> +	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
>>> +		return NULL;
>>> +
>>> +	if (!oom_domain)
>>> +		oom_domain = root_mem_cgroup;
>>> +
>>> +	rcu_read_lock();
>>> +
>>> +	memcg = mem_cgroup_from_task(victim);
>>
>> Isn't this racy? I guess that the memcg of this "victim" can change to
>> somewhere else from the one it belonged to when the final candidate was determined.
>
> How is this any different from the existing code? We select a victim and
> then kill it. The victim might move away and won't be part of the oom
> memcg anymore but we will still kill it. I do not remember this ever
> being a problem. Migration is a privileged operation. If you lose this
> restriction you shouldn't allow moving outside of the oom domain.
The existing code kills one process (plus any other processes sharing its mm).
But memory.oom.group kills multiple processes. Thus, whether the decision was
made based on the correct memcg becomes important.
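
To make the concern concrete, here is roughly what the rest of the quoted
function does, as far as I can read the patch (a paraphrase, not the exact
hunk): it walks from the victim's memcg up to oom_domain and remembers the
highest-level memcg with oom_group set. If mem_cgroup_from_task(victim)
returns a memcg which the victim has already left, the whole-group kill is
redirected to an unrelated subtree.

	/*
	 * Rough paraphrase of the remainder of mem_cgroup_get_oom_group()
	 * (not the exact patch text): walk from the victim's memcg up to
	 * oom_domain and remember the highest-level memcg with oom_group
	 * set.  A stale "memcg" here redirects the whole-group kill.
	 */
	for (; memcg; memcg = parent_mem_cgroup(memcg)) {
		if (memcg->oom_group)
			oom_group = memcg;

		if (memcg == oom_domain)
			break;
	}

	if (oom_group)
		css_get(&oom_group->css);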
>
>> This "victim" might have already passed exit_mm()/cgroup_exit() from do_exit().
>
> Why does this matter? The victim hasn't been killed yet, so if it exits
> on its own I do not think we really have to tear the whole cgroup down.
The existing code does not send SIGKILL if find_lock_task_mm() fails. Who can
guarantee that the victim has not already entered do_exit() by the time this
code is executed?
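
For reference, the existing guard I am referring to looks roughly like this
(from memory, not a verbatim copy of oom_kill_process()): once the victim has
passed exit_mm(), find_lock_task_mm() finds no thread with an mm and no
SIGKILL is sent.

	/*
	 * Roughly the existing behaviour (paraphrased): if every thread of
	 * the victim has already released its mm in exit_mm(), give up
	 * instead of sending SIGKILL.
	 */
	p = find_lock_task_mm(victim);
	if (!p) {
		put_task_struct(victim);
		return;
	}
	...
	task_unlock(p);

The new group-kill path needs to be prepared for the same situation: by the
time mem_cgroup_get_oom_group() runs, the victim may already be past
exit_mm()/cgroup_exit().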