Message-ID: <20170726135622.GS2981@dhcp22.suse.cz>
Date: Wed, 26 Jul 2017 15:56:22 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Roman Gushchin <guro@...com>
Cc: linux-mm@...ck.org, Vladimir Davydov <vdavydov.dev@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
David Rientjes <rientjes@...gle.com>,
Tejun Heo <tj@...nel.org>, kernel-team@...com,
cgroups@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [v4 1/4] mm, oom: refactor the TIF_MEMDIE usage
On Wed 26-07-17 14:27:15, Roman Gushchin wrote:
[...]
> @@ -656,13 +658,24 @@ static void mark_oom_victim(struct task_struct *tsk)
> struct mm_struct *mm = tsk->mm;
>
> WARN_ON(oom_killer_disabled);
> - /* OOM killer might race with memcg OOM */
> - if (test_and_set_tsk_thread_flag(tsk, TIF_MEMDIE))
> +
> + if (!cmpxchg(&tif_memdie_owner, NULL, current)) {
> + struct task_struct *t;
> +
> + rcu_read_lock();
> + for_each_thread(current, t)
> + set_tsk_thread_flag(t, TIF_MEMDIE);
> + rcu_read_unlock();
> + }
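For reference, the pattern this hunk implements, as a hypothetical
userspace sketch (the names below are made up and this is not kernel
code): the first task to win the cmpxchg becomes the single
tif_memdie_owner and only that task's threads get TIF_MEMDIE:

        /*
         * Hypothetical userspace sketch of the hunk above, using C11
         * atomics in place of the kernel's cmpxchg().
         */
        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stddef.h>

        #define TIF_MEMDIE 0x1

        struct task { int flags; };

        static _Atomic(struct task *) tif_memdie_owner;

        static bool mark_oom_victim_sketch(struct task *tsk)
        {
                struct task *expected = NULL;

                /* Mirrors cmpxchg(&tif_memdie_owner, NULL, current). */
                if (!atomic_compare_exchange_strong(&tif_memdie_owner,
                                                    &expected, tsk))
                        return false;   /* another task already owns it */

                /* The patch then sets the flag on every owner thread. */
                tsk->flags |= TIF_MEMDIE;
                return true;
        }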
I would really much rather see us limit the amount of memory reserves
oom victims can consume than build on top of the current hackish
approach of limiting the number of tasks, because the fundamental
problem is still there (a heavy multithreaded process can still deplete
the reserves completely).
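To illustrate what I mean, a hypothetical userspace sketch (all names
made up, watermark/ALLOC details omitted): each victim draws from a
bounded quota of the emergency reserve rather than getting an
unconditional per-thread watermark bypass:

        /*
         * Hypothetical sketch, not kernel code: bound what one victim
         * may take from the emergency reserves.
         */
        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>

        #define RESERVE_PAGES           64  /* total emergency reserve */
        #define VICTIM_QUOTA_PAGES      16  /* per-victim share of it */

        static atomic_int reserve_left = RESERVE_PAGES;

        struct victim {
                atomic_int quota_left;  /* shared by all victim threads */
        };

        /* One thread of an OOM victim asks for a single reserve page. */
        static bool victim_alloc_reserve_page(struct victim *v)
        {
                /* Charge the per-victim quota first... */
                if (atomic_fetch_sub(&v->quota_left, 1) <= 0) {
                        atomic_fetch_add(&v->quota_left, 1);
                        return false;   /* victim used up its share */
                }
                /* ...then the global reserve. */
                if (atomic_fetch_sub(&reserve_left, 1) <= 0) {
                        atomic_fetch_add(&reserve_left, 1);
                        atomic_fetch_add(&v->quota_left, 1);
                        return false;   /* reserves exhausted */
                }
                return true;
        }

        int main(void)
        {
                struct victim v = { .quota_left = VICTIM_QUOTA_PAGES };
                int got = 0;

                /* Even 1000 "threads" of one victim stop at the quota. */
                for (int i = 0; i < 1000; i++)
                        got += victim_alloc_reserve_page(&v);

                printf("victim got %d of %d reserve pages\n",
                       got, RESERVE_PAGES);
                return 0;
        }

Built like this, the number of threads holding the flag stops
mattering; only the quota does.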
Is there really any reason not to go with the existing patch I pointed
to last time around? You didn't seem to have any objections back then.
--
Michal Hocko
SUSE Labs