Message-ID: <5ee34fc6-1485-34f8-8790-903ddabaa809@i-love.sakura.ne.jp>
Date: Tue, 29 Jan 2019 19:34:00 +0900
From: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Michal Hocko <mhocko@...nel.org>,
Arkadiusz Miśkiewicz <a.miskiewicz@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Tejun Heo <tj@...nel.org>, cgroups@...r.kernel.org,
Aleksa Sarai <asarai@...e.de>, Jay Kamat <jgkamat@...com>,
Roman Gushchin <guro@...com>, linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH v3] oom, oom_reaper: do not enqueue same task twice
Johannes Weiner wrote:
> On Sun, Jan 27, 2019 at 11:57:38PM +0900, Tetsuo Handa wrote:
> > This bug existed since the OOM reaper became invokable from
> > task_will_free_mem(current) path in out_of_memory() in Linux 4.7,
> > but memcg's group oom killing made it easier to trigger this bug by
> > calling wake_oom_reaper() on the same task from one out_of_memory()
> > request.
>
> This changelog seems a little terse compared to how tricky this is.
>
> Can you please include an explanation here *how* this bug is possible?
> I.e. the race condition that causes the function to be entered twice
> and the existing re-entrance check in there to fail.
OK. Here is an updated patch; only the changelog has changed.
I hope this provides enough information for the stable kernel maintainers.
----------
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Subject: oom, oom_reaper: do not enqueue same task twice
Arkadiusz reported that enabling memcg's group oom killing causes strange
memcg statistics: a memcg shows no tasks even though the number of tasks
in that memcg is not 0. It turned out that there is a bug in
wake_oom_reaper() which allows enqueuing the same task twice, and the
resulting refcount leak makes it impossible to decrease the number of
tasks in that memcg.
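
For reference, the enqueue path being fixed looks roughly like this
(condensed from mm/oom_kill.c before this patch; tracing omitted):

  static struct task_struct *oom_reaper_list; /* singly linked via tsk->oom_reaper_list */

  static void wake_oom_reaper(struct task_struct *tsk)
  {
          /* tsk is already queued? */
          if (tsk == oom_reaper_list || tsk->oom_reaper_list)
                  return;

          get_task_struct(tsk); /* dropped by the OOM reaper after reaping */

          spin_lock(&oom_reaper_lock);
          tsk->oom_reaper_list = oom_reaper_list; /* push at the list head */
          oom_reaper_list = tsk;
          spin_unlock(&oom_reaper_lock);
          wake_up(&oom_reaper_wait);
  }

The "already queued?" check misses a task that is on the list but at its
tail: such a task is not the list head, and its own oom_reaper_list
pointer is still NULL.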
This bug has existed since the OOM reaper became invokable from the
task_will_free_mem(current) path in out_of_memory() in Linux 4.7, as the
following race shows,

  T1@P1     |T2@P1     |T3@P1     |OOM reaper
  ----------+----------+----------+------------
                                   # Processing an OOM victim in a different memcg domain.
                        try_charge()
                          mem_cgroup_out_of_memory()
                            mutex_lock(&oom_lock)
             try_charge()
               mem_cgroup_out_of_memory()
                 mutex_lock(&oom_lock)
  try_charge()
    mem_cgroup_out_of_memory()
      mutex_lock(&oom_lock)
                            out_of_memory()
                              oom_kill_process(P1)
                                do_send_sig_info(SIGKILL, @P1)
                                  mark_oom_victim(T1@P1)
                                  wake_oom_reaper(T1@P1) # T1@P1 is enqueued.
                            mutex_unlock(&oom_lock)
                 out_of_memory()
                   mark_oom_victim(T2@P1)
                   wake_oom_reaper(T2@P1) # T2@P1 is enqueued.
                 mutex_unlock(&oom_lock)
      out_of_memory()
        mark_oom_victim(T1@P1)
        wake_oom_reaper(T1@P1) # T1@P1 is enqueued again due to oom_reaper_list == T2@P1 && T1@P1->oom_reaper_list == NULL.
      mutex_unlock(&oom_lock)
                                   # Completed processing an OOM victim in a different memcg domain.
                                   spin_lock(&oom_reaper_lock)
                                   # T1@P1 is dequeued.
                                   spin_unlock(&oom_reaper_lock)

but memcg's group oom killing made it easier to trigger, by calling
wake_oom_reaper() on the same task from a single out_of_memory() request.
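
The list corruption can be replayed in a userspace toy model (a sketch
only; the names mirror mm/oom_kill.c, but this is ordinary C, not kernel
code):

  #include <stdio.h>
  #include <stddef.h>

  struct task { const char *name; struct task *oom_reaper_list; };

  static struct task *oom_reaper_list; /* list head, as in mm/oom_kill.c */

  static void wake_oom_reaper(struct task *tsk)
  {
          /* The pre-patch duplicate check. */
          if (tsk == oom_reaper_list || tsk->oom_reaper_list)
                  return;
          tsk->oom_reaper_list = oom_reaper_list;
          oom_reaper_list = tsk;
  }

  int main(void)
  {
          struct task t1 = { "T1@P1", NULL }, t2 = { "T2@P1", NULL };
          struct task *p;
          int i;

          wake_oom_reaper(&t1); /* list: T1 */
          wake_oom_reaper(&t2); /* list: T2 -> T1 */
          /* T1 is neither the head nor has a non-NULL next pointer,
           * so the duplicate check passes and T1 is pushed again: */
          wake_oom_reaper(&t1); /* list: T1 -> T2 -> T1 -> ... (a cycle) */

          for (i = 0, p = oom_reaper_list; i < 5; i++, p = p->oom_reaper_list)
                  printf("%s -> ", p->name); /* T1@P1 -> T2@P1 -> T1@P1 -> ... */
          printf("...\n");
          return 0;
  }

The third call pushes the same task twice and turns the list into a
cycle; this double enqueue is the source of the refcount leak described
above.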
Fix this bug using the approach used by commit 855b018325737f76 ("oom,
oom_reaper: disable oom_reaper for oom_kill_allocating_task"). As a side
effect, this patch also avoids enqueuing multiple threads sharing memory
via the task_will_free_mem(current) path.
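
For comparison, a minimal sketch of the post-patch behavior under the
same T1@P1, T2@P1, T1@P1 sequence, again as a userspace model (the kernel
uses test_and_set_bit() on tsk->signal->oom_mm->flags; C11 atomics stand
in for the kernel bitops here):

  #include <stdio.h>
  #include <stdatomic.h>

  struct mm { atomic_flag oom_reap_queued; }; /* stands in for MMF_OOM_REAP_QUEUED */
  struct task { const char *name; struct mm *mm; };

  static void wake_oom_reaper(struct task *tsk)
  {
          /* Atomic, per-mm "already queued?" check: of all threads
           * sharing this mm, only the first caller enqueues. */
          if (atomic_flag_test_and_set(&tsk->mm->oom_reap_queued))
                  return;
          printf("%s enqueued\n", tsk->name);
  }

  int main(void)
  {
          struct mm p1_mm = { ATOMIC_FLAG_INIT };
          struct task t1 = { "T1@P1", &p1_mm }, t2 = { "T2@P1", &p1_mm };

          wake_oom_reaper(&t1); /* enqueued: first caller for P1's mm */
          wake_oom_reaper(&t2); /* skipped: same mm is already queued */
          wake_oom_reaper(&t1); /* skipped: no duplicate, no cycle */
          return 0;
  }

Because the flag lives on the mm rather than on the task, the second
thread (T2@P1) is filtered out as well, which is the side effect
mentioned above.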
Fixes: af8e15cc85a25315 ("oom, oom_reaper: do not enqueue task if it is on the oom_reaper_list head")
Signed-off-by: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Reported-by: Arkadiusz Miśkiewicz <arekm@...en.pl>
Tested-by: Arkadiusz Miśkiewicz <arekm@...en.pl>
Acked-by: Michal Hocko <mhocko@...e.com>
Acked-by: Roman Gushchin <guro@...com>
Cc: Tejun Heo <tj@...nel.org>
Cc: Aleksa Sarai <asarai@...e.de>
Cc: Jay Kamat <jgkamat@...com>
Cc: Johannes Weiner <hannes@...xchg.org>
---
 include/linux/sched/coredump.h | 1 +
 mm/oom_kill.c                  | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
index ec912d0..ecdc654 100644
--- a/include/linux/sched/coredump.h
+++ b/include/linux/sched/coredump.h
@@ -71,6 +71,7 @@ static inline int get_dumpable(struct mm_struct *mm)
 #define MMF_HUGE_ZERO_PAGE   23      /* mm has ever used the global huge zero page */
 #define MMF_DISABLE_THP      24      /* disable THP for all VMAs */
 #define MMF_OOM_VICTIM       25      /* mm is the oom victim */
+#define MMF_OOM_REAP_QUEUED  26      /* mm was queued for oom_reaper */
 #define MMF_DISABLE_THP_MASK (1 << MMF_DISABLE_THP)
 
 #define MMF_INIT_MASK        (MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index f0e8cd9..059e617 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -647,8 +647,8 @@ static int oom_reaper(void *unused)
 
 static void wake_oom_reaper(struct task_struct *tsk)
 {
-        /* tsk is already queued? */
-        if (tsk == oom_reaper_list || tsk->oom_reaper_list)
+        /* mm is already queued? */
+        if (test_and_set_bit(MMF_OOM_REAP_QUEUED, &tsk->signal->oom_mm->flags))
                 return;
 
         get_task_struct(tsk);
--
1.8.3.1