Message-Id: <20190204103604.271982539@linuxfoundation.org>
Date: Mon, 4 Feb 2019 11:36:45 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
syzbot+7fbbfa368521945f0e3d@...kaller.appspotmail.com,
Shakeel Butt <shakeelb@...gle.com>,
Roman Gushchin <guro@...com>, Michal Hocko <mhocko@...e.com>,
David Rientjes <rientjes@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 3.18 30/31] mm, oom: fix use-after-free in oom_kill_process
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shakeel Butt <shakeelb@...gle.com>
commit cefc7ef3c87d02fc9307835868ff721ea12cc597 upstream.
A syzbot instance running on an upstream kernel found a use-after-free bug
in oom_kill_process(). On further inspection, it appears the process
selected to be oom-killed had already exited before reaching
read_lock(&tasklist_lock) in oom_kill_process(). More specifically,
tsk->usage is 1, owing to the get_task_struct() in oom_evaluate_task();
the put_task_struct() within the for_each_thread() loop then frees the
tsk, and for_each_thread() goes on to access the freed tsk. The easiest
fix is to do a get/put across the for_each_thread() on the selected task.
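
For illustration, here is a condensed sketch of the pre-fix loop described
above, following the structure of oom_kill_process() in mm/oom_kill.c. It
is a simplified fragment, not the literal upstream code, and the
oom_badness() arguments are taken from the surrounding function context.
If 'victim' still points at 'p' (whose only reference is the one taken in
oom_evaluate_task()), the put_task_struct(victim) inside the loop frees
'p', and the next for_each_thread() step dereferences freed memory:

	/*
	 * Condensed, illustrative fragment of the pre-fix pattern; not the
	 * literal upstream code. 'p' carries a single reference taken in
	 * oom_evaluate_task(); 'victim' initially points at 'p'.
	 */
	read_lock(&tasklist_lock);
	for_each_thread(p, t) {			/* iterator dereferences 'p'/'t' */
		list_for_each_entry(child, &t->children, sibling) {
			unsigned int child_points;

			child_points = oom_badness(child, memcg, nodemask,
						   totalpages);
			if (child_points > victim_points) {
				put_task_struct(victim);	/* if victim == p, this can
								 * drop the last reference
								 * and free 'p' */
				victim = child;
				victim_points = child_points;
				get_task_struct(victim);
			}
		}
	}					/* next step over freed 'p': use-after-free */
	read_unlock(&tasklist_lock);

The fix in the diff below simply brackets this loop with
get_task_struct(p)/put_task_struct(p) so 'p' stays alive for the whole
iteration.
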
The next question is whether we should continue with the oom-kill when the
previously selected task has already exited. Before adding more complexity
and heuristics, though, let's ask why we even look at the children of the
oom-kill selected task. select_bad_process() has already selected the
worst process in the system/memcg; due to the race, the selected process
might not be the worst at kill time, but does that matter? Userspace can
use the oom_score_adj interface to prefer that children be killed before
the parent. I looked at the history, but this behavior seems to predate
the git history.
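
As a concrete illustration of the oom_score_adj interface mentioned above
(a hedged userspace sketch, not part of this patch; the value 500 is
arbitrary), a child can mark itself as the preferred OOM victim by raising
its /proc/<pid>/oom_score_adj, which accepts values from -1000 to 1000:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

/* Write 'adj' (-1000..1000) to /proc/<pid>/oom_score_adj so the OOM
 * killer prefers (positive) or avoids (negative) this process. */
static int set_oom_score_adj(pid_t pid, int adj)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/oom_score_adj", (int)pid);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%d", adj);
	return fclose(f);
}

int main(void)
{
	pid_t child = fork();

	if (child == 0) {
		/* Child: make itself more attractive to the OOM killer. */
		set_oom_score_adj(getpid(), 500);
		pause();	/* stand-in for the real (memory-hungry) work */
		return 0;
	}
	/* Parent keeps the default adjustment (0), so the child is
	 * preferred over the parent if the OOM killer runs. */
	return child > 0 ? 0 : 1;
}

Which task is actually chosen still depends on the overall badness score,
but a higher oom_score_adj biases selection toward the child.
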
Link: http://lkml.kernel.org/r/20190121215850.221745-1-shakeelb@google.com
Reported-by: syzbot+7fbbfa368521945f0e3d@...kaller.appspotmail.com
Fixes: 6b0c81b3be11 ("mm, oom: reduce dependency on tasklist_lock")
Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
Reviewed-by: Roman Gushchin <guro@...com>
Acked-by: Michal Hocko <mhocko@...e.com>
Cc: David Rientjes <rientjes@...gle.com>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
Cc: <stable@...r.kernel.org>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
mm/oom_kill.c | 8 ++++++++
1 file changed, 8 insertions(+)
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -464,6 +464,13 @@ void oom_kill_process(struct task_struct
* still freeing memory.
*/
read_lock(&tasklist_lock);
+
+ /*
+ * The task 'p' might have already exited before reaching here. The
+ * put_task_struct() will free task_struct 'p' while the loop still tries
+ * to access the fields of 'p', so get an extra reference.
+ */
+ get_task_struct(p);
for_each_thread(p, t) {
list_for_each_entry(child, &t->children, sibling) {
unsigned int child_points;
@@ -483,6 +490,7 @@ void oom_kill_process(struct task_struct
}
}
}
+ put_task_struct(p);
read_unlock(&tasklist_lock);
p = find_lock_task_mm(victim);