Message-Id: <20190107143802.16847-2-mhocko@kernel.org>
Date: Mon, 7 Jan 2019 15:38:01 +0100
From: Michal Hocko <mhocko@...nel.org>
To: <linux-mm@...ck.org>
Cc: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Michal Hocko <mhocko@...e.com>
Subject: [PATCH 1/2] mm, oom: mark all killed tasks as oom victims
From: Michal Hocko <mhocko@...e.com>
Historically we have called mark_oom_victim only for the main task
selected as the oom victim, because oom victims have access to memory
reserves and granting that access to all killed tasks could deplete
memory reserves very quickly and cause even larger problems.
Since only partial access to memory reserves is allowed now, there is
no longer this risk, so all tasks killed along with the oom victim
can be marked as well.
The primary motivation for this is that process groups which do not
share signals would behave more like standard thread groups with
respect to oom handling (i.e. tsk_is_oom_victim will work the same
way for them).
- Use find_lock_task_mm to stabilize mm as suggested by Tetsuo
Signed-off-by: Michal Hocko <mhocko@...e.com>
---
mm/oom_kill.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index f0e8cd9edb1a..0246c7a4e44e 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -892,6 +892,7 @@ static void __oom_kill_process(struct task_struct *victim)
*/
rcu_read_lock();
for_each_process(p) {
+ struct task_struct *t;
if (!process_shares_mm(p, mm))
continue;
if (same_thread_group(p, victim))
@@ -911,6 +912,11 @@ static void __oom_kill_process(struct task_struct *victim)
if (unlikely(p->flags & PF_KTHREAD))
continue;
do_send_sig_info(SIGKILL, SEND_SIG_PRIV, p, PIDTYPE_TGID);
+ t = find_lock_task_mm(p);
+ if (!t)
+ continue;
+ mark_oom_victim(t);
+ task_unlock(t);
}
rcu_read_unlock();
--
2.20.1