Date:	Wed, 17 Feb 2016 19:36:36 +0900
From:	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To:	mhocko@...nel.org, akpm@...ux-foundation.org
Cc:	rientjes@...gle.com, mgorman@...e.de, oleg@...hat.com,
	torvalds@...ux-foundation.org, hughd@...gle.com, andrea@...nel.org,
	riel@...hat.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: [PATCH 6/6] mm,oom: wait for OOM victims when using oom_kill_allocating_task == 1

From 0b36864d4100ecbdcaa2fc2d1927c9e270f1b629 Mon Sep 17 00:00:00 2001
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Date: Wed, 17 Feb 2016 16:37:59 +0900
Subject: [PATCH 6/6] mm,oom: wait for OOM victims when using oom_kill_allocating_task == 1

Currently, out_of_memory() does not wait for existing TIF_MEMDIE threads
if /proc/sys/vm/oom_kill_allocating_task is set to 1. This can result in
killing more OOM victims than necessary. We can instead wait for the OOM
reaper to reap memory used by existing TIF_MEMDIE threads, if possible.
Without such waiting, if the OOM reaper is not available, the system
stays stalled on OOM until some OOM-unkillable thread performs a GFP_FS
allocation request and takes the oom_kill_allocating_task == 0 path.
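For reference, the policy this patch modifies is the long-standing sysctl
knob documented in Documentation/sysctl/vm.txt; nothing below is new to
this patch, it is just a quick way to inspect and (as root) flip the mode:

```shell
# 0 = scan the task list and kill the "worst" process (default);
# 1 = kill the task that triggered the OOM condition.
cat /proc/sys/vm/oom_kill_allocating_task

# Switching to "kill the allocating task" requires root:
# sysctl -w vm.oom_kill_allocating_task=1
# (equivalently: echo 1 > /proc/sys/vm/oom_kill_allocating_task)
```

The read is harmless on any Linux system; the write is commented out
since it changes global OOM behavior.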

This patch changes the oom_kill_allocating_task == 1 case to call
select_bad_process(), so that existing TIF_MEMDIE threads are waited for.
Since "mm,oom: exclude TIF_MEMDIE processes from candidates.",
"mm,oom: don't abort on exiting processes when selecting a victim.",
"mm,oom: exclude oom_task_origin processes if they are OOM victims.",
"mm,oom: exclude oom_task_origin processes if they are OOM-unkillable."
and "mm,oom: Re-enable OOM killer using timers." made sure that we never
wait for TIF_MEMDIE threads forever, waiting for TIF_MEMDIE threads in
the oom_kill_allocating_task == 1 case does not cause an OOM livelock.

After this patch, we can safely merge the OOM reaper in the simplest
form, without worrying about corner cases.

Signed-off-by: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
---
 mm/oom_kill.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index fba2c62..9cd1cd1 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -737,15 +737,6 @@ bool out_of_memory(struct oom_control *oc)
 		oc->nodemask = NULL;
 	check_panic_on_oom(oc, constraint, NULL);
 
-	if (sysctl_oom_kill_allocating_task && current->mm &&
-	    !oom_unkillable_task(current, NULL, oc->nodemask) &&
-	    current->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
-		get_task_struct(current);
-		oom_kill_process(oc, current, 0, totalpages, NULL,
-				 "Out of memory (oom_kill_allocating_task)");
-		return true;
-	}
-
 	p = select_bad_process(oc, &points, totalpages);
 	/* Found nothing?!?! Either we hang forever, or we panic. */
 	if (!p && !is_sysrq_oom(oc)) {
@@ -753,8 +744,18 @@ bool out_of_memory(struct oom_control *oc)
 		panic("Out of memory and no killable processes...\n");
 	}
 	if (p && p != (void *)-1UL) {
-		oom_kill_process(oc, p, points, totalpages, NULL,
-				 "Out of memory");
+		const char *message = "Out of memory";
+
+		if (sysctl_oom_kill_allocating_task && current->mm &&
+		    !oom_unkillable_task(current, NULL, oc->nodemask) &&
+		    current->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
+			put_task_struct(p);
+			p = current;
+			get_task_struct(p);
+			message = "Out of memory (oom_kill_allocating_task)";
+			points = 0;
+		}
+		oom_kill_process(oc, p, points, totalpages, NULL, message);
 		/*
 		 * Give the killed process a good chance to exit before trying
 		 * to allocate memory again.
-- 
1.8.3.1