Message-Id: <201512301533.JDJ18237.QOFOMVSFtHOJLF@I-love.SAKURA.ne.jp>
Date:	Wed, 30 Dec 2015 15:33:47 +0900
From:	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To:	mhocko@...nel.org, akpm@...ux-foundation.org
Cc:	mgorman@...e.de, rientjes@...gle.com,
	torvalds@...ux-foundation.org, oleg@...hat.com, hughd@...gle.com,
	andrea@...nel.org, riel@...hat.com, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: [RFC][PATCH] sysrq: ensure manual invocation of the OOM killer under OOM livelock

From 7fcac2054b33dc3df6c5915a58f232b9b80bb1e6 Mon Sep 17 00:00:00 2001
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Date: Wed, 30 Dec 2015 15:24:40 +0900
Subject: [RFC][PATCH] sysrq: ensure manual invocation of the OOM killer under OOM livelock

This patch is similar to commit 373ccbe5927034b5 ("mm, vmstat: allow
WQ concurrency to discover memory reclaim doesn't make any progress"),
but applies the same idea to SysRq-f.

SysRq-f is a way to reclaim memory by manually invoking the OOM
killer. Therefore, it needs to remain invokable even when the system
is looping under an OOM livelock condition.

While the "mm,oom: Always sleep before retrying." patch already makes
sure that workqueue items get a chance to run, allocating a dedicated
workqueue only for SysRq-f might be too wasteful, given that the OOM
reaper kernel thread exists and will be sitting idle at exactly the
time we need SysRq-f due to an OOM livelock condition.

I wish we had a kernel thread that performs the OOM-kill operation.
Maybe we can extend the OOM reaper kernel thread to do it.
What do you think? (A standalone sketch of the rescuer-backed
workqueue pattern is included below, after the patch.)

Signed-off-by: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
---
 drivers/tty/sysrq.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c
index e513940..55407c9 100644
--- a/drivers/tty/sysrq.c
+++ b/drivers/tty/sysrq.c
@@ -373,11 +373,12 @@ static void moom_callback(struct work_struct *ignored)
 	mutex_unlock(&oom_lock);
 }
 
+static struct workqueue_struct *sysrq_moom_wq;
 static DECLARE_WORK(moom_work, moom_callback);
 
 static void sysrq_handle_moom(int key)
 {
-	schedule_work(&moom_work);
+	queue_work(sysrq_moom_wq, &moom_work);
 }
 static struct sysrq_key_op sysrq_moom_op = {
 	.handler	= sysrq_handle_moom,
@@ -1123,6 +1124,7 @@ static inline void sysrq_init_procfs(void)
 static int __init sysrq_init(void)
 {
 	sysrq_init_procfs();
+	sysrq_moom_wq = alloc_workqueue("sysrq", WQ_FREEZABLE|WQ_MEM_RECLAIM, 0);
 
 	if (sysrq_on())
 		sysrq_register_handler();
-- 
1.8.3.1
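
For reference, here is a minimal, self-contained sketch (not part of
the patch above; the demo_* names are made up for illustration) of the
rescuer-backed workqueue pattern this relies on: WQ_MEM_RECLAIM gives
the queue a dedicated rescuer thread, so a queued item can still run
when the kernel cannot create new worker threads under memory
pressure. The fallback to the system workqueue when allocation fails
is an assumption added here, not something the patch does.

/*
 * Illustrative sketch only: allocate a rescuer-backed workqueue so a
 * work item is guaranteed forward progress even when worker threads
 * cannot be created due to memory pressure.
 */
#include <linux/workqueue.h>
#include <linux/init.h>
#include <linux/printk.h>

static struct workqueue_struct *demo_moom_wq;

static void demo_moom_callback(struct work_struct *ignored)
{
	/* The real patch calls out_of_memory() under oom_lock here. */
	pr_info("manual OOM-kill work item ran\n");
}

static DECLARE_WORK(demo_moom_work, demo_moom_callback);

static int __init demo_moom_init(void)
{
	/*
	 * WQ_MEM_RECLAIM creates a dedicated rescuer thread for this
	 * queue; WQ_FREEZABLE keeps it quiet across suspend/resume.
	 */
	demo_moom_wq = alloc_workqueue("demo_moom",
				       WQ_FREEZABLE | WQ_MEM_RECLAIM, 0);
	return 0;
}

static void demo_moom_trigger(void)
{
	/*
	 * Added assumption: fall back to the system workqueue if the
	 * dedicated queue could not be allocated at init time.
	 */
	if (demo_moom_wq)
		queue_work(demo_moom_wq, &demo_moom_work);
	else
		schedule_work(&demo_moom_work);
}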