Date:   Thu, 11 Jan 2018 18:50:33 +0300
From:   Kirill Tkhai <ktkhai@...tuozzo.com>
To:     linux-kernel@...r.kernel.org, gregkh@...uxfoundation.org,
        jslaby@...e.com, viro@...iv.linux.org.uk, keescook@...omium.org,
        serge@...lyn.com, james.l.morris@...cle.com, luto@...nel.org,
        john.johansen@...onical.com, oleg@...hat.com, mingo@...nel.org,
        akpm@...ux-foundation.org, mhocko@...e.com, peterz@...radead.org,
        ktkhai@...tuozzo.com
Subject: [PATCH 4/4] tty: Use RCU read lock to iterate tasks in __do_SAK()

Several efforts were made long ago to make __do_SAK() run in
process context, but that does not solve the problem completely.
Since __do_SAK() may hold tasklist_lock for a long time, concurrent
processes waiting for the write lock with interrupts disabled
(e.g., when forking) end up in the same situation as if __do_SAK()
were executed in interrupt context. I've observed several hard
lockups on a 3.10 kernel running 200 containers, caused by the long
duration of copy_process()->write_lock_irq() after SAK was sent to
a tty. The current mainline kernel has the same problem. This patch
solves the problem by making __do_SAK() less greedy about
tasklist_lock.
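
For illustration only (not part of the patch), the contention
pattern looks roughly like this:

	/* CPU 0: old __do_SAK(), long read-side hold */
	read_lock(&tasklist_lock);
	for_each_process(p)
		iterate_fd(p->files, 0, this_tty, tty); /* O(tasks * fds) */
	read_unlock(&tasklist_lock);

	/* CPU 1: copy_process(), spins with interrupts disabled
	 * until all readers are gone -- this is what the hard
	 * lockup detector eventually fires on */
	write_lock_irq(&tasklist_lock);
	/* link the new child into the task list */
	write_unlock_irq(&tasklist_lock);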

The solution is to use RCU to iterate processes and files. Task
list integrity is the only reason we took tasklist_lock before, as
the tty subsystem primitives mostly take it for reading as well
(e.g., __proc_set_tty), and the RCU read lock is enough for that.
tasklist_lock is still taken for a short duration at the end of the
walk, to make sure we have iterated the whole list and do not race
with newly forked children. This is not strictly necessary, but we
iterated the whole list before, so let's preserve the old
behaviour.
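
In condensed form (the full change is in the diff below), the walk
becomes:

	rcu_read_lock();
	for_each_process(p) {
		/* kill p if it has the tty open or as controlling tty */
		if (unlikely(next_task(p) == &init_task && !locked)) {
			/* stabilize the tail of the list so newly
			 * forked children are not missed */
			read_lock(&tasklist_lock);
			locked = true;
		}
	}
	read_unlock(&tasklist_lock);
	rcu_read_unlock();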

This patch covers all the cases in which __do_SAK() used to find a
tty-related task before. Also, I reordered the file-table iteration
and the p->signal->tty check, and added tty_lock() to close a small
race with a task that obtains a tty fd and then calls
__proc_set_tty() in parallel with its processing.
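
For illustration, here is one interleaving of the sort the
reordering and tty_lock() are meant to close (the device name is
made up):

	__do_SAK() (old order)          task T
	----------------------          ------
	p->signal->tty == tty? -> no
	iterate_fd(p->files, ...)
	  -> fd not yet visible
	                                fd = open("/dev/ttyN")
	                                ioctl(fd, TIOCSCTTY, ...)
	                                  -> __proc_set_tty()

	-> T survives SAK with the tty open as its controlling tty.

Checking the fd table first and the controlling tty second, with
tty_lock() held (the lock __proc_set_tty() also runs under),
removes this window.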

The patch is aimed at preventing the hard lockups pointed out above.

Signed-off-by: Kirill Tkhai <ktkhai@...tuozzo.com>
---
 drivers/tty/tty_io.c |   28 ++++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
index 94813ae40983..2b6afe9e7bbb 100644
--- a/drivers/tty/tty_io.c
+++ b/drivers/tty/tty_io.c
@@ -2707,6 +2707,7 @@ void __do_SAK(struct tty_struct *tty)
 	struct task_struct *p;
 	struct pid *session;
 	int		i;
+	bool locked = false;
 
 	if (!tty)
 		return;
@@ -2723,15 +2724,12 @@ void __do_SAK(struct tty_struct *tty)
 			   task_pid_nr(p), p->comm);
 		send_sig(SIGKILL, p, 1);
 	} while_each_pid_task(session, PIDTYPE_SID, p);
+	read_unlock(&tasklist_lock);
 
+	tty_lock(tty);
+	rcu_read_lock();
 	/* Now kill any processes that happen to have the tty open */
 	for_each_process(p) {
-		if (p->signal->tty == tty) {
-			tty_notice(tty, "SAK: killed process %d (%s): by controlling tty\n",
-				   task_pid_nr(p), p->comm);
-			send_sig(SIGKILL, p, 1);
-			continue;
-		}
 		task_lock(p);
 		i = iterate_fd(p->files, 0, this_tty, tty);
 		if (i != 0) {
@@ -2740,8 +2738,26 @@ void __do_SAK(struct tty_struct *tty)
 			force_sig(SIGKILL, p);
 		}
 		task_unlock(p);
+
+		/*
+		 * p->signal is always valid for task_struct obtained
+		 * from the task list under rcu_read_lock().
+		 */
+		if (!i && p->signal->tty == tty) {
+			tty_notice(tty, "SAK: killed process %d (%s): by controlling tty\n",
+				   task_pid_nr(p), p->comm);
+			send_sig(SIGKILL, p, 1);
+		}
+
+		if (unlikely(next_task(p) == &init_task && !locked)) {
+			/* Take the lock to pick newly forked tasks */
+			read_lock(&tasklist_lock);
+			locked = true;
+		}
 	}
 	read_unlock(&tasklist_lock);
+	rcu_read_unlock();
+	tty_unlock(tty);
 #endif
 }
 
