Date:	Mon, 07 Nov 2011 22:26:15 +0400
From:	Ilya Zykov <ilya@...x.ru>
To:	Alan Cox <alan@...ux.intel.com>
CC:	Greg Kroah-Hartman <gregkh@...e.de>, linux-kernel@...r.kernel.org
Subject: Re: PROBLEM: Race condition in tty buffer's function flush_to_ldisc().

Alan Cox wrote:

>> Of course I know about tty_buffer_flush(); it only reads TTY_FLUSHING,
>> it cannot change TTY_FLUSHING.  If flush_to_ldisc() is single-threaded,
>> then TTY_FLUSHING can only be changed in one place at a time (in
>> flush_to_ldisc()), so we could use just "set_bit(TTY_FLUSHING,
>> &tty->flags)" without the test.
> 
> Yes.. if you can pin down why in your testing you see the other case
> sometimes being true.
> 
> Alan
>  

The nested call to flush_to_ldisc() happens only on a different CPU.  It happens because
one side of the pty calls "schedule_work(&tty->buf.work)" from "tty_flip_buffer_push()" on one CPU,
while the other side calls "schedule_work(&tty->buf.work)" from the n_tty layer, in "n_tty_set_room()",
on a different CPU.  It can also happen from an interrupt if the interrupt is handled on a
different CPU.  schedule_work() schedules the work on the CPU it was called from (IMHO).
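
To illustrate the two call paths, here is a minimal, hypothetical module sketch
(the demo_* names are mine, not taken from the tty code).  It only shows the same
work item being queued from a "push" context and from an "ldisc" context, and the
handler reporting which CPU it ends up on:

/* Hypothetical demo, not tty code: one work item, two callers.
 * schedule_work() puts it on the system workqueue, which (as argued
 * above) normally runs it on the CPU it was called from, so the two
 * paths firing from different CPUs is what lets the handler be
 * entered twice. */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/smp.h>

static void demo_flush(struct work_struct *work)
{
	/* stands in for flush_to_ldisc(); raw_ variant because work
	 * items run preemptible */
	pr_info("demo_flush on CPU %d\n", raw_smp_processor_id());
}

static DECLARE_WORK(demo_work, demo_flush);	/* plays tty->buf.work */

/* path 1: what tty_flip_buffer_push() does after new input arrives */
static void demo_push_side(void)
{
	schedule_work(&demo_work);
}

/* path 2: what n_tty_set_room() does when the ldisc has room again */
static void demo_ldisc_side(void)
{
	schedule_work(&demo_work);
}

static int __init demo_init(void)
{
	demo_push_side();
	demo_ldisc_side();
	return 0;
}

static void __exit demo_exit(void)
{
	flush_work(&demo_work);		/* wait for any pending run */
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

Nothing in this sketch reproduces the race by itself; it is just the call pattern
which, when the two sides fire from different CPUs, can overlap.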

For testing I ran "cat big_file" in an xterminal,
with flush_to_ldisc() instrumented by this patch:

diff -uprN -X ../../../dontdiff a/drivers/tty/tty_buffer.c c/drivers/tty/tty_buffer.c
--- a/drivers/tty/tty_buffer.c	2011-11-07 14:45:27.000000000 +0400
+++ c/drivers/tty/tty_buffer.c	2011-11-07 21:48:49.000000000 +0400
@@ -405,13 +405,14 @@ static void flush_to_ldisc(struct work_s
 		container_of(work, struct tty_struct, buf.work);
 	unsigned long 	flags;
 	struct tty_ldisc *disc;
+	static int mthread;
 
 	disc = tty_ldisc_ref(tty);
 	if (disc == NULL)	/*  !TTY_LDISC */
 		return;
 
 	spin_lock_irqsave(&tty->buf.lock, flags);
-
+	mthread = 0;
 	if (!test_and_set_bit(TTY_FLUSHING, &tty->flags)) {
 		struct tty_buffer *head;
 		while ((head = tty->buf.head) != NULL) {
@@ -445,6 +446,12 @@ static void flush_to_ldisc(struct work_s
 			spin_lock_irqsave(&tty->buf.lock, flags);
 		}
 		clear_bit(TTY_FLUSHING, &tty->flags);
+	} else {
+		mthread = 1;
+		printk(KERN_WARNING "Tty %s FLUSHING CPU %d.\n", tty->name, percpu_read(cpu_number));
+	}
+	if (mthread) {
+		printk(KERN_WARNING "Tty %s FLUSHING multithreaded CPU %d.\n", tty->name, percpu_read(cpu_number));
 	}
 
 	/* We may have a deferred request to flush the input buffer,

And I get this in my syslog:

Nov  7 21:34:13 serh kernel: [   76.323848] Tty ptm0 FLUSHING CPU 0.
Nov  7 21:34:13 serh kernel: [   76.323850] Tty ptm0 FLUSHING multithreaded CPU 0.
Nov  7 21:34:13 serh kernel: [   76.323856] Tty ptm0 FLUSHING CPU 0.
Nov  7 21:34:13 serh kernel: [   76.323857] Tty ptm0 FLUSHING multithreaded CPU 0.
Nov  7 21:34:13 serh kernel: [   76.323861] Tty ptm0 FLUSHING multithreaded CPU 1.
Nov  7 21:34:13 serh kernel: [   76.336022] Tty ptm0 FLUSHING CPU 0.
Nov  7 21:34:13 serh kernel: [   76.336024] Tty ptm0 FLUSHING multithreaded CPU 0.
Nov  7 21:34:13 serh kernel: [   76.336030] Tty ptm0 FLUSHING multithreaded CPU 1.
Nov  7 21:34:13 serh kernel: [   76.353134] Tty ptm0 FLUSHING CPU 0.
Nov  7 21:34:13 serh kernel: [   76.353136] Tty ptm0 FLUSHING multithreaded CPU 0.
Nov  7 21:34:13 serh kernel: [   76.353143] Tty ptm0 FLUSHING CPU 0.
Nov  7 21:34:13 serh kernel: [   76.353145] Tty ptm0 FLUSHING multithreaded CPU 0.
Nov  7 21:34:13 serh kernel: [   76.353148] Tty ptm0 FLUSHING multithreaded CPU 1.
......
......
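
For comparison, a small user-space sketch of what test_and_set_bit() buys once
flush_to_ldisc() can be entered concurrently (my own toy model with made-up names,
not kernel code; compile with "gcc -pthread"): only the first entrant drains, the
second one sees the bit already set, which is the "else" branch the debug patch
instruments.

/* Toy model: two threads stand in for flush_to_ldisc() on two CPUs,
 * atomic_flag plays TTY_FLUSHING.  Test-and-set lets only one thread
 * "drain" at a time; a plain set_bit() would not give that guarantee
 * once the handler can run concurrently, as the log above shows. */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_flag flushing = ATOMIC_FLAG_INIT;	/* ~ TTY_FLUSHING */

static void *flusher(void *arg)
{
	long id = (long)arg;

	if (!atomic_flag_test_and_set(&flushing)) {
		printf("flusher %ld: draining buffers\n", id);
		/* ... the real code walks tty->buf.head here ... */
		atomic_flag_clear(&flushing);
	} else {
		printf("flusher %ld: already flushing elsewhere\n", id);
	}
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, flusher, (void *)0L);
	pthread_create(&t1, NULL, flusher, (void *)1L);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	return 0;
}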
