Date:	Tue, 24 Mar 2015 11:56:24 -0400
From:	Peter Hurley <peter@...leysoftware.com>
To:	Andy Whitcroft <apw@...onical.com>
CC:	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Jiri Slaby <jslaby@...e.cz>, linux-kernel@...r.kernel.org,
	Alan Cox <alan@...ux.intel.com>
Subject: Re: [PATCH 1/1] pty, n_tty: continue processing input until the tty_buffer
 chain is flushed

Hi Andy,

On 03/16/2015 10:30 AM, Andy Whitcroft wrote:
> In commit 52bce7f8d4fc ("pty, n_tty: Simplify input processing on
> final close") the tty_flush_to_ldisc() data flush was moved from the
> n_tty_input() path to the pty_close() path because the new locking ensured
> that the data had already been copied.
> 
> However, this only guarantees that it will see _some_ of the pending
> data; we cannot guarantee that all of the queued data will fit into the
> output buffer.  When the output buffer fills, the flush worker triggered
> in pty_close() will complete, leaving the remaining data queued in the
> tty_buffer chain.
> 
> When this occurs the reader will see the initial tranche of data correctly
> and consume it.  As we return this to the caller, we will spot the
> additional pending data and trigger another flush worker.  So far so good.
> 
> However, we now have a race: if the consumer gets back into pty_read()
> before the flush worker is actually able to progress the queue, it will
> see the tty input buffer empty, the other end marked TTY_OTHER_CLOSED,
> and return -EIO to the caller even though data is en route.
> 
> Fix this by ignoring TTY_OTHER_CLOSED while there remain unflushed buffers.
> 
> Fixes: 52bce7f8d4fc ("pty, n_tty: Simplify input processing on final close")
> BugLink: http://bugs.launchpad.net/bugs/1429756
> Cc: <stable@...r.kernel.org> # 3.19+
> Signed-off-by: Andy Whitcroft <apw@...onical.com>
> ---
>  drivers/tty/n_tty.c      |  4 +++-
>  drivers/tty/tty_buffer.c | 19 +++++++++++++++++++
>  include/linux/tty_flip.h |  2 ++
>  3 files changed, 24 insertions(+), 1 deletion(-)
> 
> 	This was found during self-tests for the upstart init system which
> 	tests log handling by pumping large chunks of /dev/zero through
> 	to various logs in parallel.  This was failing with short files
> 	at random, which was traced back to this race.
> 
> 	Build tested against v4.0-rc4, heavily run tested against v3.19.1.
> 
> 	-apw
> 
> diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
> index cf6e0f2..76b38f0 100644
> --- a/drivers/tty/n_tty.c
> +++ b/drivers/tty/n_tty.c
> @@ -38,6 +38,7 @@
>  #include <linux/sched.h>
>  #include <linux/interrupt.h>
>  #include <linux/tty.h>
> +#include <linux/tty_flip.h>
>  #include <linux/timer.h>
>  #include <linux/ctype.h>
>  #include <linux/mm.h>
> @@ -2236,7 +2237,8 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
>  			ldata->minimum_to_wake = (minimum - (b - buf));
>  
>  		if (!input_available_p(tty, 0)) {
> -			if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) {
> +			if (test_bit(TTY_OTHER_CLOSED, &tty->flags) &&
> +			    !tty_data_pending_to_ldisc(tty)) {
>  				retval = -EIO;
>  				break;
>  			}
> diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
> index 7566164..04cfabb 100644
> --- a/drivers/tty/tty_buffer.c
> +++ b/drivers/tty/tty_buffer.c
> @@ -424,6 +424,25 @@ receive_buf(struct tty_struct *tty, struct tty_buffer *head, int count)
>  	return count;
>  }
>  
> +int tty_data_pending_to_ldisc(struct tty_struct *tty)
> +{
> +	struct tty_bufhead *buf = &tty->port->buf;
> +	struct tty_buffer *head = buf->head;
> +
> +	struct tty_buffer *next;
> +	int count;
> +
> +	next = head->next;
> +	/* paired w/ barrier in __tty_buffer_request_room();
> +	 * ensures commit value read is not stale if the head
> +	 * is advancing to the next buffer
> +	 */
> +	smp_rmb();
> +	count = head->commit - head->read;
> +
> +	return (count || next);

Thanks for trying to fix this.
Unfortunately, there are two problems with this approach:

1. Data in the tty buffers does not guarantee that data will be generated
   in the read buffer (and thus cause this reader to wake up to handle
   new data). For example, if the remaining data is all non-output
   Ctrl values, or the data contains eraser sequences that result in no
   output, then the reader will hang.

2. flush_to_ldisc() does not update head->read in a manner that allows
   an observer to determine the flush_to_ldisc() worker state, so
   tty_data_pending_to_ldisc() could succeed when it should not have,
   and again the reader will hang. For example, the flush_to_ldisc()
   input worker could have been preempted after waking the reader but
   before updating head->read (or even ldata->no_room); if the read()
   completes and is re-issued, it will observe no input_available(),
   TTY_OTHER_CLOSED, and head->commit - head->read != 0.

I think the right way to fix this is to view the input worker as a pipeline
stage and to pipeline the signals with the data; I'm studying that solution
right now.

Regards,
Peter Hurley


> +}
> +
>  /**
>   *	flush_to_ldisc
>   *	@work: tty structure passed from work queue.
> diff --git a/include/linux/tty_flip.h b/include/linux/tty_flip.h
> index c28dd52..a896b94 100644
> --- a/include/linux/tty_flip.h
> +++ b/include/linux/tty_flip.h
> @@ -38,4 +38,6 @@ static inline int tty_insert_flip_string(struct tty_port *port,
>  extern void tty_buffer_lock_exclusive(struct tty_port *port);
>  extern void tty_buffer_unlock_exclusive(struct tty_port *port);
>  
> +extern int tty_data_pending_to_ldisc(struct tty_struct *tty);
> +
>  #endif /* _LINUX_TTY_FLIP_H */
> 

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/