Message-ID: <20091123113110.15063a0a@lxorguk.ukuu.org.uk>
Date: Mon, 23 Nov 2009 11:31:10 +0000
From: Alan Cox <alan@...rguk.ukuu.org.uk>
To: Mike Galbraith <efault@....de>
Cc: Ingo Molnar <mingo@...e.hu>, Robert Swan <swan.r.l@...il.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [bisected] pty performance problem
> diff --git a/drivers/char/tty_buffer.c b/drivers/char/tty_buffer.c
> index 66fa4e1..92a0864 100644
> --- a/drivers/char/tty_buffer.c
> +++ b/drivers/char/tty_buffer.c
> @@ -495,7 +495,7 @@ void tty_flip_buffer_push(struct tty_struct *tty)
> if (tty->low_latency)
> flush_to_ldisc(&tty->buf.work.work);
> else
> - schedule_delayed_work(&tty->buf.work, 1);
> + schedule_delayed_work(&tty->buf.work, 0);
> }
> EXPORT_SYMBOL(tty_flip_buffer_push);
Another possibility is to do
if (tty->low_latency)
schedule_delayed_work(&tty->buf.work, 0);
else
schedule_delayed_work(&tty->buf.work, 1);
At the moment the ->low_latency flag is just used by a few drivers that
want to avoid a double delay (eg if the process events off a work queue
not in the IRQ handler), and they have to jump through other hoops around
re-entrancy. Doing it that way might make the processing a minuscule bit
slower for those cases, but it would also let the low_latency flag do
something useful for the many drivers that don't work byte at a time and
want stuff to be batched up.
The general case is things like UART chips, where you get one character
and one push per byte received, so you usually don't want a full run
through the ldisc for every byte when a lot of data is arriving.
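To make the UART case concrete, here is a hedged sketch of the usual
receive-interrupt pattern (the handler name and the uart_rx_ready/
uart_read_char accessors are made up for illustration; real drivers use
their own hardware accessors, but tty_insert_flip_char() and
tty_flip_buffer_push() are the real flip-buffer calls):

```c
/*
 * Hypothetical UART receive IRQ handler sketch: insert one byte at a
 * time into the flip buffer, then do a single push at the end.  With
 * the delayed-work path, a burst of interrupts produces one batched
 * ldisc run rather than one run per byte.
 */
static irqreturn_t example_uart_interrupt(int irq, void *dev_id)
{
	struct tty_struct *tty = dev_id;

	/* Drain whatever the RX FIFO currently holds... */
	while (uart_rx_ready())
		tty_insert_flip_char(tty, uart_read_char(), TTY_NORMAL);

	/*
	 * ...then push once.  Without low_latency set, this just
	 * schedules the buf.work item, so the ldisc sees the data
	 * in one lump.
	 */
	tty_flip_buffer_push(tty);
	return IRQ_HANDLED;
}
```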
If low_latency means "schedule ASAP" rather than "call inline", then pty
can use low_latency and we can avoid suddenly changing the behaviour of
every device, instead converting them as they make sense (eg all USB
should probably be low_latency with that queueing change).
Alan