Message-ID: <20091123102844.25757450@lxorguk.ukuu.org.uk>
Date: Mon, 23 Nov 2009 10:28:44 +0000
From: Alan Cox <alan@...rguk.ukuu.org.uk>
To: Mike Galbraith <efault@....de>
Cc: Ingo Molnar <mingo@...e.hu>, Robert Swan <swan.r.l@...il.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [bisected] pty performance problem

> Hm. Looks to me like it's doing what it was told to do.

Yes. I realised that this morning too.

>
> diff --git a/drivers/char/tty_buffer.c b/drivers/char/tty_buffer.c
> index 66fa4e1..92a0864 100644
> --- a/drivers/char/tty_buffer.c
> +++ b/drivers/char/tty_buffer.c
> @@ -495,7 +495,7 @@ void tty_flip_buffer_push(struct tty_struct *tty)
>  	if (tty->low_latency)
>  		flush_to_ldisc(&tty->buf.work.work);
>  	else
> -		schedule_delayed_work(&tty->buf.work, 1);
> +		schedule_delayed_work(&tty->buf.work, 0);
>  }
>  EXPORT_SYMBOL(tty_flip_buffer_push);
>
> Telling it to execute now made test proggy happy... and likely broke tons
> of things that need a delay there. So, what's wrong with delaying, when
> that's what the customer asked for? /me must be missing something. How
> could it know that no delay is needed?

The old model the tty code used was to queue bytes and process them once
per timer tick. The idea is that this avoids thrashing the locks and stuff
gets processed more efficiently.
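
For illustration, here is the quoted tty_flip_buffer_push() again with
comments sketching that model - a sketch, not the exact kernel source, and
the HZ figure is only an example:

void tty_flip_buffer_push(struct tty_struct *tty)
{
	if (tty->low_latency)
		/* Low latency drivers push straight to the line discipline. */
		flush_to_ldisc(&tty->buf.work.work);
	else
		/*
		 * Defer by one jiffy: whatever queues up during that tick is
		 * flushed in a single batch, so the locks are taken once per
		 * tick instead of once per write.  The cost is roughly a
		 * timer tick of added latency per push (e.g. ~4ms at HZ=250),
		 * which is what the test program was seeing.
		 */
		schedule_delayed_work(&tty->buf.work, 1);
}
EXPORT_SYMBOL(tty_flip_buffer_push);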

It's probably completely the wrong model today: removing the delay means we
now only hit fine-grained locks, and we get better flow control behaviour at
high speeds.

Try it and see - worst case it becomes some kind of per-tty property.
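
If it did end up as a per-tty property, a minimal sketch of one possible
shape (the flush_delay field here is invented purely for illustration - it
is not an existing tty_struct member):

struct tty_struct {
	/* ... existing fields ... */
	unsigned long flush_delay;	/* jiffies to defer the flush, 0 = next workqueue run */
};

void tty_flip_buffer_push(struct tty_struct *tty)
{
	if (tty->low_latency)
		flush_to_ldisc(&tty->buf.work.work);
	else
		/* Drivers that still want per-tick batching would set flush_delay to 1. */
		schedule_delayed_work(&tty->buf.work, tty->flush_delay);
}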

Alan
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/