Message-ID: <20070125151639.GH7582@csclub.uwaterloo.ca>
Date: Thu, 25 Jan 2007 10:16:39 -0500
From: lsorense@...lub.uwaterloo.ca (Lennart Sorensen)
To: Paul Fulghum <paulkf@...rogate.com>
Cc: linux-kernel@...r.kernel.org
Subject: Re: Strange problem with tty layer
On Wed, Jan 24, 2007 at 03:20:53PM -0600, Paul Fulghum wrote:
> In 2.6.16 the tty buffering pushes data to the line
> discipline without regard to tty->receive_room.
> If the line discipline can't keep up, the data gets dropped.
> I observed this data loss at higher speeds when
> placing the system under heavy load.
>
> 2.6.18 added code to respect tty->receive_room.
>
> This may or may not be your problem, but you should
> be able to check by adding a conditional printk
> to drivers/char/tty_io.c:flush_to_ldisc().
>
> If tty->receive_room is less than the size of the buffer
> passed to disc->receive_buf() then you are losing data.
I am now trying the change below, which so far seems to help (I had a
printk in there earlier and managed to trigger it).
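That diagnostic was along these lines (a rough sketch; the exact printk
wording here is illustrative, not the original):

	/* warn when the line discipline advertises less room than the chunk
	 * we are about to push, i.e. the case where characters get dropped */
	if (count > tty->receive_room)
		printk(KERN_DEBUG "flush_to_ldisc: count %d > receive_room %d\n",
		       count, tty->receive_room);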
--- ori/drivers/char/tty_io.c	2007-01-24 18:02:48.000000000 -0500
+++ new/drivers/char/tty_io.c	2007-01-25 09:50:02.000000000 -0500
@@ -2774,6 +2778,14 @@
 	spin_lock_irqsave(&tty->buf.lock, flags);
 	while((tbuf = tty->buf.head) != NULL) {
 		while ((count = tbuf->commit - tbuf->read) != 0) {
+			if (!tty->receive_room) {
+				schedule_delayed_work(&tty->buf.work, 1);
+				spin_unlock_irqrestore(&tty->buf.lock, flags);
+				goto out;
+			}
+			if (count > tty->receive_room) {
+				count = tty->receive_room;
+			}
 			char_buf = tbuf->char_buf_ptr + tbuf->read;
 			flag_buf = tbuf->flag_buf_ptr + tbuf->read;
 			tbuf->read += count;
This appeared to be (essentially) the key change in 2.6.18 related to
the check of tty->receive_room.
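In other words, the rule enforced on each pass is just a clamp to the
available room; as a standalone sketch (helper name made up for
illustration, not from the kernel source):

	/* Hypothetical helper showing the per-pass rule: push nothing when the
	 * line discipline has no room (caller defers and retries later),
	 * otherwise push at most what it can accept. */
	static inline int throttle_to_receive_room(int count, int receive_room)
	{
		if (receive_room <= 0)
			return 0;
		return count < receive_room ? count : receive_room;
	}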
I will now run a bunch more tests to see whether it prevents any further
character loss.
Thank you for the suggestion of where to look.
--
Len Sorensen