Message-ID: <469D2713.30703@microgate.com>
Date: Tue, 17 Jul 2007 14:31:15 -0600
From: Paul Fulghum <paulkf@...rogate.com>
To: James Simmons <jsimmons@...radead.org>
CC: Samuel Thibault <samuel.thibault@...-lyon.org>,
Linus Torvalds <torvalds@...l.org>,
Linux console project <linuxconsole-dev@...ts.sourceforge.net>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] Use tty_schedule in VT code.
James Simmons wrote:
>> James Simmons, on Tue 17 Jul 2007 19:37:57 +0100, wrote:
>>> - schedule_delayed_work(&t->buf.work, 0);
>> It was schedule_delayed_work(&t->buf.work, 1); in con_schedule_flip();
>> could that matter?
>
> I did not detect any regressions.
The console behavior stays exactly the same, as the patch changes
tty_schedule_flip() to use the 0 delay. The change to tty_schedule_flip()
to use a 0 delay is also OK; I had looked at this when James originally
posted the patch, and here is what I found:

Looking further back, in the 2.4 kernels this scheduling was done with
the timer task queue (receive data was processed on the next timer tick).
I guess the schedule_delayed_work() call with a delay of 1 was the best
approximation of that previous behavior.

There is no logical reason to delay the first attempt at processing
receive data, so the schedule_delayed_work() call in tty_schedule_flip()
should be changed to a delay of 0 (as was already the case for
con_schedule_flip()).
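
For reference, here is a simplified sketch of tty_schedule_flip() with
the 0 delay (condensed from the 2.6-era tty buffering code; the locking
detail is from memory, so treat it as illustrative rather than exact):

	void tty_schedule_flip(struct tty_struct *tty)
	{
		unsigned long flags;

		spin_lock_irqsave(&tty->buf.lock, flags);
		/* commit pending data so the work function sees it */
		if (tty->buf.tail != NULL)
			tty->buf.tail->commit = tty->buf.tail->used;
		spin_unlock_irqrestore(&tty->buf.lock, flags);

		/* was a delay of 1; 0 means run at the next
		   opportunity instead of waiting a tick */
		schedule_delayed_work(&tty->buf.work, 0);
	}
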
The schedule_delayed_work() in flush_to_ldisc() will continue to use a
delay of 1 if the ldisc can't accept more data; this allows the user app
and the ldisc to catch up. Subsequent calls to tty_schedule_flip() won't
affect this 'back off' delay, because once the work is scheduled (with a
delay of 1), new scheduling calls for the same work structure are
ignored until it runs.
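
To illustrate the 'back off' path, a rough sketch of the relevant part
of flush_to_ldisc() (heavily condensed; the buffer walking and locking
are omitted):

	static void flush_to_ldisc(struct work_struct *work)
	{
		struct tty_struct *tty =
			container_of(work, struct tty_struct, buf.work.work);

		/* ... walk the committed flip buffers ... */
		if (!tty->receive_room) {
			/* ldisc can't take more data: retry in one
			   tick so the reader can drain it */
			schedule_delayed_work(&tty->buf.work, 1);
			return;
		}
		/* ... push data to the ldisc and free buffers ... */
	}
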
I've been testing the change to 0 in tty_schedule_flip()
under various loads and data rates with no ill effects.
--
Paul Fulghum
Microgate Systems, Ltd.