Message-ID: <BANLkTikt1ZZNC1YffT_bRAX6GwjWDk_T3Q@mail.gmail.com>
Date: Tue, 7 Jun 2011 19:44:48 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Guillaume Chazarain <guichaz@...il.com>
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Alan Cox <alan@...rguk.ukuu.org.uk>, gregkh@...e.de,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Felipe Balbi <balbi@...com>, Tejun Heo <tj@...nel.org>
Subject: Re: tty breakage in X (Was: tty vs workqueue oddities)
On Mon, Jun 6, 2011 at 7:24 AM, Guillaume Chazarain <guichaz@...il.com> wrote:
>
> After reverting http://git.kernel.org/linus/a5660b4 "tty: fix endless
> work loop when the buffer fills up" I cannot reproduce the hangs on
> SMP anymore but it brings back the busy loop on UP.
Hmm. The n_tty layer has some rather distressing locking, and doesn't
lock "tty->receive_room" changes at all, for example (and uses
multiple locks for some other things).
It may well be that there is some SMP race there. The n_tty line
discipline has its own locking for its counts, and the tty buffer code
has its own locking, and "receive_room" kind of ends up being in the
middle between them.
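
Boiled down to a toy, the structure is something like the below - a
plain user-space model with made-up names, not the real code paths:
two different locks, one shared field, and nothing actually tying them
together.

/*
 * Illustrative only: "buf_lock" stands in for the tty buffer locking,
 * "read_lock" for the n_tty side, "receive_room" for the field caught
 * in the middle.  Each side is consistent with itself, but nothing
 * serializes the two against each other.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t buf_lock  = PTHREAD_MUTEX_INITIALIZER;  /* "tty buffer" side */
static pthread_mutex_t read_lock = PTHREAD_MUTEX_INITIALIZER;  /* "n_tty" side */
static unsigned int receive_room = 4096;        /* shared, no common lock */

/* ldisc side: recompute receive_room from its own counts, under its own lock */
static void *ldisc_thread(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000000; i++) {
		pthread_mutex_lock(&read_lock);
		receive_room = (i & 1) ? 0 : 4096;      /* reader drains / fills */
		pthread_mutex_unlock(&read_lock);
	}
	return NULL;
}

/* buffer side: decide how much to push based on receive_room, but under
 * a *different* lock - so the read is effectively unsynchronized */
static void *flush_thread(void *arg)
{
	unsigned long pushed = 0;

	(void)arg;
	for (int i = 0; i < 1000000; i++) {
		pthread_mutex_lock(&buf_lock);
		unsigned int room = receive_room;       /* racy read */
		pthread_mutex_unlock(&buf_lock);
		if (room)
			pushed++;
	}
	printf("pushed %lu chunks\n", pushed);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, ldisc_thread, NULL);
	pthread_create(&b, NULL, flush_thread, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}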
The sad part is that the patch that made receive_buf() return the
number of bytes received was actually trying to do the right thing; it
just did it entirely in the wrong way (re-introducing the crazy
re-arming of the workqueue from within itself, and getting the
signedness wrong in the process).
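
The signedness part is the usual C trap - mix a signed "bytes
received" value with an unsigned count and a negative value silently
turns into a huge number. Purely illustrative example, not the actual
patch:

#include <stdio.h>

int main(void)
{
	int received = -1;              /* pretend the ldisc returned an error */
	unsigned int room = 128;

	/* the usual arithmetic conversions turn -1 into 4294967295 here,
	 * so this test goes the "wrong" way */
	if (received < room)
		printf("looks fine\n");
	else
		printf("negative return compared as %u\n",
		       (unsigned int)received);
	return 0;
}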
I'd love to get rid of receive_room entirely - and just let the tty
line discipline handler say how much it actually received. In other
words, having receive_buf() just tell us how much it used, and not
looking at receive_room in the caller, is absolutely the right
thing.
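
Roughly, what I mean is something like this - again a user-space model
only, with made-up names, not a patch: receive_buf() reports how much
it consumed, and the flush loop advances by exactly that instead of
peeking at receive_room up front, and stops cleanly when nothing was
taken.

#include <stdio.h>
#include <string.h>

#define READ_BUF_SIZE 16

/* stand-in for the line discipline's read buffer */
struct ldisc_model {
	unsigned char read_buf[READ_BUF_SIZE];
	int read_cnt;
};

/* "receive_buf": copy what fits, tell the caller how much was taken */
static int receive_buf_model(struct ldisc_model *ld,
			     const unsigned char *cp, int count)
{
	int space = READ_BUF_SIZE - ld->read_cnt;
	int n = count < space ? count : space;

	memcpy(ld->read_buf + ld->read_cnt, cp, n);
	ld->read_cnt += n;
	return n;                       /* the caller learns what was consumed */
}

/* "flush_to_ldisc": push data, advance by the consumed count, and stop
 * (instead of spinning or re-arming itself) when nothing was taken */
static void flush_model(struct ldisc_model *ld,
			const unsigned char *data, int len)
{
	int pos = 0;

	while (pos < len) {
		int taken = receive_buf_model(ld, data + pos, len - pos);

		if (taken <= 0) {
			printf("ldisc full after %d bytes; stop and wait\n", pos);
			return;
		}
		pos += taken;
	}
	printf("pushed all %d bytes\n", len);
}

int main(void)
{
	struct ldisc_model ld = { .read_cnt = 0 };
	unsigned char data[40];

	memset(data, 'x', sizeof(data));
	flush_model(&ld, data, sizeof(data));
	return 0;
}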
It just needs to be done properly.
Linus