Message-ID: <AANLkTikk7=XBmMCskyh1zq3z_3+kZCaOyDjY5hFyiKjx@mail.gmail.com>
Date: Thu, 31 Mar 2011 14:38:45 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Dave Jones <davej@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel <linux-kernel@...r.kernel.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Alan Cox <alan@...rguk.ukuu.org.uk>, Greg KH <gregkh@...e.de>
Subject: Re: excessive kworker activity when idle. (was Re: vma corruption in
 today's -git)

On Thu, Mar 31, 2011 at 9:21 AM, Linus Torvalds
<torvalds@...ux-foundation.org> wrote:
> On Thu, Mar 31, 2011 at 8:53 AM, Linus Torvalds
> <torvalds@...ux-foundation.org> wrote:
>>
>> Regardless, I'll put my money where my mouth is, and try to remove the
>> crazy re-flush thing.
>
> Yeah, that doesn't work. The tty does actually lock up when it fills
> up. So clearly we actually depended on that reflushing happening.
>
> That said, I do still think it's the right thing to do to remove that
> line, it just means that I need to figure out where the missing flush
> is.
Ok, that was unexpected.
So the reason we need that crazy "try to flush from the flush
routine" thing is this: if "receive_room" goes down to zero during
flushing (which only happens for n_tty, and for the case of switching
ldiscs around), then apparently nothing will ever re-start the
flushing when receive_room opens up again.
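
To make that concrete, the flush path is shaped roughly like this (a
stripped-down sketch of flush_to_ldisc, not the real thing: the
locking, the buffer-list advancing and the "seen_tail" handling are
all elided to show just the control flow):

static void flush_to_ldisc_sketch(struct work_struct *work)
{
	struct tty_struct *tty =
		container_of(work, struct tty_struct, buf.work.work);
	struct tty_ldisc *disc = tty_ldisc_ref(tty);
	struct tty_buffer *head;

	if (disc == NULL)	/* ldisc being changed - give up */
		return;

	while ((head = tty->buf.head) != NULL) {
		int count = head->commit - head->read;

		if (!count || !tty->receive_room) {
			/*
			 * Nothing to push, or the ldisc can't take any
			 * more. We just stop - and if receive_room was
			 * the reason, nothing will ever schedule this
			 * work again when room opens back up.
			 */
			break;
		}
		if (count > tty->receive_room)
			count = tty->receive_room;
		disc->ops->receive_buf(tty, head->char_buf_ptr + head->read,
				       head->flag_buf_ptr + head->read,
				       count);
		head->read += count;
	}
	tty_ldisc_deref(disc);
}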
So instead of having that case re-start the flush, we end up saying
"ok, we'll just retry the flush over and over again", and essentially
poll for receive_room opening up. No wonder you've seen high CPU use
and thousands of calls a second.
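
For reference, the polling comes from this fragment in the flush
loop (close to the actual code, though I'm paraphrasing):

		if (!tty->receive_room || seen_tail) {
			/*
			 * Can't make progress: re-queue ourselves one
			 * jiffy from now and try again. This is the
			 * "poll for receive_room" behavior.
			 */
			schedule_delayed_work(&tty->buf.work, 1);
			break;
		}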
The "seen_tail" case doesn't have that issue, because anything that
adds a new buffer to the tty list should always be flipping anyway. So
this attached patch would seem to work.
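
The idea is for the zero->nonzero transition of receive_room to kick
the flush work itself. As a sketch of that idea only (not the
attached patch.diff itself - and note buf.work is still a
delayed_work at this point, hence the zero-delay schedule):

static void n_tty_set_room(struct tty_struct *tty)
{
	int left = N_TTY_BUF_SIZE - tty->read_cnt - 1;
	int old_left;

	/* In canonical mode we can always accept at least a line */
	if (left <= 0)
		left = tty->icanon && !tty->canon_data;

	old_left = tty->receive_room;
	tty->receive_room = left;

	/* Did this open up the receive room for more data? */
	if (left && !old_left)
		schedule_delayed_work(&tty->buf.work, 0);
}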
Not heavily tested, but the case that I could trivially trigger before
doesn't trigger for me any more. And I can't seem to get kworker to
waste lots of CPU time any more, but it was kind of hit-and-miss
before too, so I don't know how much that's worth..
The locking here is kind of iffy, but otherwise? Comments?
Linus
View attachment "patch.diff" of type "text/x-patch" (1520 bytes)