[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <AANLkTimPi-Vbi3DNLEeShCpfOJEuUZo3--3CQ_BMgJiS@mail.gmail.com>
Date: Wed, 30 Mar 2011 20:44:06 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Dave Jones <davej@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel <linux-kernel@...r.kernel.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: excessive kworker activity when idle. (was Re: vma corruption in
today's -git)
On Wed, Mar 30, 2011 at 8:34 PM, Dave Jones <davej@...hat.com> wrote:
>
> so 'perf kmem record sleep 5' shows hundreds of calls kmem_cache_free from
> the kworker processes. Called from shmem_i_callback, __d_free and file_free_rcu.
> My guess is that my fuzzing caused so many allocations that the rcu freeing is
> still ongoing an hour or so after the process has quit. does that make any sense?
No, that shouldn't be the case. RCU freeing should go on for just a
few RCU periods, and be done. I think there is some "limit the work we
do for rcu each time" in order to not have bad latencies, but even so
that shouldn't take _that_ long. And as you say, you should see the
freeing in the slab stats.
So clearly there are shmem inodes being destroyed, but it shouldn't be
from an hour ago. I wonder if your system isn't as idle as you think
it is.
But I'm cc'ing Paul, maybe he'll disagree and say it's expected and
that the RCU batch size is really small. Or at least give some hint
about how to check the pending rcu state.
Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists