Message-ID: <20110426185036.GG2135@linux.vnet.ibm.com>
Date: Tue, 26 Apr 2011 11:50:36 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Bruno Prémont <bonbons@...ux-vserver.org>,
Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Mike Frysinger <vapier.adi@...il.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org,
"Paul E. McKenney" <paul.mckenney@...aro.org>,
Pekka Enberg <penberg@...nel.org>
Subject: Re: 2.6.39-rc4+: Kernel leaking memory during FS scanning,
regression?
On Tue, Apr 26, 2011 at 10:12:39AM -0700, Linus Torvalds wrote:
> On Tue, Apr 26, 2011 at 9:38 AM, Bruno Prémont
> <bonbons@...ux-vserver.org> wrote:
> >
> > Here it comes:
> >
> > rcu_kthread (when build processes are STOPped):
> > [ 836.050003] rcu_kthread R running 7324 6 2 0x00000000
> > [ 836.050003] dd473f28 00000046 5a000240 dd65207c dd407360 dd651d40 0000035c dd473ed8
> > [ 836.050003] c10bf8a2 c14d63d8 dd65207c dd473f28 dd445040 dd445040 dd473eec c10be848
> > [ 836.050003] dd651d40 dd407360 ddfdca00 dd473f14 c10bfde2 00000000 00000001 000007b6
> > [ 836.050003] Call Trace:
> > [ 836.050003] [<c10bf8a2>] ? check_object+0x92/0x210
> > [ 836.050003] [<c10be848>] ? init_object+0x38/0x70
> > [ 836.050003] [<c10bfde2>] ? free_debug_processing+0x112/0x1f0
> > [ 836.050003] [<c103d9fd>] ? lock_timer_base+0x2d/0x70
> > [ 836.050003] [<c13c8ec7>] schedule_timeout+0x137/0x280
>
> Hmm.
>
> I'm adding Ingo and Peter to the cc, because this whole "rcu_kthread
> is running, but never actually running" is starting to smell like a
> scheduler issue.
>
> Peter/Ingo: RCUTINY seems to be broken for Bruno. During any kind of
> heavy workload, at some point it looks like rcu_kthread simply stops
> making any progress. It's constantly in runnable state, but it doesn't
> actually use any CPU time, and it's not processing the RCU callbacks,
> so the RCU memory freeing isn't happening, and slabs just build up
> until the machine dies.
>
> And it really is RCUTINY, because the thing doesn't happen with the
> regular tree-RCU.
The difference between TINY_RCU and TREE_RCU is that TREE_RCU still uses
softirq for the core RCU processing. TINY_RCU switched to a kthread
when I implemented RCU priority boosting. There is a similar change in
my -rcu tree that makes TREE_RCU use kthreads, and Sedat has been running
into a very similar problem with that change in place, which is why I
have not yet pushed it to the -next tree.
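For context, the core of TINY_RCU's kthread is structured roughly like
the sketch below. This is a simplified from-memory sketch rather than
the exact rcutiny code, and the wait-queue and flag names are
illustrative; the point is that every callback invocation now depends
on the scheduler actually running this task:

	static int rcu_kthread(void *arg)
	{
		unsigned long work;
		unsigned long flags;

		for (;;) {
			/* Sleep until the scheduling-clock path flags work. */
			wait_event_interruptible(rcu_kthread_wq,
						 have_rcu_kthread_work != 0);
			local_irq_save(flags);
			work = have_rcu_kthread_work;
			have_rcu_kthread_work = 0;
			local_irq_restore(flags);
			if (work)
				rcu_process_callbacks();  /* invoke ready callbacks */
			/* Brief sleep so a SCHED_FIFO kthread cannot hog a UP box. */
			schedule_timeout_interruptible(1);
		}
		return 0;  /* unreachable */
	}

That trailing schedule_timeout_interruptible() would also be consistent
with the schedule_timeout() frame in Bruno's trace. TREE_RCU's softirq
path has no such dependence on a task getting scheduled, which fits the
failure showing up only with TINY_RCU.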
> This is without CONFIG_RCU_BOOST_PRIO, so we basically have
>
>         struct sched_param sp;
> 
>         rcu_kthread_task = kthread_run(rcu_kthread, NULL, "rcu_kthread");
>         sp.sched_priority = RCU_BOOST_PRIO;
>         sched_setscheduler_nocheck(rcu_kthread_task, SCHED_FIFO, &sp);
>
> where RCU_BOOST_PRIO is 1 for the non-boost case.
Good point! Bruno, Sedat, could you please set CONFIG_RCU_BOOST_PRIO to
(say) 50 and see if this still happens? (I bet that it does, but...)
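If rebuilding with a different Kconfig value is inconvenient, hardcoding
the priority at kthread-creation time amounts to the same experiment.
A sketch against the snippet quoted above, with 50 in place of
RCU_BOOST_PRIO:

	struct sched_param sp;

	rcu_kthread_task = kthread_run(rcu_kthread, NULL, "rcu_kthread");
	sp.sched_priority = 50;  /* instead of RCU_BOOST_PRIO == 1 */
	sched_setscheduler_nocheck(rcu_kthread_task, SCHED_FIFO, &sp);

On a live system, "chrt -f -p 50 <pid>" with rcu_kthread's pid should
have the same effect from userspace, assuming the box stays responsive
long enough to run it.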
> Is that so low that even the idle thread will take priority? It's a UP
> config with PREEMPT_VOLUNTARY. So pretty much _all_ the stars are
> aligned for odd scheduling behavior.
>
> Other users of SCHED_FIFO tend to set the priority really high (e.g.
> "MAX_RT_PRIO-1" is clearly the default one - softirqs, watchdog), but
> "1" is not unheard of either (touchscreen/ucb1400_ts and
> mmc/core/sdio_irq), and there are some other random choices out there.
>
> Any ideas?
I have found one bug so far in my code, but it only affects TREE_RCU
in my -rcu tree, and even then only if HOTPLUG_CPU is enabled. I am
testing a fix, but I expect Sedat's tests to still break.
I gave Sedat a patch that makes rcu_kthread() run at normal (non-realtime)
priority, and he did not see the failure. So running non-realtime at
least greatly reduces the probability of failure.
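The change is essentially to leave the kthread at its default policy
instead of promoting it to SCHED_FIFO. A minimal sketch (explicitly
setting SCHED_NORMAL here; simply omitting the setscheduler call would
do the same, since kthreads start out non-realtime):

	struct sched_param sp = { .sched_priority = 0 };

	rcu_kthread_task = kthread_run(rcu_kthread, NULL, "rcu_kthread");
	/* SCHED_NORMAL: compete under CFS like any other task. */
	sched_setscheduler_nocheck(rcu_kthread_task, SCHED_NORMAL, &sp);

If the failure really does not reproduce this way, that points at the
RT-scheduling path rather than at the RCU processing itself.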
Thanx, Paul