Message-ID: <1314910332.1485.14.camel@twins>
Date: Thu, 01 Sep 2011 22:52:12 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Fernando Lopez-Lezcano <nando@...ma.Stanford.EDU>,
linux-rt-users <linux-rt-users@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
"Paul E. McKenney" <paulmck@...ibm.com>, efault@....de
Subject: Re: 3.0.4 + rt12: deadlock
On Thu, 2011-09-01 at 21:38 +0200, Thomas Gleixner wrote:
> > =============================================
> > [ INFO: possible recursive locking detected ]
> > 3.0.4-1.rt12.1.fc14.ccrma.i686.rtPAE #1
> > ---------------------------------------------
> > swapper/0 is trying to acquire lock:
> > (&parent->list_lock){+.+...}, at: [<c05054ce>]
> > __cache_free.clone.27+0x45/0xc4
> >
> > but task is already holding lock:
> > (&parent->list_lock){+.+...}, at: [<c050662c>]
> > do_tune_cpucache+0xf0/0x2b0
> >
> > other info that might help us debug this:
> > Possible unsafe locking scenario:
> >
> > CPU0
> > ----
> > lock(&parent->list_lock);
> > lock(&parent->list_lock);
>
>
>
> > *** DEADLOCK ***
> >
> > May be due to missing lock nesting notation
> >
> > 3 locks held by swapper/0:
> > #0: (cache_chain_mutex){+.+...}, at: [<c0bedb96>]
> > kmem_cache_init_late+0x15/0x61
> > #1: (&per_cpu(slab_lock, __cpu).lock){+.+...}, at: [<c0504a53>]
> > __local_lock_irq+0x1e/0x5b
> > #2: (&parent->list_lock){+.+...}, at: [<c050662c>]
> > do_tune_cpucache+0xf0/0x2b0
> That's something which has to do with debugging options (debugobjects
> IIRC). There was some attempt to fix that, but it may have gotten
> lost in my vacation and the following futile attempt to take care of
> the resulting backlog. Peter ???
Looks like the one supposedly cured by:
patches/peter_zijlstra-slab_lockdep-annotate_the_locks_before_using.patch
which should be in -rt12.

I'll have a peek; it never reproduced for me, though.
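
The "missing lock nesting notation" hint in the splat refers to
lockdep's nesting annotations: both list_locks belong to the same
lock class, so acquiring the second one looks recursive to lockdep
even though the two locks themselves are distinct. A minimal sketch
of that annotation mechanism (a generic illustration only, not the
contents of the patch above; the helper and lock names are made up):

	#include <linux/spinlock.h>
	#include <linux/lockdep.h>

	/*
	 * Two locks of the same lockdep class legitimately nest here:
	 * one cache's list_lock is taken while another cache's
	 * list_lock is already held. A plain spin_lock() on the inner
	 * lock would trip "possible recursive locking"; the _nested
	 * variant tells lockdep the nesting is intentional.
	 */
	static void tune_then_free(spinlock_t *outer, spinlock_t *inner)
	{
		spin_lock(outer);	/* e.g. do_tune_cpucache() */
		spin_lock_nested(inner, SINGLE_DEPTH_NESTING);
		/* ... move/free objects onto the other cache ... */
		spin_unlock(inner);
		spin_unlock(outer);
	}

Going by its title, the patch above takes the related route of giving
the slab locks proper lockdep annotations before they are first used,
which likewise teaches lockdep that the two acquisitions are not the
same lock.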