Message-Id: <1154118947.10196.10.camel@localhost.localdomain>
Date: Fri, 28 Jul 2006 22:35:47 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Christoph Lameter <clameter@....com>
Cc: Pekka Enberg <penberg@...helsinki.fi>,
LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
Arjan van de Ven <arjan@...radead.org>,
alokk@...softinc.com
Subject: Re: [BUG] Lockdep recursive locking in kmem_cache_free
On Fri, 2006-07-28 at 22:27 +0200, Thomas Gleixner wrote:
> > Why does _spin_lock call kmem_cache_free?
> >
> > Is the stack trace an accurate representation of the calling sequence?
>
> Probably not. That's a tail call optimization artifact. I'll retest with
> UNWIND info=y
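For reference, a minimal userspace sketch of that artifact (hypothetical
function names, not code from slab.c): when the last call in a function is
compiled as a tail call, that function leaves no return address on the
stack, so a backtrace walks from the leaf straight to the outer caller and
can pair functions that never call each other directly.

/* tailcall.c - build with something like: gcc -O2 -o tailcall tailcall.c */
#include <stdio.h>

static __attribute__((noinline)) void leaf(void)
{
	printf("leaf\n");		/* a backtrace taken here ...           */
}

static __attribute__((noinline)) void middle(void)
{
	leaf();				/* tail position: -O2 may emit a jmp,
					 * so middle() leaves no frame behind  */
}

static __attribute__((noinline)) void outer(void)
{
	middle();			/* ... would then show outer() -> leaf()
					 * with middle() missing               */
	printf("outer done\n");
}

int main(void)
{
	outer();
	return 0;
}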
[ 125.530611] input: AT Translated Set 2 keyboard as /class/input/input0
[ 128.510384]
[ 128.510386] =============================================
[ 128.512303] [ INFO: possible recursive locking detected ]
[ 128.512415] ---------------------------------------------
[ 128.512527] events/0/5 is trying to acquire lock:
[ 128.512636] (&nc->lock){.+..}, at: [<ffffffff802908c1>] kmem_cache_free+0x141/0x210
[ 128.513024]
[ 128.513025] but task is already holding lock:
[ 128.513225] (&nc->lock){.+..}, at: [<ffffffff80291bc1>] cache_reap+0xd1/0x290
[ 128.513608]
[ 128.513609] other info that might help us debug this:
[ 128.513813] 3 locks held by events/0/5:
[ 128.513917] #0: (cache_chain_mutex){--..}, at: [<ffffffff80291b18>] cache_reap+0x28/0x290
[ 128.514362] #1: (&nc->lock){.+..}, at: [<ffffffff80291bc1>] cache_reap+0xd1/0x290
[ 128.514803] #2: (&parent->list_lock){.+..}, at: [<ffffffff80290c32>] __drain_alien_cache+0x42/0x90
[ 128.515253]
[ 128.515253] stack backtrace:
[ 128.515448]
[ 128.515449] Call Trace:
[ 128.515740] [<ffffffff8020b75e>] show_trace+0xae/0x280
[ 128.515864] [<ffffffff8020bb75>] dump_stack+0x15/0x20
[ 128.515987] [<ffffffff8025447c>] __lock_acquire+0x8cc/0xcb0
[ 128.516178] [<ffffffff80254b82>] lock_acquire+0x52/0x70
[ 128.516369] [<ffffffff804a8374>] _spin_lock+0x34/0x50
[ 128.516562] [<ffffffff802908c1>] kmem_cache_free+0x141/0x210
[ 128.516799] [<ffffffff80290a48>] slab_destroy+0xb8/0xf0
[ 128.517034] [<ffffffff80290b88>] free_block+0x108/0x170
[ 128.517270] [<ffffffff80290c58>] __drain_alien_cache+0x68/0x90
[ 128.517509] [<ffffffff80291d5f>] cache_reap+0x26f/0x290
[ 128.517745] [<ffffffff80249b93>] run_workqueue+0xc3/0x120
[ 128.517926] [<ffffffff80249e11>] worker_thread+0x121/0x160
[ 128.518105] [<ffffffff8024d93a>] kthread+0xda/0x110
[ 128.518286] [<ffffffff8020af2a>] child_rip+0x8/0x12
ffffffff80290c32 /home/tglx/work/kernel/git/linux-2.6.git/mm/slab.c:1012
ffffffff80291bc1 /home/tglx/work/kernel/git/linux-2.6.git/mm/slab.c:1031
ffffffff802908c1 /home/tglx/work/kernel/git/linux-2.6.git/mm/slab.c:1074
ffffffff80291b18 /home/tglx/work/kernel/git/linux-2.6.git/mm/slab.c:3749
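From the trace the two &nc->lock entries appear to be different array_cache
instances that were initialised the same way and therefore share a single
lockdep class, so this has the shape of a same-class/different-instance
report rather than a real self-deadlock. The usual way to tell lockdep that
such a one-level nesting is intentional is a nested annotation; below is
only a hedged sketch of that pattern with made-up struct and function
names, not a proposed patch against mm/slab.c.

#include <linux/spinlock.h>

struct example_cache {			/* stand-in for struct array_cache */
	spinlock_t lock;
	/* ... */
};

static void drain_one_into_other(struct example_cache *outer,
				 struct example_cache *inner)
{
	spin_lock(&outer->lock);

	/*
	 * Both locks were initialised by the same code, so lockdep puts
	 * them in one class and would flag this second acquisition as
	 * possible recursive locking.  spin_lock_nested() marks it as a
	 * distinct instance taken exactly one level deep.
	 */
	spin_lock_nested(&inner->lock, SINGLE_DEPTH_NESTING);

	/* ... move objects from one cache to the other ... */

	spin_unlock(&inner->lock);
	spin_unlock(&outer->lock);
}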
Let me know if you need more info.
tglx