Message-ID: <54B51CD2.6050104@oracle.com>
Date: Tue, 13 Jan 2015 08:25:38 -0500
From: Sasha Levin <sasha.levin@...cle.com>
To: Jeff Layton <jeff.layton@...marydata.com>
CC: LKML <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: fs: locks: WARNING: CPU: 16 PID: 4296 at fs/locks.c:236 locks_free_lock_context+0x10d/0x240()
On 01/13/2015 08:20 AM, Jeff Layton wrote:
> On Tue, 13 Jan 2015 00:11:37 -0500
> Sasha Levin <sasha.levin@...cle.com> wrote:
>
>> Hey Jeff,
>>
>> While fuzzing with trinity inside a KVM tools guest running the latest -next
>> kernel, I've stumbled on the following spew:
>>
>> [ 887.078606] WARNING: CPU: 16 PID: 4296 at fs/locks.c:236 locks_free_lock_context+0x10d/0x240()
>> [ 887.079703] Modules linked in:
>> [ 887.080288] CPU: 16 PID: 4296 Comm: trinity-c273 Not tainted 3.19.0-rc4-next-20150112-sasha-00053-g23c147e02e-dirty #1710
>> [ 887.082229] 0000000000000000 0000000000000000 0000000000000000 ffff8804c9f4f8e8
>> [ 887.083773] ffffffff9154e0a6 0000000000000000 ffff8804cad98000 ffff8804c9f4f938
>> [ 887.085280] ffffffff8140a4d0 0000000000000001 ffffffff81bf0d2d ffff8804c9f4f988
>> [ 887.086792] Call Trace:
>> [ 887.087320] dump_stack (lib/dump_stack.c:52)
>> [ 887.088247] warn_slowpath_common (kernel/panic.c:447)
>> [ 887.089342] ? locks_free_lock_context (fs/locks.c:236 (discriminator 3))
>> [ 887.090514] warn_slowpath_null (kernel/panic.c:481)
>> [ 887.091629] locks_free_lock_context (fs/locks.c:236 (discriminator 3))
>> [ 887.092782] __destroy_inode (fs/inode.c:243)
>> [ 887.093817] destroy_inode (fs/inode.c:268)
>> [ 887.094833] evict (fs/inode.c:574)
>> [ 887.095808] iput (fs/inode.c:1503)
>> [ 887.096687] __dentry_kill (fs/dcache.c:323 fs/dcache.c:508)
>> [ 887.097683] ? _raw_spin_trylock (kernel/locking/spinlock.c:136)
>> [ 887.098733] ? dput (fs/dcache.c:545 fs/dcache.c:648)
>> [ 887.099672] dput (fs/dcache.c:649)
>> [ 887.100552] __fput (fs/file_table.c:227)
>> [ 887.101437] ____fput (fs/file_table.c:245)
>> [ 887.102317] task_work_run (kernel/task_work.c:125 (discriminator 1))
>> [ 887.103356] ? _raw_spin_unlock (./arch/x86/include/asm/preempt.h:95 include/linux/spinlock_api_smp.h:154 kernel/locking/spinlock.c:183)
>> [ 887.104390] do_exit (kernel/exit.c:746)
>> [ 887.105338] ? task_numa_work (kernel/sched/fair.c:2218)
>> [ 887.106384] ? get_signal (kernel/signal.c:2207)
>> [ 887.107492] ? _raw_spin_unlock_irq (./arch/x86/include/asm/paravirt.h:819 include/linux/spinlock_api_smp.h:170 kernel/locking/spinlock.c:199)
>> [ 887.108610] do_group_exit (include/linux/sched.h:775 kernel/exit.c:858)
>> [ 887.109613] get_signal (kernel/signal.c:2358)
>> [ 887.110578] ? trace_hardirqs_off (kernel/locking/lockdep.c:2671)
>> [ 887.111672] do_signal (arch/x86/kernel/signal.c:703)
>> [ 887.112604] ? acct_account_cputime (kernel/tsacct.c:168)
>> [ 887.113722] ? context_tracking_user_exit (./arch/x86/include/asm/paravirt.h:809 (discriminator 2) kernel/context_tracking.c:144 (discriminator 2))
>> [ 887.114937] ? trace_hardirqs_on_caller (kernel/locking/lockdep.c:2578 kernel/locking/lockdep.c:2625)
>> [ 887.116128] ? trace_hardirqs_on (kernel/locking/lockdep.c:2633)
>> [ 887.117160] do_notify_resume (arch/x86/kernel/signal.c:750)
>> [ 887.118167] int_signal (arch/x86/kernel/entry_64.S:587)
>>
>>
>
> Huh...
>
> Looks like the flc_flock list wasn't empty when we went to go free the
> inode. I don't see how that would happen right offhand, but I'll keep
> looking at it.
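For reference, the warning at fs/locks.c:236 in the trace looks like a
leftover-lock check when the inode's lock context gets torn down. A rough
sketch of what I assume is being checked -- the exact signature, cache name
and which lists it warns on are guesses from the trace, not the actual -next
source:

	/* freeing the per-inode lock context; nothing should be left on it */
	void locks_free_lock_context(struct file_lock_context *ctx)
	{
		if (ctx) {
			/* warn if any flock locks are still hanging off the context */
			WARN_ON_ONCE(!list_empty(&ctx->flc_flock));
			kmem_cache_free(flctx_cache, ctx);
		}
	}

i.e. a non-empty flc_flock list at __destroy_inode() time would trip it.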
>
> Just to be clear -- were there any "exotic" filesystems involved here?
Probably. Part of the preparation for the fuzzer is to mount every single filesystem
we can and use it as scratch space for fuzzing.
> In particular any that define a ->flock file operation (e.g. NFS)?
There are a few. Since this bug is pretty reproducible, I'll add something to
dump the lock when the warning fires and see if we can get anything out of it.
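Something along these lines, maybe (completely untested, and the list/field
names below are guesses from the trace rather than checked against the
actual -next tree):

	/* untested debug hack: before freeing the context, dump whatever
	 * is still sitting on the flock list so we can see who left it */
	if (ctx && !list_empty(&ctx->flc_flock)) {
		struct file_lock *fl;

		list_for_each_entry(fl, &ctx->flc_flock, fl_list)
			pr_err("leaked flock: fl_flags=0x%x fl_type=%d fl_pid=%u fl_file=%p\n",
			       fl->fl_flags, fl->fl_type, fl->fl_pid, fl->fl_file);
	}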
Thanks,
Sasha