Message-Id: <1186417445.6616.23.camel@localhost>
Date: Mon, 06 Aug 2007 12:24:05 -0400
From: Trond Myklebust <trond.myklebust@....uio.no>
To: Marc Dietrich <Marc.Dietrich@...physik.uni-giessen.de>
Cc: Johannes Berg <johannes@...solutions.net>,
Oleg Nesterov <oleg@...sign.ru>,
Andrew Morton <akpm@...ux-foundation.org>,
Neil Brown <neilb@...e.de>, nfs@...ts.sourceforge.net,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [NFS] 2.6.23-rc1-mm2
On Mon, 2007-08-06 at 13:05 +0200, Marc Dietrich wrote:
> Hi,
>
> On Monday 06 August 2007 08:24, Johannes Berg wrote:
> > On Fri, 2007-08-03 at 21:21 +0400, Oleg Nesterov wrote:
> > > To avoid possible confusion: it is still OK if work->func() flushes
> > > its own workqueue, so strictly speaking this trace is a false positive,
> > > but it would be very nice if we could get rid of this practice.
> >
> > I just had a thought: we could get rid of this warning by using a
> > read-lock here. That way, flushing from within a work function (which
> > would be seen as a read-after-read recursive lock) won't trigger this
> > warning. Patch below. This would, however, also get rid of any warnings
> > for run_workqueue recursion, which we may or may not want; the code
> > indicates that it should be allowed up to a depth of three.
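
A minimal sketch of the idea, assuming the 2.6.23-era annotation API,
lock_acquire(map, subclass, trylock, read, check, ip); this is not the
patch referred to above, just the shape such a change could take:

    /*
     * Take the workqueue's lockdep map as a recursive read lock
     * (read=2) in both run_workqueue() and flush_workqueue().  A
     * flush issued from inside a work function then shows up as
     * read-after-read recursion, which lockdep permits.
     */
    lock_acquire(&cwq->wq->lockdep_map, 0, 0, 2, 2, _THIS_IP_);
    /* ... run the queued work, or wait for the queue to drain ... */
    lock_release(&cwq->wq->lockdep_map, 1, _THIS_IP_);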
> >
> > However, the question of whether we should allow flush_workqueue from
> > within a struct work is mainly an API policy issue; it doesn't hurt to
> > flush a workqueue from within a work, but it is probably closer to the
> > intent to use a targeted cancel_work_sync() or such. OTOH, one could
> > imagine situations where multiple different work structs belonging to
> > the same subsystem are on that workqueue, and then the general
> > flush_scheduled_work() call is the only way to guarantee nothing is
> > still scheduled at a given point... I don't feel qualified to make the
> > decision for or against allowing this use of the API at this point.
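
For the targeted alternative, a sketch with purely hypothetical work
items (foo_scan_work and foo_reset_work are made up for illustration):

    #include <linux/workqueue.h>

    /* Hypothetical work items owned by a single subsystem. */
    static struct work_struct foo_scan_work;
    static struct work_struct foo_reset_work;

    static void foo_shutdown(void)
    {
            /*
             * Targeted teardown: cancel and wait for only our own
             * work items; cancel_work_sync() blocks until a running
             * callback finishes.  Unrelated work on the shared
             * queue is left untouched.
             */
            cancel_work_sync(&foo_scan_work);
            cancel_work_sync(&foo_reset_work);

            /*
             * The blunt alternative, flush_scheduled_work(), waits
             * for everything on the shared queue - which is exactly
             * what triggers the recursion warning when it is done
             * from inside a work function.
             */
    }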
> >
> > Marc, do you have an easy way to trigger this warning? Could you verify
> > that it goes away with the patch below applied?
>
> Just booting into X is enough.
>
> I applied the patch, but now I get:
>
> =================================
> [ INFO: inconsistent lock state ]
> 2.6.23-rc1-mm2 #4
> ---------------------------------
> inconsistent {softirq-on-W} -> {in-softirq-W} usage.
> swapper/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
> (rpc_credcache_lock){-+..}, at: [<c01dc487>] _atomic_dec_and_lock+0x17/0x60
> {softirq-on-W} state was registered at:
> [<c013e870>] __lock_acquire+0x650/0x1030
> [<c013f2b1>] lock_acquire+0x61/0x80
> [<c02db9ac>] _spin_lock+0x2c/0x40
> [<c01dc487>] _atomic_dec_and_lock+0x17/0x60
> [<dced55fd>] put_rpccred+0x5d/0x100 [sunrpc]
> [<dced56c1>] rpcauth_unbindcred+0x21/0x60 [sunrpc]
> [<dced3fd4>] a0 [sunrpc]
> [<dcecefe0>] rpc_call_sync+0x30/0x40 [sunrpc]
> [<dcedc73b>] rpcb_register+0xdb/0x180 [sunrpc]
> [<dced65b3>] svc_register+0x93/0x160 [sunrpc]
> [<dced6ebe>] __svc_create+0x1ee/0x220 [sunrpc]
> [<dced7053>] svc_create+0x13/0x20 [sunrpc]
> [<dcf6d722>] nfs_callback_up+0x82/0x120 [nfs]
> [<dcf48f36>] nfs_get_client+0x176/0x390 [nfs]
> [<dcf49181>] nfs4_set_client+0x31/0x190 [nfs]
> [<dcf49983>] nfs4_create_server+0x63/0x3b0 [nfs]
> [<dcf52426>] nfs4_get_sb+0x346/0x5b0 [nfs]
> [<c017b444>] vfs_kern_mount+0x94/0x110
> [<c0190a62>] do_mount+0x1f2/0x7d0
> [<c01910a6>] sys_mount+0x66/0xa0
> [<c0104046>] syscall_call+0x7/0xb
> [<ffffffff>] 0xffffffff
> irq event stamp: 5277830
> hardirqs last enabled at (5277830): [<c017530a>] kmem_cache_free+0x8a/0xc0
> hardirqs last disabled at (5277829): [<c01752d2>] kmem_cache_free+0x52/0xc0
> softirqs last enabled at (5277798): [<c0124173>] __do_softirq+0xa3/0xc0
> softirqs last disabled at (5277817): [<c01241d7>] do_softirq+0x47/0x50
>
> other info that might help us debug this:
> no locks held by swapper/0.
>
> stack backtrace:
> [<c0104fda>] show_trace_log_lvl+0x1a/0x30
> [<c0105c02>] show_trace+0x12/0x20
> [<c0105d15>] dump_stack+0x15/0x20
> [<c013ccc3>] print_usage_bug+0x153/0x160
> [<c013d8b9>] mark_lock+0x449/0x620
> [<c013e824>] __lock_acquire+0x604/0x1030
> [<c013f2b1>] lock_acquire+0x61/0x80
> [<c02db9ac>] _spin_lock+0x2c/0x40
> [<c01dc487>] _atomic_dec_and_lock+0x17/0x60
> [<dced55fd>] put_rpccred+0x5d/0x100 [sunrpc]
> [<dcf6bf83>] nfs_free_delegation_callback+0x13/0x20 [nfs]
> [<c012f9ea>] __rcu_process_callbacks+0x6a/0x1c0
> [<c012fb52>] rcu_process_callbacks+0x12/0x30
> [<c0124218>] tasklet_action+0x38/0x80
> [<c0124125>] __do_softirq+0x55/0xc0
> [<c01241d7>] do_softirq+0x47/0x50
> [<c0124605>] irq_exit+0x35/0x40
> [<c0112463>] smp_apic_timer_interrupt+0x43/0x80
> [<c0104a77>] apic_timer_interrupt+0x33/0x38
> [<c02690df>] cpuidle_idle_call+0x6f/0x90
> [<c01023c3>] cpu_idle+0x43/0x70
> [<c02d8c27>] rest_init+0x47/0x50
> [<c03bcb6a>] start_kernel+0x22a/0x2b0
> [<00000000>] 0x0
> =======================
That is a different matter. I assume the attached patch should suffice
to fix the above problem.
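
What the report is saying: rpc_credcache_lock was first taken in
process context with softirqs enabled ({softirq-on-W}), and is now
being taken from an RCU callback running in a tasklet, i.e. softirq
context ({in-softirq-W}); a softirq arriving while the lock is held in
process context would deadlock. A sketch of the shape of such a fix,
inferred from the attachment name rather than copied from it - the
function and field names follow the trace above, the body is an
assumption:

    /*
     * Drop the cred in process context, where taking
     * rpc_credcache_lock with softirqs enabled is fine, and leave
     * the RCU callback to do nothing but free the structure.
     */
    static void nfs_free_delegation(struct nfs_delegation *delegation)
    {
            struct rpc_cred *cred = delegation->cred;

            delegation->cred = NULL;
            call_rcu(&delegation->rcu, nfs_free_delegation_callback);
            if (cred != NULL)
                    put_rpccred(cred);  /* process context, not softirq */
    }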
Trond
[Attachment: "linux-2.6.23-005-dont_call_rpc_putcred_from_rcu.dif" (message/rfc822, 2129 bytes)]