Message-ID: <20140206083849.GS8874@twins.programming.kicks-ass.net>
Date:	Thu, 6 Feb 2014 09:38:49 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Steven Rostedt <rostedt@...dmis.org>
Cc:	Vladimir Davydov <vdavydov@...allels.com>, rientjes@...gle.com,
	akpm@...ux-foundation.org, penberg@...nel.org, cl@...ux.com,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] slub: fix false-positive lockdep warning in
 free_partial()

On Wed, Feb 05, 2014 at 02:58:37PM -0500, Steven Rostedt wrote:
> On Wed, Feb 05, 2014 at 12:15:33PM +0400, Vladimir Davydov wrote:
> > Commit c65c1877bd68 ("slub: use lockdep_assert_held") requires
> > remove_partial() to be called with n->list_lock held, but free_partial()
> > called from kmem_cache_close() on cache destruction does not follow this
> > rule, leading to a warning:
> > 
> >   WARNING: CPU: 0 PID: 2787 at mm/slub.c:1536 __kmem_cache_shutdown+0x1b2/0x1f0()
> >   Modules linked in:
> >   CPU: 0 PID: 2787 Comm: modprobe Tainted: G        W    3.14.0-rc1-mm1+ #1
> >   Hardware name:
> >    0000000000000600 ffff88003ae1dde8 ffffffff816d9583 0000000000000600
> >    0000000000000000 ffff88003ae1de28 ffffffff8107c107 0000000000000000
> >    ffff880037ab2b00 ffff88007c240d30 ffffea0001ee5280 ffffea0001ee52a0
> >   Call Trace:
> >    [<ffffffff816d9583>] dump_stack+0x51/0x6e
> >    [<ffffffff8107c107>] warn_slowpath_common+0x87/0xb0
> >    [<ffffffff8107c145>] warn_slowpath_null+0x15/0x20
> >    [<ffffffff811c7fe2>] __kmem_cache_shutdown+0x1b2/0x1f0
> >    [<ffffffff811908d3>] kmem_cache_destroy+0x43/0xf0
> >    [<ffffffffa013a123>] xfs_destroy_zones+0x103/0x110 [xfs]
> >    [<ffffffffa0192b54>] exit_xfs_fs+0x38/0x4e4 [xfs]
> >    [<ffffffff811036fa>] SyS_delete_module+0x19a/0x1f0
> >    [<ffffffff816dfcd8>] ? retint_swapgs+0x13/0x1b
> >    [<ffffffff810d2125>] ? trace_hardirqs_on_caller+0x105/0x1d0
> >    [<ffffffff81359efe>] ? trace_hardirqs_on_thunk+0x3a/0x3f
> >    [<ffffffff816e8539>] system_call_fastpath+0x16/0x1b
> > 
> > Although this cannot actually result in a race, because on cache
> > destruction there should not be any concurrent frees or allocations from
> > the cache, let's add spin_lock/unlock to free_partial() just to keep
> > lockdep happy.
> 
> Really? We are adding a spin lock for a case where it is not needed just to
> quiet lockdep?

We do that in other places too; it's usually init code that thinks it
'knows' there is no concurrency. But since it's init code, it's not
performance critical in any way, shape, or form, so we just stick to the
'rules' and don't try to play games.

> Now if it really isn't needed, then why don't we do the following instead of
> adding the overhead of taking a lock?
> 
> static inline void
> __remove_partial(struct kmem_cache_node *n, struct page *page)
> {
> 	list_del(&page->lru);
> 	n->nr_partial--;
> }
> 
> static inline void remove_partial(struct kmem_cache_node *n,
> 				  struct page *page)
> {
> 	lockdep_assert_held(&n->list_lock);
> 	__remove_partial(n, page);
> }
> 
> And then just call __remove_partial() where we don't need to check
> whether the lock is held, with a big comment explaining why.
> 
> That, IMNSHO, is a much better solution.

I would say yes if there were a performance angle, but in this case you're
increasing the API footprint for an absolute slow path.
kmem_cache_destroy() isn't something we call (or should call) often.

That said, I don't care much; it's up to Christoph and Pekka.
