Message-ID: <56DEF08D.607@suse.cz>
Date: Tue, 8 Mar 2016 16:32:29 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Vineet Gupta <Vineet.Gupta1@...opsys.com>, linux-mm@...ck.org
Cc: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Noam Camus <noamc@...hip.com>, stable@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-snps-arc@...ts.infradead.org
Subject: Re: [PATCH] mm: slub: Ensure that slab_unlock() is atomic
On 03/08/2016 03:30 PM, Vineet Gupta wrote:
> We observed livelocks on an ARC SMP setup when running hackbench with SLUB.
> This hardware configuration lacks atomic instructions (LLOCK/SCOND), thus
> the kernel resorts to a central @smp_bitops_lock to protect any R-M-W ops
> such as test_and_set_bit()
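
For context, on such a configuration every "atomic" bitop boils down to a
plain read-modify-write done under that one global spinlock. A minimal
sketch of what test_and_set_bit() then amounts to (illustrative only, not
the actual arch/arc code):

static DEFINE_RAW_SPINLOCK(smp_bitops_lock);	/* the central R-M-W lock */

static inline int test_and_set_bit(unsigned long nr,
				   volatile unsigned long *addr)
{
	unsigned long old, flags;
	unsigned long mask = 1UL << (nr % BITS_PER_LONG);

	addr += nr / BITS_PER_LONG;

	/* the whole read-modify-write is serialized by the lock */
	raw_spin_lock_irqsave(&smp_bitops_lock, flags);
	old = *addr;
	*addr = old | mask;
	raw_spin_unlock_irqrestore(&smp_bitops_lock, flags);

	return (old & mask) != 0;
}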
Sounds like this architecture should then redefine __clear_bit_unlock
and perhaps other non-atomic __X_bit() variants to be atomic, and not
defer this requirement to places that use the API?
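
Concretely, something along these lines, i.e. the arch overrides the
"non-atomic" unlock with the atomic one (an untested sketch, just to
illustrate the idea):

/*
 * On a !LLOCK/SCOND config the unlock-side clear must go through the
 * same smp_bitops_lock serialization as test_and_set_bit(), so fall
 * back to the atomic variant:
 */
static inline void __clear_bit_unlock(unsigned long nr,
				      volatile unsigned long *addr)
{
	clear_bit_unlock(nr, addr);
}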
> The spinlock itself is implemented using the Atomic [EX]change instruction,
> which is always available.
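
(For those unfamiliar with ARC: an EX-based lock acquire is roughly the
following; an illustrative sketch, not the actual arch code.)

static inline void arch_spin_lock(arch_spinlock_t *lock)
{
	/*
	 * EX atomically swaps a register with a memory location, so
	 * keep swapping in "locked" until we read back "unlocked".
	 */
	while (xchg(&lock->slock, 1) != 0)
		cpu_relax();

	smp_mb();	/* acquire ordering */
}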
>
> The race happened when both cores tried to slab_lock() the same page.
>
>          c1                        c0
>     -----------               -----------
>     slab_lock
>                                slab_lock
>     slab_unlock
>                                Not observing the unlock
>
> This in turn happened because slab_unlock() doesn't serialize properly
> (doesn't use an atomic clear) with a concurrently running
> slab_lock()->test_and_set_bit()
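
Spelled out: c0's test_and_set_bit() does its read-modify-write under
smp_bitops_lock, while c1's __bit_spin_unlock() ends up in __clear_bit(),
a plain unlocked read-modify-write. What the two CPUs effectively execute
(a simplified sketch):

static void c0_slab_lock(unsigned long *flags)
{
	/* whole R-M-W serialized by smp_bitops_lock */
	while (test_and_set_bit(PG_locked, flags))
		cpu_relax();
}

static void c1_slab_unlock(unsigned long *flags)
{
	/*
	 * __clear_bit(): load, clear, store -- takes no lock, so if c0
	 * loaded the old value before this store, c0 writes back a value
	 * with PG_locked still set and the unlock is lost.
	 */
	__clear_bit(PG_locked, flags);
}

c0 then keeps seeing the bit set and spins forever, while c1 has moved on
believing the page is unlocked.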
>
> Cc: Christoph Lameter <cl@...ux.com>
> Cc: Pekka Enberg <penberg@...nel.org>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Noam Camus <noamc@...hip.com>
> Cc: <stable@...r.kernel.org>
> Cc: <linux-mm@...ck.org>
> Cc: <linux-kernel@...r.kernel.org>
> Cc: <linux-snps-arc@...ts.infradead.org>
> Signed-off-by: Vineet Gupta <vgupta@...opsys.com>
> ---
> mm/slub.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index d8fbd4a6ed59..b7d345a508dc 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -345,7 +345,7 @@ static __always_inline void slab_lock(struct page *page)
>  static __always_inline void slab_unlock(struct page *page)
>  {
>  	VM_BUG_ON_PAGE(PageTail(page), page);
> -	__bit_spin_unlock(PG_locked, &page->flags);
> +	bit_spin_unlock(PG_locked, &page->flags);
>  }
>
>  static inline void set_page_slub_counters(struct page *page, unsigned long counters_new)
>
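
For reference, the semantic difference the one-liner makes, simplified
from include/linux/bit_spinlock.h:

static inline void bit_spin_unlock(int bitnum, unsigned long *addr)
{
	clear_bit_unlock(bitnum, addr);	  /* atomic clear with release */
	preempt_enable();
}

static inline void __bit_spin_unlock(int bitnum, unsigned long *addr)
{
	__clear_bit_unlock(bitnum, addr); /* barrier + non-atomic __clear_bit() */
	preempt_enable();
}

With the spinlock-backed bitops sketched above, only the atomic variant
takes smp_bitops_lock, which is what makes the one-liner work on this
hardware.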