Message-Id: <1242289830.21646.5.camel@penberg-laptop>
Date: Thu, 14 May 2009 11:30:30 +0300
From: Pekka Enberg <penberg@...helsinki.fi>
To: Minchan Kim <minchan.kim@...il.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Nick Piggin <npiggin@...e.de>
Subject: Re: kernel BUG at mm/slqb.c:1411!
On Wed, 2009-05-13 at 17:37 +0900, Minchan Kim wrote:
> On Wed, 13 May 2009 16:42:37 +0900 (JST)
> KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com> wrote:
>
> Hmm. I don't know slqb well, so this is just my guess.
>
> We certainly increment l->nr_partial in __slab_alloc_page. Between the
> l->nr_partial++ and the call to __cache_list_get_page, someone else
> decrements l->nr_partial again. As a result, __cache_list_get_page
> returns NULL and we hit the VM_BUG_ON.
>
> The comment says:
>
> /* Protects nr_partial, nr_slabs, and partial */
> spinlock_t page_lock;
>
> If the comment is right, we have to hold l->page_lock here, don't we?
Makes sense. Nick? Motohiro-san, can you try this patch please?
Pekka
diff --git a/mm/slqb.c b/mm/slqb.c
index 5d0642f..29bb005 100644
--- a/mm/slqb.c
+++ b/mm/slqb.c
@@ -1399,12 +1399,14 @@ static noinline void *__slab_alloc_page(struct kmem_cache *s,
 	page->list = l;
 	spin_lock(&n->list_lock);
+	spin_lock(&l->page_lock);
 	l->nr_slabs++;
 	l->nr_partial++;
 	list_add(&page->lru, &l->partial);
 	slqb_stat_inc(l, ALLOC);
 	slqb_stat_inc(l, ALLOC_SLAB_NEW);
 
 	object = __cache_list_get_page(s, l);
+	spin_unlock(&l->page_lock);
 	spin_unlock(&n->list_lock);
 #endif
 }