Message-ID: <51275364.3010908@jp.fujitsu.com>
Date: Fri, 22 Feb 2013 20:15:48 +0900
From: Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Glauber Costa <glommer@...allels.com>
CC: linux-mm@...ck.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org, Christoph Lameter <cl@...ux.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Tejun Heo <tj@...nel.org>, Pekka Enberg <penberg@...nel.org>
Subject: Re: [PATCH] slub: correctly bootstrap boot caches
(2013/02/22 19:30), Glauber Costa wrote:
> After we create a boot cache, we may allocate from it before it is fully
> bootstrapped. This moves a page from the partial list to the cpu slab list.
> If this happens, the loop:
>
> list_for_each_entry(p, &n->partial, lru)
>
> that we use to scan all partial pages will yield nothing, and those pages
> will keep pointing to the boot cache, which is, of course, invalid. To avoid
> that, we should flush the cache so that the cpu slab is returned to the
> partial list.
>
> Although not verified in practice, I also note that it is not safe to scan
> the full list only when debugging is on. As unlikely as it is, it is
> theoretically possible for the pages to be full; if they are, they will
> become unreachable. Besides scanning the full list, we also need to make
> sure the pages actually end up there: the easiest way is to set the
> SLAB_STORE_USER debug flag on the boot caches.
>
> Signed-off-by: Glauber Costa <glommer@...allels.com>
> Reported-by: Steffen Michalke <StMichalke@....de>
> Cc: Christoph Lameter <cl@...ux.com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Tejun Heo <tj@...nel.org>
> Cc: Pekka Enberg <penberg@...nel.org>
> Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
>
You're quick :) The issue is fixed in my environment.
Tested-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>