Message-ID: <0000013e8f295277-653bcbfa-e1cc-4c05-8e6d-eb6a5a661f6f-000000@email.amazonses.com>
Date: Fri, 10 May 2013 15:57:30 +0000
From: Christoph Lameter <cl@...ux.com>
To: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
cc: glommer@...allels.com, penberg@...nel.org,
linux-kernel@...r.kernel.org
Subject: Re: [linux-next-20130422] Bug in SLAB?
On Fri, 10 May 2013, Tetsuo Handa wrote:
> Tetsuo Handa wrote:
> > Can we manage with allocating only 26 elements when MAX_ORDER + PAGE_SHIFT > 26
> > (e.g. PAGE_SIZE == 256 * 1024) ?
> >
> > Can kmalloc_index()/kmalloc_size()/kmalloc_slab() etc. work correctly when
> > MAX_ORDER + PAGE_SHIFT > 26 (e.g. PAGE_SIZE == 256 * 1024) ?
> >
> Today I compared SLAB/SLUB code. If I understood correctly, the line
>
> if (size <= 64 * 1024 * 1024) return 26;
>
> in kmalloc_index() is redundant (in fact, kmalloc_caches[26] is out of range)
> and conflicts with what the comment
True, we could remove it, but it does not hurt: size is bounded before any
call to kmalloc_index().
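Roughly, the inline fast path does something like this before kmalloc_index()
is ever reached (a paraphrase of the 3.10-era SLUB path, not a verbatim
quote):

	if (size > KMALLOC_MAX_CACHE_SIZE)
		return kmalloc_large(size, flags); /* page allocator path */
	index = kmalloc_index(size);               /* only sees bounded sizes */

So the "return 26" branch is unreachable rather than a live bug.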
> * The largest kmalloc size supported by the SLAB allocators is
> * 32 megabyte (2^25) or the maximum allocatable page order if that is
> * less than 32 MB.
>
> says, and 0 <= kmalloc_index() <= 25 is always true for SLAB and
> 0 <= kmalloc_index() <= PAGE_SHIFT+1 is always true for SLUB.
>
> Therefore, towards 3.10-rc1,
>
> > > - for (i = 1; i < PAGE_SHIFT + MAX_ORDER; i++) {
> > > + for (i = 1; i =< KMALLOC_SHIFT_HIGH; i++) {
> >
> -+ for (i = 1; i =< KMALLOC_SHIFT_HIGH; i++) {
> ++ for (i = 1; i <= KMALLOC_SHIFT_HIGH; i++) {
>
> would be the last fix for me. (I don't know why kmalloc_caches[0] is excluded.)
Yep. kmalloc_caches[0] is not used. The first caches to be used are indices
1 and 2, which are the non-power-of-two caches. Index 3 and higher are the
power-of-two caches.
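To make that layout concrete, here is the top of kmalloc_index() (a trimmed
excerpt of include/linux/slab.h as I recall it in linux-next; check the tree
for the exact conditions):

	static __always_inline int kmalloc_index(size_t size)
	{
		if (!size)
			return 0;
		if (size <= KMALLOC_MIN_SIZE)
			return KMALLOC_SHIFT_LOW;
		if (KMALLOC_MIN_SIZE <= 32 && size > 64 && size <= 96)
			return 1;	/* the 96 byte cache */
		if (KMALLOC_MIN_SIZE <= 64 && size > 128 && size <= 192)
			return 2;	/* the 192 byte cache */
		if (size <= 8) return 3;	/* power-of-two caches start here */
		...
	}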
Subject: SLAB: Fix init_lock_keys
init_lock_keys() goes too far when initializing values in kmalloc_caches,
because it assumes that the size of the kmalloc array goes up to MAX_ORDER.
However, the size of the kmalloc array for SLAB may be restricted due to
increased page sizes or CONFIG_FORCE_MAX_ZONEORDER.
Reported-by: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Signed-off-by: Christoph Lameter <cl@...ux.com>
Index: linux/mm/slab.c
===================================================================
--- linux.orig/mm/slab.c 2013-05-09 09:06:20.000000000 -0500
+++ linux/mm/slab.c 2013-05-09 09:08:08.338606055 -0500
@@ -565,7 +565,7 @@ static void init_node_lock_keys(int q)
if (slab_state < UP)
return;
- for (i = 1; i < PAGE_SHIFT + MAX_ORDER; i++) {
+ for (i = 1; i <= KMALLOC_SHIFT_HIGH; i++) {
struct kmem_cache_node *n;
struct kmem_cache *cache = kmalloc_caches[i];
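For reference, the SLAB definition of KMALLOC_SHIFT_HIGH that the loop now
uses as its bound looks like this (from include/linux/slab.h; whitespace may
differ):

	#define KMALLOC_SHIFT_HIGH	((MAX_ORDER + PAGE_SHIFT - 1) <= 25 ? \
					(MAX_ORDER + PAGE_SHIFT - 1) : 25)

so the loop stays within the array regardless of page size or
CONFIG_FORCE_MAX_ZONEORDER.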