Message-ID: <Y0vKd8/9lrI8T+Wk@hyeyoo>
Date: Sun, 16 Oct 2022 18:10:15 +0900
From: Hyeonggon Yoo <42.hyeyoo@...il.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Guenter Roeck <linux@...ck-us.net>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 10/17] mm/slab: kmalloc: pass requests larger than
order-1 page to page allocator
On Sat, Oct 15, 2022 at 09:39:08PM +0200, Vlastimil Babka wrote:
> On 10/15/22 01:48, Hyeonggon Yoo wrote:
> > On Fri, Oct 14, 2022 at 01:58:18PM -0700, Guenter Roeck wrote:
> >> Hi,
> >>
> >> On Wed, Aug 17, 2022 at 07:18:19PM +0900, Hyeonggon Yoo wrote:
> >> > There is not much benefit for serving large objects in kmalloc().
> >> > Let's pass large requests to page allocator like SLUB for better
> >> > maintenance of common code.
> >> >
> >> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
> >> > Reviewed-by: Vlastimil Babka <vbabka@...e.cz>
> >> > ---
> >>
> >> This patch results in a WARNING backtrace in all mips and sparc64
> >> emulations.
> >>
> >> ------------[ cut here ]------------
> >> WARNING: CPU: 0 PID: 0 at mm/slab_common.c:729 kmalloc_slab+0xc0/0xdc
> >> Modules linked in:
> >> CPU: 0 PID: 0 Comm: swapper Not tainted 6.0.0-11990-g9c9155a3509a #1
> >> Stack : ffffffff 801b2a18 80dd0000 00000004 00000000 00000000 81023cd4 00000000
> >> 81040000 811a9930 81040000 8104a628 81101833 00000001 81023c78 00000000
> >> 00000000 00000000 80f5d858 81023b98 00000001 00000023 00000000 ffffffff
> >> 00000000 00000064 00000002 81040000 81040000 00000001 80f5d858 000002d9
> >> 00000000 00000000 80000000 80002000 00000000 00000000 00000000 00000000
> >> ...
> >> Call Trace:
> >> [<8010a2bc>] show_stack+0x38/0x118
> >> [<80cf5f7c>] dump_stack_lvl+0xac/0x104
> >> [<80130d7c>] __warn+0xe0/0x224
> >> [<80cdba5c>] warn_slowpath_fmt+0x64/0xb8
> >> [<8028c058>] kmalloc_slab+0xc0/0xdc
> >>
> >> irq event stamp: 0
> >> hardirqs last enabled at (0): [<00000000>] 0x0
> >> hardirqs last disabled at (0): [<00000000>] 0x0
> >> softirqs last enabled at (0): [<00000000>] 0x0
> >> softirqs last disabled at (0): [<00000000>] 0x0
> >> ---[ end trace 0000000000000000 ]---
> >>
> >> Guenter
> >
> > Hi.
> >
> > Thank you so much for this report!
> >
> > Hmm, so SLAB tries to find a kmalloc cache for the freelist index array
> > using kmalloc_slab() directly, and that becomes problematic when the size
> > of the array is larger than PAGE_SIZE * 2.
>
> Hmm interesting, did you find out how exactly that can happen in practice,
> or what's special about mips and sparc64 here?
IIUC if the page size is large, the number of objects per slab is quite
large as well, so the possibility of failing to use an objfreelist slab is
higher, and then it tries to use an off-slab freelist.
> Because normally
> calculate_slab_order() will only go up to slab_max_order, which AFAICS can
> only go up to SLAB_MAX_ORDER_HI, thus 1, unless there's a boot command line
> override.
AFAICS with the mips default configuration and without setting
slab_max_order, SLAB does not actually use a very big freelist index
array, but it still hits the warning because of tricky logic.
For example, if the condition is true at
> if (freelist_cache->size > cachep->size / 2)
> continue;
or at (before kmalloc is up, in the case of kmem_cache)
> freelist_cache = kmalloc_slab(freelist_size, 0u);
> if (!freelist_cache)
> continue;
it keeps increasing gfporder until 'num' becomes larger than
SLAB_OBJ_MAX_NUM, regardless of slab_max_order.
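To show where the runaway happens, here is a condensed model of the loop
(my paraphrase of calculate_slab_order(); objs_per_slab() and
off_slab_freelist_unusable() are made-up stand-ins for the real checks,
and several details are elided):

for (unsigned int gfporder = 0; gfporder <= KMALLOC_MAX_ORDER; gfporder++) {
	unsigned int num = objs_per_slab(gfporder);	/* hypothetical */

	if (num > SLAB_OBJ_MAX_NUM)
		break;			/* only hard stop on this path */

	if (off_slab_freelist_unusable(num))	/* hypothetical: covers the
						   'continue' cases above */
		continue;		/* retry with a LARGER gfporder */

	/* Found something acceptable - save it away */

	if (gfporder >= slab_max_order)
		break;			/* skipped by every 'continue' above,
					   so it never caps the search */
}

Every 'continue' jumps past the slab_max_order check at the bottom of the
loop body, which is why the order can keep growing.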
I think making the change below would be more robust.
diff --git a/mm/slab.c b/mm/slab.c
index d1f6e2c64c2e..1321aca1887c 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1679,7 +1679,7 @@ static size_t calculate_slab_order(struct kmem_cache *cachep,
 			} else {
 				freelist_cache = kmalloc_slab(freelist_size, 0u);
 				if (!freelist_cache)
-					continue;
+					break;
 				freelist_cache_size = freelist_cache->size;
 
 				/*
@@ -1692,7 +1692,7 @@ static size_t calculate_slab_order(struct kmem_cache *cachep,
 
 		/* check if off slab has enough benefit */
 		if (freelist_cache_size > cachep->size / 2)
-			continue;
+			break;
 	}
 
 	/* Found something acceptable - save it away */
> And if we have two pages for objects, surely even with small objects they
> can't be smaller than freelist_idx_t, so if the number of objects fits into
> two pages (order 1), then the freelist array should also fit in two pages?
That's right, but under certain conditions it seems to go larger than
slab_max_order (from code inspection).
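Spelling out your two-page argument as arithmetic (my sketch, using your
assumption that obj_size >= sizeof(freelist_idx_t)), for an order-1 slab:

	num           <= 2 * PAGE_SIZE / obj_size
	freelist_size  = num * sizeof(freelist_idx_t)
	              <= 2 * PAGE_SIZE

So as long as the loop stays at gfporder <= 1 the index array indeed fits;
the problem is only that the 'continue' paths let gfporder grow past that.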
>
> Thanks,
> Vlastimil
>
> > Will send a fix soon.
> >
--
Thanks,
Hyeonggon