Message-ID: <20221015114733.GA2931132@roeck-us.net>
Date: Sat, 15 Oct 2022 04:47:33 -0700
From: Guenter Roeck <linux@...ck-us.net>
To: Hyeonggon Yoo <42.hyeyoo@...il.com>
Cc: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Roman Gushchin <roman.gushchin@...ux.dev>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/slab: use kmalloc_node() for off slab freelist_idx_t
array allocation
On Sat, Oct 15, 2022 at 01:34:29PM +0900, Hyeonggon Yoo wrote:
> After commit d6a71648dbc0 ("mm/slab: kmalloc: pass requests larger than
> order-1 page to page allocator"), SLAB passes large (> PAGE_SIZE * 2)
> requests to the buddy allocator, as SLUB does.
>
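
For a simplified model of that dispatch (constants assumed for 4 KiB
pages, stand-in function name; not the kernel code itself):

	/* model: sizes above KMALLOC_MAX_CACHE_SIZE bypass kmalloc caches */
	#include <stdio.h>

	#define PAGE_SIZE		4096UL
	#define KMALLOC_MAX_CACHE_SIZE	(2UL * PAGE_SIZE)	/* order-1 page */

	static const char *alloc_path(unsigned long size)
	{
		return size > KMALLOC_MAX_CACHE_SIZE ?
			"buddy (page allocator)" : "kmalloc cache";
	}

	int main(void)
	{
		printf("%5lu bytes -> %s\n", 1024UL, alloc_path(1024UL));
		printf("%5lu bytes -> %s\n", 16384UL, alloc_path(16384UL));
		return 0;
	}
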
> SLAB has been using kmalloc caches to allocate the freelist_idx_t array
> for off-slab caches. But after that commit, freelist_size can be larger
> than KMALLOC_MAX_CACHE_SIZE.
>
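
For a sense of scale (hypothetical numbers, not taken from the report):
the off-slab freelist needs

	freelist_size = num * sizeof(freelist_idx_t)

so a slab holding num = 16384 objects with 2-byte indices needs 32768
bytes, while KMALLOC_MAX_CACHE_SIZE is now only 2 * PAGE_SIZE (8192
bytes with 4 KiB pages), leaving kmalloc_slab() with no cache to return.
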
> Instead of keeping a pointer to the kmalloc cache, use kmalloc_node()
> and only check whether the kmalloc cache is off-slab during
> calculate_slab_order(). If freelist_size > KMALLOC_MAX_CACHE_SIZE, the
> looping condition cannot occur, because the freelist_idx_t array is
> then allocated directly from the buddy allocator.
>
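
Two notes on the above, as I read it: the "looping condition" is that a
freelist cache which is itself off-slab would need its own off-slab
freelist allocation from cache_grow_begin(), recursing indefinitely; and
the buddy path must charge whole pages, which is why the benefit check
below uses PAGE_SIZE << get_order(freelist_size). A stand-alone sketch
of that rounding (assumed 4 KiB pages, minimal get_order() stand-in):

	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)

	/* minimal stand-in for the kernel's get_order() */
	static int get_order(unsigned long size)
	{
		int order = 0;

		while ((PAGE_SIZE << order) < size)
			order++;
		return order;
	}

	int main(void)
	{
		unsigned long freelist_size = 12000;	/* hypothetical */

		/* prints 16384: four whole pages, not the raw 12000 bytes */
		printf("%lu\n", PAGE_SIZE << get_order(freelist_size));
		return 0;
	}
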
> Reported-by: Guenter Roeck <linux@...ck-us.net>
> Fixes: d6a71648dbc0 ("mm/slab: kmalloc: pass requests larger than order-1 page to page allocator")
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
> ---
>
> @Guenter:
> This fixes the issue in my emulation.
> Could you please test it in your environment?
Yes, that fixes the problem for me.

Tested-by: Guenter Roeck <linux@...ck-us.net>

Thanks,
Guenter
>
>  include/linux/slab_def.h |  1 -
>  mm/slab.c                | 37 +++++++++++++++++++------------------
>  2 files changed, 19 insertions(+), 19 deletions(-)
>
> diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
> index e24c9aff6fed..f0ffad6a3365 100644
> --- a/include/linux/slab_def.h
> +++ b/include/linux/slab_def.h
> @@ -33,7 +33,6 @@ struct kmem_cache {
>  
>  	size_t colour;			/* cache colouring range */
>  	unsigned int colour_off;	/* colour offset */
> -	struct kmem_cache *freelist_cache;
>  	unsigned int freelist_size;
>  
>  	/* constructor func */
> diff --git a/mm/slab.c b/mm/slab.c
> index a5486ff8362a..d1f6e2c64c2e 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1619,7 +1619,7 @@ static void slab_destroy(struct kmem_cache *cachep, struct slab *slab)
>  	 * although actual page can be freed in rcu context
>  	 */
>  	if (OFF_SLAB(cachep))
> -		kmem_cache_free(cachep->freelist_cache, freelist);
> +		kfree(freelist);
>  }
>  
>  /*
> @@ -1671,21 +1671,27 @@ static size_t calculate_slab_order(struct kmem_cache *cachep,
>  		if (flags & CFLGS_OFF_SLAB) {
>  			struct kmem_cache *freelist_cache;
>  			size_t freelist_size;
> +			size_t freelist_cache_size;
>  
>  			freelist_size = num * sizeof(freelist_idx_t);
> -			freelist_cache = kmalloc_slab(freelist_size, 0u);
> -			if (!freelist_cache)
> -				continue;
> -
> -			/*
> -			 * Needed to avoid possible looping condition
> -			 * in cache_grow_begin()
> -			 */
> -			if (OFF_SLAB(freelist_cache))
> -				continue;
> +			if (freelist_size > KMALLOC_MAX_CACHE_SIZE) {
> +				freelist_cache_size = PAGE_SIZE << get_order(freelist_size);
> +			} else {
> +				freelist_cache = kmalloc_slab(freelist_size, 0u);
> +				if (!freelist_cache)
> +					continue;
> +				freelist_cache_size = freelist_cache->size;
> +
> +				/*
> +				 * Needed to avoid possible looping condition
> +				 * in cache_grow_begin()
> +				 */
> +				if (OFF_SLAB(freelist_cache))
> +					continue;
> +			}
>  
>  			/* check if off slab has enough benefit */
> -			if (freelist_cache->size > cachep->size / 2)
> +			if (freelist_cache_size > cachep->size / 2)
>  				continue;
>  		}
>  
> @@ -2061,11 +2067,6 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
>  		cachep->flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
>  #endif
>  
> -	if (OFF_SLAB(cachep)) {
> -		cachep->freelist_cache =
> -			kmalloc_slab(cachep->freelist_size, 0u);
> -	}
> -
>  	err = setup_cpu_cache(cachep, gfp);
>  	if (err) {
>  		__kmem_cache_release(cachep);
> @@ -2292,7 +2293,7 @@ static void *alloc_slabmgmt(struct kmem_cache *cachep,
>  		freelist = NULL;
>  	else if (OFF_SLAB(cachep)) {
>  		/* Slab management obj is off-slab. */
> -		freelist = kmem_cache_alloc_node(cachep->freelist_cache,
> +		freelist = kmalloc_node(cachep->freelist_size,
>  						 local_flags, nodeid);
>  	} else {
>  		/* We will use last bytes at the slab for freelist */
> --
> 2.32.0