Message-ID: <1ddb539a-79ed-d992-76cf-061acb4df11e@huaweicloud.com>
Date: Sat, 17 Aug 2024 09:30:58 +0800
From: Xiu Jianfeng <xiujianfeng@...weicloud.com>
To: Kees Cook <kees@...nel.org>, Vlastimil Babka <vbabka@...e.cz>
Cc: Suren Baghdasaryan <surenb@...gle.com>,
Kent Overstreet <kent.overstreet@...ux.dev>, Christoph Lameter
<cl@...ux.com>, Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>, Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
"GONG, Ruiqi" <gongruiqi@...weicloud.com>, Jann Horn <jannh@...gle.com>,
Matteo Rizzo <matteorizzo@...gle.com>, jvoisin <julien.voisin@...tri.org>,
linux-kernel@...r.kernel.org, linux-hardening@...r.kernel.org
Subject: Re: [PATCH 5/5] slab: Allocate and use per-call-site caches
Hi Kees,
On 2024/8/9 15:33, Kees Cook wrote:
> Use separate per-call-site kmem_cache or kmem_buckets. These are
> allocated on demand to avoid wasting memory for unused caches.
>
> A few caches need to be allocated very early to support allocating the
> caches themselves: kstrdup(), kvasprintf(), and pcpu_mem_zalloc(). Any
> GFP_ATOMIC allocations are currently left to be allocated from
> KMALLOC_NORMAL.
>
> With a distro config, /proc/slabinfo grows from ~400 entries to ~2200.
>
> Since this feature (CONFIG_SLAB_PER_SITE) is redundant to
> CONFIG_RANDOM_KMALLOC_CACHES, mark it as incompatible. Add Kconfig help
> text that compares the features.
>
> Improvements needed:
> - Retain call site gfp flags in alloc_tag meta field to:
> - pre-allocate all GFP_ATOMIC caches (since their caches cannot
> be allocated on demand unless we want them to be GFP_ATOMIC
> themselves...)
> - Separate MEMCG allocations as well
> - Allocate individual caches within kmem_buckets on demand to
> further reduce memory usage overhead.
>
> Signed-off-by: Kees Cook <kees@...nel.org>
> ---
> Cc: Suren Baghdasaryan <surenb@...gle.com>
> Cc: Kent Overstreet <kent.overstreet@...ux.dev>
> Cc: Vlastimil Babka <vbabka@...e.cz>
> Cc: Christoph Lameter <cl@...ux.com>
> Cc: Pekka Enberg <penberg@...nel.org>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Roman Gushchin <roman.gushchin@...ux.dev>
> Cc: Hyeonggon Yoo <42.hyeyoo@...il.com>
> Cc: linux-mm@...ck.org
> ---
> include/linux/alloc_tag.h | 8 +++
> lib/alloc_tag.c | 121 +++++++++++++++++++++++++++++++++++---
> mm/Kconfig | 19 +++++-
> mm/slab_common.c | 1 +
> mm/slub.c | 31 +++++++++-
> 5 files changed, 170 insertions(+), 10 deletions(-)
>
[...]
> diff --git a/mm/slub.c b/mm/slub.c
> index 3520acaf9afa..d14102c4b4d7 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4135,6 +4135,35 @@ void *__kmalloc_large_node_noprof(size_t size, gfp_t flags, int node)
> }
> EXPORT_SYMBOL(__kmalloc_large_node_noprof);
>
> +static __always_inline
> +struct kmem_cache *choose_slab(size_t size, kmem_buckets *b, gfp_t flags,
> + unsigned long caller)
> +{
> +#ifdef CONFIG_SLAB_PER_SITE
> + struct alloc_tag *tag = current->alloc_tag;
I hit a compile error here when testing this patchset with
CONFIG_MEM_ALLOC_PROFILING disabled:
mm/slub.c: In function ‘choose_slab’:
mm/slub.c:4187:40: error: ‘struct task_struct’ has no member named ‘alloc_tag’
 4187 |         struct alloc_tag *tag = current->alloc_tag;
      |                                        ^~
  CC      mm/page_reporting.o
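
For reference, the failing combination is simply:

  CONFIG_SLAB_PER_SITE=y
  # CONFIG_MEM_ALLOC_PROFILING is not set

since task_struct only has an ->alloc_tag member when
CONFIG_MEM_ALLOC_PROFILING is enabled.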
Maybe CONFIG_SLAB_PER_SITE should depend on CONFIG_MEM_ALLOC_PROFILING?
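Something like the following in mm/Kconfig would express that (an
untested sketch; this series already adds the SLAB_PER_SITE option
there, so I'm only showing the suggested extra line, not the real
surrounding text):

  config SLAB_PER_SITE
  	bool "..."
  	# suggested addition:
  	depends on MEM_ALLOC_PROFILING

Alternatively the current->alloc_tag access could be wrapped in its own
#ifdef CONFIG_MEM_ALLOC_PROFILING, but the Kconfig dependency seems
cleaner, since the per-site caches are keyed off alloc_tag data anyway.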
> +
> + if (!b && tag && tag->meta.sized &&
> + kmalloc_type(flags, caller) == KMALLOC_NORMAL &&
> + (flags & GFP_ATOMIC) != GFP_ATOMIC) {
> + void *p = READ_ONCE(tag->meta.cache);
> +
> + if (!p && slab_state >= UP) {
> + alloc_tag_site_init(&tag->ct, true);
> + p = READ_ONCE(tag->meta.cache);
> + }
> +
> + if (tag->meta.sized < SIZE_MAX) {
> + if (p)
> + return p;
> + /* Otherwise continue with default buckets. */
> + } else {
> + b = p;
> + }
> + }
> +#endif
> + return kmalloc_slab(size, b, flags, caller);
> +}
> +
> static __always_inline
> void *__do_kmalloc_node(size_t size, kmem_buckets *b, gfp_t flags, int node,
> unsigned long caller)
> @@ -4152,7 +4181,7 @@ void *__do_kmalloc_node(size_t size, kmem_buckets *b, gfp_t flags, int node,
> if (unlikely(!size))
> return ZERO_SIZE_PTR;
>
> - s = kmalloc_slab(size, b, flags, caller);
> + s = choose_slab(size, b, flags, caller);
>
> ret = slab_alloc_node(s, NULL, flags, node, caller, size);
> ret = kasan_kmalloc(s, ret, size, flags);