Message-ID: <eppzf5gil6izcn6nnvzgzukagdikqnfxdvziga7ipnpl5eeern@i7jfzslklsu6>
Date: Mon, 25 Mar 2024 15:40:51 -0400
From: Kent Overstreet <kent.overstreet@...ux.dev>
To: Kees Cook <keescook@...omium.org>
Cc: Vlastimil Babka <vbabka@...e.cz>, Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>, David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>, Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>, Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
"GONG, Ruiqi" <gongruiqi@...weicloud.com>, Xiu Jianfeng <xiujianfeng@...wei.com>,
Suren Baghdasaryan <surenb@...gle.com>, Jann Horn <jannh@...gle.com>,
Matteo Rizzo <matteorizzo@...gle.com>, linux-kernel@...r.kernel.org, linux-hardening@...r.kernel.org
Subject: Re: [PATCH v2 4/9] slab: Introduce kmem_buckets_create()
On Tue, Mar 05, 2024 at 02:10:20AM -0800, Kees Cook wrote:
> Dedicated caches are available for fixed size allocations via
> kmem_cache_alloc(), but for dynamically sized allocations there is only
> the global kmalloc API's set of buckets available. This means it isn't
> possible to separate specific sets of dynamically sized allocations into
> a separate collection of caches.
>
> This leads to a use-after-free exploitation weakness in the Linux
> kernel since many heap memory spraying/grooming attacks depend on using
> userspace-controllable dynamically sized allocations to collide with
> fixed size allocations that end up in same cache.
>
> While CONFIG_RANDOM_KMALLOC_CACHES provides a probabilistic defense
> against these kinds of "type confusion" attacks, including for fixed
> same-size heap objects, we can create a complementary deterministic
> defense for dynamically sized allocations.
>
> In order to isolate user-controllable sized allocations from system
> allocations, introduce kmem_buckets_create(), which behaves like
> kmem_cache_create(). (The next patch will introduce kmem_buckets_alloc(),
> which behaves like kmem_cache_alloc().)
>
> Allows for confining allocations to a dedicated set of sized caches
> (which have the same layout as the kmalloc caches).
>
> This can also be used in the future once codetag allocation annotations
> exist to implement per-caller allocation cache isolation[1] even for
> dynamic allocations.
>
> Link: https://lore.kernel.org/lkml/202402211449.401382D2AF@keescook [1]
> Signed-off-by: Kees Cook <keescook@...omium.org>
> ---
> Cc: Vlastimil Babka <vbabka@...e.cz>
> Cc: Christoph Lameter <cl@...ux.com>
> Cc: Pekka Enberg <penberg@...nel.org>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Roman Gushchin <roman.gushchin@...ux.dev>
> Cc: Hyeonggon Yoo <42.hyeyoo@...il.com>
> Cc: linux-mm@...ck.org
> ---
> include/linux/slab.h | 5 +++
> mm/slab_common.c | 72 ++++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 77 insertions(+)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index f26ac9a6ef9f..058d0e3cd181 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -493,6 +493,11 @@ void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
>  			   gfp_t gfpflags) __assume_slab_alignment __malloc;
>  void kmem_cache_free(struct kmem_cache *s, void *objp);
>  
> +kmem_buckets *kmem_buckets_create(const char *name, unsigned int align,
> +				  slab_flags_t flags,
> +				  unsigned int useroffset, unsigned int usersize,
> +				  void (*ctor)(void *));
I'd prefer an API that initializes an object over one that allocates it
- that is, prefer

  kmem_buckets_init(kmem_buckets *buckets, ...)

By forcing the kmem_buckets to be separately allocated, you're adding a
pointer deref to every access.
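Something along these lines (rough sketch only; the struct and function
names below are made up, and I'm assuming kmem_buckets_init() takes the
same arguments as kmem_buckets_create() plus the object to initialize):

  /*
   * Hypothetical init-style API: same args as kmem_buckets_create(),
   * but the caller provides the storage.
   */
  int kmem_buckets_init(kmem_buckets *b, const char *name, unsigned int align,
                        slab_flags_t flags, unsigned int useroffset,
                        unsigned int usersize, void (*ctor)(void *));

  /* Made-up example user: embed the buckets, no extra pointer deref: */
  struct msg_subsys {
          kmem_buckets    buckets;
          /* ... */
  };

  static int msg_subsys_init(struct msg_subsys *m)
  {
          return kmem_buckets_init(&m->buckets, "msg", 0, 0, 0, 0, NULL);
  }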
That would also allow for kmem_buckets to be lazily initialized, which
would play nicely with declaring the kmem_buckets in the alloc_hooks()
macro.
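Roughly, just to sketch the idea (macro name made up; ignoring init
races, and guessing that kmem_buckets_alloc() from the next patch ends
up taking (buckets, size, gfp)):

  #define kmalloc_buckets(size, gfp)                                    \
  ({                                                                    \
          static kmem_buckets __b;                                      \
          static bool __b_ready;                                        \
                                                                        \
          /* lazily set up a per-callsite set of buckets */             \
          if (unlikely(!__b_ready))                                     \
                  __b_ready = !kmem_buckets_init(&__b, __func__,        \
                                                 0, 0, 0, 0, NULL);     \
          kmem_buckets_alloc(&__b, size, gfp);                          \
  })

so each callsite gets its own set of buckets without having to create
them all up front.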
I'm curious what all the arguments to kmem_buckets_create() are needed
for, if this is supposed to be a replacement for kmalloc() users.
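For a plain kmalloc() replacement I'd expect essentially all of them to
be zero/NULL, e.g. (example name made up, and guessing at the
kmem_buckets_alloc() signature since that's only in the next patch):

  kmem_buckets *b = kmem_buckets_create("msgbufs", 0, 0, 0, 0, NULL);

  p = kmem_buckets_alloc(b, len, GFP_KERNEL);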