Message-Id: <20240807235433.work.317-kees@kernel.org>
Date: Wed, 7 Aug 2024 16:54:41 -0700
From: Kees Cook <kees@...nel.org>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Kees Cook <kees@...nel.org>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
"Gustavo A . R . Silva" <gustavoars@...nel.org>,
Bill Wendling <morbo@...gle.com>,
Justin Stitt <justinstitt@...gle.com>,
Jann Horn <jannh@...gle.com>,
Przemek Kitszel <przemyslaw.kitszel@...el.com>,
Marco Elver <elver@...gle.com>,
linux-mm@...ck.org,
Nathan Chancellor <nathan@...nel.org>,
Nick Desaulniers <ndesaulniers@...gle.com>,
linux-kernel@...r.kernel.org,
llvm@...ts.linux.dev,
linux-hardening@...r.kernel.org
Subject: [PATCH] slab: Introduce kmalloc_obj() and family

Introduce type-aware kmalloc-family helpers to replace the common
idioms for single, array, and flexible object allocations:

  ptr = kmalloc(sizeof(*ptr), gfp);
  ptr = kzalloc(sizeof(*ptr), gfp);
  ptr = kmalloc_array(count, sizeof(*ptr), gfp);
  ptr = kcalloc(count, sizeof(*ptr), gfp);
  ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);

These become, respectively:

  kmalloc_obj(p, gfp);
  kzalloc_obj(p, gfp);
  kmalloc_obj(p, count, gfp);
  kzalloc_obj(p, count, gfp);
  kmalloc_obj(p, flex_member, count, gfp);
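
For illustration, using a made-up struct (the names below are
hypothetical, not taken from any existing caller), a flexible-array
conversion looks like:

  struct report {
          int count;
          u32 records[] __counted_by(count);
  } *r;

  /* Before: the layout must be spelled out by hand. */
  r = kmalloc(struct_size(r, records, n), GFP_KERNEL);

  /* After: the struct type is taken from the type of "r" itself. */
  kmalloc_obj(r, records, n, GFP_KERNEL);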

These each return the size of the allocation, so that other common
idioms can be converted easily as well. For example:

  info->size = struct_size(ptr, flex_member, count);
  ptr = kmalloc(info->size, gfp);

becomes:

  info->size = kmalloc_obj(ptr, flex_member, count, gfp);
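
Since a failed allocation results in a reported size of 0, the return
value can also be used directly for error handling at the call site,
e.g.:

  info->size = kmalloc_obj(ptr, flex_member, count, gfp);
  if (!info->size)
          return -ENOMEM;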

Internal introspection of the allocated type also becomes possible,
allowing for alignment-aware choices and future hardening work. For
example, __alignof(*ptr) could be passed as an argument to the internal
allocators so that appropriate/efficient alignment choices can be made,
or per-allocation offset randomization within a bucket could be chosen
in a way that does not break alignment requirements.
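
A minimal sketch of the alignment direction, purely illustrative (the
__kmalloc_obj_aligned() entry point named here is hypothetical and not
part of this patch):

  /* Hypothetical internal allocator taking an explicit alignment: */
  void *__kmalloc_obj_aligned(size_t size, size_t align, gfp_t flags);

  #define kmalloc_obj_aligned(P, COUNT, FLAGS)                         \
  ({                                                                   \
          size_t __obj_size = size_mul(sizeof(*P), COUNT);             \
          void *__obj_ptr;                                             \
          /* The macro knows the type, so it can forward alignment. */ \
          (P) = __obj_ptr = __kmalloc_obj_aligned(__obj_size,          \
                                                  __alignof(*P),       \
                                                  FLAGS);              \
          if (!__obj_ptr)                                              \
                  __obj_size = 0;                                      \
          __obj_size;                                                  \
  })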

Additionally, once __builtin_get_counted_by() is added by GCC[1] and
Clang[2], it will be possible to automatically set the counter member
of a struct's counted_by FAM, further eliminating open-coded redundant
initializations, and to internally reject "too large" allocations based
on the maximum value representable by the counter variable's type:

  if (count > type_max(ptr->flex_count))
          fail...;
  info->size = struct_size(ptr, flex_member, count);
  ptr = kmalloc(info->size, gfp);
  ptr->flex_count = count;

becomes (i.e. unchanged from the earlier example):

  info->size = kmalloc_obj(ptr, flex_member, count, gfp);
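
A rough sketch of that future step, assuming the builtin keeps the
proposed name and semantics (returning a pointer to FAM's counter
member) and that FAM actually carries a counted_by annotation; the
no-annotation case would need an additional compile-time guard, and the
type_max() "too large" rejection would be performed at the same point:

  /* Inside the flexible-array helper, after a successful allocation: */
  if (__obj_ptr) {
          /* Proposed builtin: yields a pointer to FAM's counter member. */
          typeof(__builtin_get_counted_by(P->FAM)) __counter =
                  __builtin_get_counted_by(P->FAM);

          *__counter = COUNT;
  }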

Converting all of the simple code patterns found via Coccinelle[3]
shows what could be replaced immediately (saving roughly 1,500 lines):

  7040 files changed, 14128 insertions(+), 15557 deletions(-)

Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=116016 [1]
Link: https://github.com/llvm/llvm-project/issues/99774 [2]
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/kmalloc_obj-assign-size.cocci [3]
Signed-off-by: Kees Cook <kees@...nel.org>
---
Cc: Vlastimil Babka <vbabka@...e.cz>
Cc: Christoph Lameter <cl@...ux.com>
Cc: Pekka Enberg <penberg@...nel.org>
Cc: David Rientjes <rientjes@...gle.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@...il.com>
Cc: Gustavo A. R. Silva <gustavoars@...nel.org>
Cc: Bill Wendling <morbo@...gle.com>
Cc: Justin Stitt <justinstitt@...gle.com>
Cc: Jann Horn <jannh@...gle.com>
Cc: Przemek Kitszel <przemyslaw.kitszel@...el.com>
Cc: Marco Elver <elver@...gle.com>
Cc: linux-mm@...ck.org
---
include/linux/slab.h | 38 ++++++++++++++++++++++++++++++++++++++
1 file changed, 38 insertions(+)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index eb2bf4629157..46801c28908e 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -686,6 +686,44 @@ static __always_inline __alloc_size(1) void *kmalloc_noprof(size_t size, gfp_t f
}
#define kmalloc(...) alloc_hooks(kmalloc_noprof(__VA_ARGS__))
+#define __alloc_obj3(ALLOC, P, COUNT, FLAGS) \
+({ \
+ size_t __obj_size = size_mul(sizeof(*P), COUNT); \
+ void *__obj_ptr; \
+ (P) = __obj_ptr = ALLOC(__obj_size, FLAGS); \
+ if (!__obj_ptr) \
+ __obj_size = 0; \
+ __obj_size; \
+})
+
+#define __alloc_obj2(ALLOC, P, FLAGS) __alloc_obj3(ALLOC, P, 1, FLAGS)
+
+#define __alloc_obj4(ALLOC, P, FAM, COUNT, FLAGS) \
+({ \
+ size_t __obj_size = struct_size(P, FAM, COUNT); \
+ void *__obj_ptr; \
+ (P) = __obj_ptr = ALLOC(__obj_size, FLAGS); \
+ if (!__obj_ptr) \
+ __obj_size = 0; \
+ __obj_size; \
+})
+
+#define kmalloc_obj(...) \
+ CONCATENATE(__alloc_obj, \
+ COUNT_ARGS(__VA_ARGS__))(kmalloc, __VA_ARGS__)
+
+#define kzalloc_obj(...) \
+ CONCATENATE(__alloc_obj, \
+ COUNT_ARGS(__VA_ARGS__))(kzalloc, __VA_ARGS__)
+
+#define kvmalloc_obj(...) \
+ CONCATENATE(__alloc_obj, \
+ COUNT_ARGS(__VA_ARGS__))(kvmalloc, __VA_ARGS__)
+
+#define kvzalloc_obj(...) \
+ CONCATENATE(__alloc_obj, \
+ COUNT_ARGS(__VA_ARGS__))(kvzalloc, __VA_ARGS__)
+
#define kmem_buckets_alloc(_b, _size, _flags) \
alloc_hooks(__kmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))
--
2.34.1