Message-Id: <20240719192744.work.264-kees@kernel.org>
Date: Fri, 19 Jul 2024 12:27:48 -0700
From: Kees Cook <kees@...nel.org>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Kees Cook <kees@...nel.org>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
"Gustavo A . R . Silva" <gustavoars@...nel.org>,
Bill Wendling <morbo@...gle.com>,
Justin Stitt <justinstitt@...gle.com>,
Jann Horn <jannh@...gle.com>,
Przemek Kitszel <przemyslaw.kitszel@...el.com>,
Marco Elver <elver@...gle.com>,
linux-mm@...ck.org,
Nathan Chancellor <nathan@...nel.org>,
Nick Desaulniers <ndesaulniers@...gle.com>,
linux-kernel@...r.kernel.org,
llvm@...ts.linux.dev,
linux-hardening@...r.kernel.org
Subject: [PATCH] slab: Introduce kmalloc_obj() and family

Introduce type-aware kmalloc-family helpers to replace the common
idioms for single, array, and flexible object allocations:

	ptr = kmalloc(sizeof(*ptr), gfp);
	ptr = kzalloc(sizeof(*ptr), gfp);
	ptr = kmalloc_array(count, sizeof(*ptr), gfp);
	ptr = kcalloc(count, sizeof(*ptr), gfp);
	ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);

These become, respectively:

	kmalloc_obj(p, gfp);
	kzalloc_obj(p, gfp);
	kmalloc_obj(p, count, gfp);
	kzalloc_obj(p, count, gfp);
	kmalloc_obj(p, flex_member, count, gfp);
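
For example, with a hypothetical struct (names invented here purely for
illustration), the pointer's type alone determines the allocation size:

	struct bar { u64 val; };
	struct foo {
		unsigned long mode;
		u32 nr_items;
		struct bar items[];	/* flexible array member */
	};
	struct foo *p;
	size_t count = 8;		/* example element count */

	/* Allocates struct_size(p, items, count) bytes. */
	if (!kmalloc_obj(p, items, count, GFP_KERNEL))
		return -ENOMEM;
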
These each return the size of the allocation, so that other common
idioms can be converted easily as well. For example:

	info->size = struct_size(ptr, flex_member, count);
	ptr = kmalloc(info->size, gfp);

becomes:

	info->size = kmalloc_obj(ptr, flex_member, count, gfp);
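
As a sketch (not in the patch itself), a caller can then check for
failure through either the assigned pointer or the returned size, since
a failed allocation reports a size of 0:

	info->size = kmalloc_obj(ptr, flex_member, count, gfp);
	if (!ptr)
		return -ENOMEM;
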
Internal introspection of the allocated type also becomes possible,
allowing for alignment-aware choices and future hardening work. For
example, __alignof(*ptr) could be passed as an argument to the internal
allocators so that appropriate/efficient alignment choices can be made,
or per-allocation offset randomization within a bucket could be chosen
in a way that does not break alignment requirements.
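
As a rough sketch (hypothetical; not part of this patch), since the
macros see the typed pointer, an alignment-aware variant could compute
the needed alignment at the allocation site and hand it to a future
internal allocator:

	size_t __obj_align = __alignof(*(P));	/* type known from P */
	(P) = __obj_ptr = ALLOC(__obj_size, __obj_align, FLAGS);

where the three-argument ALLOC() stands in for an internal allocator
entry point that does not exist yet.
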
Additionally, once __builtin_set_counted_by() (or equivalent) is added
by GCC and Clang, it will be possible to automatically set the counted
member of a struct with a counted_by FAM, further eliminating
open-coded redundant initializations:

	info->size = struct_size(ptr, flex_member, count);
	ptr = kmalloc(info->size, gfp);
	ptr->flex_count = count;

becomes (i.e. unchanged from earlier example):

	info->size = kmalloc_obj(ptr, flex_member, count, gfp);
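
As a sketch, the struct in question would be annotated along these
lines (hypothetical definition; the __counted_by() attribute itself
already exists in the kernel):

	struct elem { u32 id; };
	struct example_obj {
		u32 flex_count;
		struct elem flex_member[] __counted_by(flex_count);
	};
	struct example_obj *ptr;

letting the single kmalloc_obj() call also take over the
"ptr->flex_count = count;" assignment once compilers can expose the
counted-by relationship.
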
Replacing all existing simple code patterns via Coccinelle[1] shows what
could be replaced immediately (saving roughly 1,500 lines):

 7040 files changed, 14128 insertions(+), 15557 deletions(-)

Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/kmalloc_obj-assign-size.cocci [1]
Signed-off-by: Kees Cook <kees@...nel.org>
---
Cc: Vlastimil Babka <vbabka@...e.cz>
Cc: Christoph Lameter <cl@...ux.com>
Cc: Pekka Enberg <penberg@...nel.org>
Cc: David Rientjes <rientjes@...gle.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@...il.com>
Cc: Gustavo A. R. Silva <gustavoars@...nel.org>
Cc: Bill Wendling <morbo@...gle.com>
Cc: Justin Stitt <justinstitt@...gle.com>
Cc: Jann Horn <jannh@...gle.com>
Cc: Przemek Kitszel <przemyslaw.kitszel@...el.com>
Cc: Marco Elver <elver@...gle.com>
Cc: linux-mm@...ck.org
---
include/linux/slab.h | 38 ++++++++++++++++++++++++++++++++++++++
1 file changed, 38 insertions(+)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 7247e217e21b..3817554f2d51 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -665,6 +665,44 @@ static __always_inline __alloc_size(1) void *kmalloc_noprof(size_t size, gfp_t f
}
#define kmalloc(...) alloc_hooks(kmalloc_noprof(__VA_ARGS__))
+#define __alloc_obj3(ALLOC, P, COUNT, FLAGS) \
+({ \
+ size_t __obj_size = size_mul(sizeof(*P), COUNT); \
+ void *__obj_ptr; \
+ (P) = __obj_ptr = ALLOC(__obj_size, FLAGS); \
+ if (!__obj_ptr) \
+ __obj_size = 0; \
+ __obj_size; \
+})
+
+#define __alloc_obj2(ALLOC, P, FLAGS) __alloc_obj3(ALLOC, P, 1, FLAGS)
+
+#define __alloc_obj4(ALLOC, P, FAM, COUNT, FLAGS) \
+({ \
+ size_t __obj_size = struct_size(P, FAM, COUNT); \
+ void *__obj_ptr; \
+ (P) = __obj_ptr = ALLOC(__obj_size, FLAGS); \
+ if (!__obj_ptr) \
+ __obj_size = 0; \
+ __obj_size; \
+})
+
+#define kmalloc_obj(...) \
+ CONCATENATE(__alloc_obj, \
+ COUNT_ARGS(__VA_ARGS__))(kmalloc, __VA_ARGS__)
+
+#define kzalloc_obj(...) \
+ CONCATENATE(__alloc_obj, \
+ COUNT_ARGS(__VA_ARGS__))(kzalloc, __VA_ARGS__)
+
+#define kvmalloc_obj(...) \
+ CONCATENATE(__alloc_obj, \
+ COUNT_ARGS(__VA_ARGS__))(kvmalloc, __VA_ARGS__)
+
+#define kvzalloc_obj(...) \
+ CONCATENATE(__alloc_obj, \
+ COUNT_ARGS(__VA_ARGS__))(kvzalloc, __VA_ARGS__)
+
static __always_inline __alloc_size(1) void *kmalloc_node_noprof(size_t size, gfp_t flags, int node)
{
if (__builtin_constant_p(size) && size) {
--
2.34.1