Message-ID: <20240621113706.315500-14-iii@linux.ibm.com>
Date: Fri, 21 Jun 2024 13:34:57 +0200
From: Ilya Leoshkevich <iii@...ux.ibm.com>
To: Alexander Gordeev <agordeev@...ux.ibm.com>,
Alexander Potapenko <glider@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>, David Rientjes <rientjes@...gle.com>,
Heiko Carstens <hca@...ux.ibm.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>, Marco Elver <elver@...gle.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Pekka Enberg <penberg@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Vasily Gorbik <gor@...ux.ibm.com>, Vlastimil Babka <vbabka@...e.cz>
Cc: Christian Borntraeger <borntraeger@...ux.ibm.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, kasan-dev@...glegroups.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-s390@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
Mark Rutland <mark.rutland@....com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Sven Schnelle <svens@...ux.ibm.com>,
Ilya Leoshkevich <iii@...ux.ibm.com>
Subject: [PATCH v7 13/38] kmsan: Support SLAB_POISON

Avoid false KMSAN negatives with SLUB_DEBUG by allowing
kmsan_slab_free() to poison the freed memory, and by preventing
init_object() from unpoisoning new allocations: the latter now uses
memset_no_sanitize_memory(), which boils down to an uninstrumented
__memset().
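
For reference, the memset_no_sanitize_memory() helper used below is
introduced earlier in this series. Roughly, it is a thin wrapper around
the uninstrumented __memset(), along the lines of:

	/*
	 * Sketch only; see the earlier patch in this series for the
	 * exact definition. __memset() bypasses the KMSAN memset()
	 * interceptor, so the shadow of the target bytes is left
	 * untouched.
	 */
	static inline void *memset_no_sanitize_memory(void *s, int c, size_t n)
	{
		return __memset(s, c, n);
	}
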
There are two alternatives to this approach. First, init_object() could
be marked with __no_sanitize_memory. This annotation should be used
with great care, because it drops all instrumentation from the
function, so any shadow writes would be lost. Even though this is not a
concern with the current init_object() implementation, that may change
in the future.
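
For illustration, that rejected alternative would look roughly as
follows; note that the annotation affects every store in the function,
not just the memset() calls:

	/* Hypothetical sketch, not what this patch does. */
	static __no_sanitize_memory void init_object(struct kmem_cache *s,
						     void *object, u8 val)
	{
		u8 *p = kasan_reset_tag(object);

		/*
		 * No store in this function updates the KMSAN metadata
		 * anymore, including any stores added later that are
		 * supposed to.
		 */
		memset(p, POISON_FREE, s->object_size - 1);
	}
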
Second, kmsan_poison_memory() calls could be added after the memset()
calls. The downside is that init_object() is also called from
free_debug_processing(), in which case such poisoning would erase the
distinction between merely uninitialized memory and a use-after-free.
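
For completeness, that second alternative would amount to something
like this in init_object() (the GFP flags below are chosen purely for
illustration):

	/* Hypothetical sketch, not what this patch does. */
	memset(p, POISON_FREE, poison_size - 1);
	p[poison_size - 1] = POISON_END;
	kmsan_poison_memory(p, poison_size, GFP_KERNEL);

Since free_debug_processing() ends up here as well, the KMSAN origin
for freed objects and for fresh, never-initialized ones would be
created at the same call site, so reports could no longer tell the two
apart.
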
Reviewed-by: Alexander Potapenko <glider@...gle.com>
Signed-off-by: Ilya Leoshkevich <iii@...ux.ibm.com>
---
 mm/kmsan/hooks.c |  2 +-
 mm/slub.c        | 15 +++++++++++----
 2 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
index 267d0afa2e8b..26d86dfdc819 100644
--- a/mm/kmsan/hooks.c
+++ b/mm/kmsan/hooks.c
@@ -74,7 +74,7 @@ void kmsan_slab_free(struct kmem_cache *s, void *object)
 		return;
 
 	/* RCU slabs could be legally used after free within the RCU period */
-	if (unlikely(s->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)))
+	if (unlikely(s->flags & SLAB_TYPESAFE_BY_RCU))
 		return;
 	/*
 	 * If there's a constructor, freed memory must remain in the same state
diff --git a/mm/slub.c b/mm/slub.c
index 1373ac365a46..1134091abac5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1139,7 +1139,13 @@ static void init_object(struct kmem_cache *s, void *object, u8 val)
 	unsigned int poison_size = s->object_size;
 
 	if (s->flags & SLAB_RED_ZONE) {
-		memset(p - s->red_left_pad, val, s->red_left_pad);
+		/*
+		 * Here and below, avoid overwriting the KMSAN shadow. Keeping
+		 * the shadow makes it possible to distinguish uninit-value
+		 * from use-after-free.
+		 */
+		memset_no_sanitize_memory(p - s->red_left_pad, val,
+					  s->red_left_pad);
 
 		if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
 			/*
@@ -1152,12 +1158,13 @@ static void init_object(struct kmem_cache *s, void *object, u8 val)
 	}
 
 	if (s->flags & __OBJECT_POISON) {
-		memset(p, POISON_FREE, poison_size - 1);
-		p[poison_size - 1] = POISON_END;
+		memset_no_sanitize_memory(p, POISON_FREE, poison_size - 1);
+		memset_no_sanitize_memory(p + poison_size - 1, POISON_END, 1);
 	}
 
 	if (s->flags & SLAB_RED_ZONE)
-		memset(p + poison_size, val, s->inuse - poison_size);
+		memset_no_sanitize_memory(p + poison_size, val,
+					  s->inuse - poison_size);
 }
 
 static void restore_bytes(struct kmem_cache *s, char *message, u8 data,
--
2.45.1