Message-ID: <ZV7whSufeIqslzzN@feng-clx>
Date: Thu, 23 Nov 2023 14:26:13 +0800
From: Feng Tang <feng.tang@...el.com>
To: "andrey.konovalov@...ux.dev" <andrey.konovalov@...ux.dev>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Andrey Konovalov <andreyknvl@...il.com>,
Marco Elver <elver@...gle.com>,
Alexander Potapenko <glider@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>,
"kasan-dev@...glegroups.com" <kasan-dev@...glegroups.com>,
Evgenii Stepanov <eugenis@...gle.com>,
Oscar Salvador <osalvador@...e.de>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andrey Konovalov <andreyknvl@...gle.com>
Subject: Re: [PATCH mm] slub, kasan: improve interaction of KASAN and
slub_debug poisoning

Hi Andrey,

On Thu, Nov 23, 2023 at 07:12:02AM +0800, andrey.konovalov@...ux.dev wrote:
> From: Andrey Konovalov <andreyknvl@...gle.com>
>
> When both KASAN and slub_debug are enabled, when a free object is being
> prepared in setup_object, slub_debug poisons the object data before KASAN
> initializes its per-object metadata.
>
> Right now, in setup_object, KASAN only initializes the alloc metadata,
> which is always stored outside of the object. slub_debug is aware of
> this and it skips poisoning and checking that memory area.
>
> However, with the following patch in this series, KASAN also starts
> initializing its free metadata in setup_object. As this metadata might be
> stored within the object, this initialization might overwrite the
> slub_debug poisoning. This leads to slub_debug reports.
>
> Thus, skip checking slub_debug poisoning of the object data area that
> overlaps with the in-object KASAN free metadata.
>
> Also make slub_debug poisoning of tail kmalloc redzones more precise when
> KASAN is enabled: slub_debug can still poison and check the tail kmalloc
> allocation area that comes after the KASAN free metadata.
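
(For readers following along, here is a minimal userspace sketch of the
ordering being described. POISON_FREE/POISON_END match
include/linux/poison.h, while OBJECT_SIZE, KASAN_META and poison_ok() are
made-up placeholders standing in for s->object_size,
kasan_metadata_size(s, true) and check_bytes_and_report() respectively.)

#include <stdio.h>
#include <string.h>

#define POISON_FREE	0x6b	/* from include/linux/poison.h */
#define POISON_END	0xa5

#define OBJECT_SIZE	64	/* placeholder for s->object_size */
#define KASAN_META	16	/* placeholder for kasan_metadata_size(s, true) */

/* stands in for what check_bytes_and_report() verifies for "Poison" */
static int poison_ok(unsigned char *p, unsigned int start, unsigned int len)
{
	unsigned int i;

	for (i = start; i < start + len; i++)
		if (p[i] != POISON_FREE)
			return 0;
	return 1;
}

int main(void)
{
	unsigned char obj[OBJECT_SIZE];

	/* slub_debug part of object setup: poison the object data */
	memset(obj, POISON_FREE, OBJECT_SIZE - 1);
	obj[OBJECT_SIZE - 1] = POISON_END;

	/* KASAN part (with the next patch): free meta written at offset 0 */
	memset(obj, 0, KASAN_META);

	/* old check: all of [0, object_size - 1) must be POISON_FREE */
	printf("old check: %s\n",
	       poison_ok(obj, 0, OBJECT_SIZE - 1) ?
	       "ok" : "slub_debug report");

	/* new check: skip the first kasan_meta_size bytes */
	printf("new check: %s\n",
	       poison_ok(obj, KASAN_META, OBJECT_SIZE - KASAN_META - 1) ?
	       "ok" : "slub_debug report");

	return 0;
}
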
>
> Signed-off-by: Andrey Konovalov <andreyknvl@...gle.com>
>
> ---
>
> Andrew, please put this patch right before "kasan: use stack_depot_put
> for Generic mode".
> ---
> mm/slub.c | 41 ++++++++++++++++++++++++++---------------
> 1 file changed, 26 insertions(+), 15 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 63d281dfacdb..782bd8a6bd34 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -870,20 +870,20 @@ static inline void set_orig_size(struct kmem_cache *s,
> void *object, unsigned int orig_size)
> {
> void *p = kasan_reset_tag(object);
> + unsigned int kasan_meta_size;
>
> if (!slub_debug_orig_size(s))
> return;
>
> -#ifdef CONFIG_KASAN_GENERIC
> /*
> - * KASAN could save its free meta data in object's data area at
> - * offset 0, if the size is larger than 'orig_size', it will
> - * overlap the data redzone in [orig_size+1, object_size], and
> - * the check should be skipped.
> + * KASAN can save its free meta data inside of the object at offset 0.
> + * If this meta data size is larger than 'orig_size', it will overlap
> + * the data redzone in [orig_size+1, object_size]. Thus, we adjust
> + * 'orig_size' to be at least as big as KASAN's meta data.
> */
> - if (kasan_metadata_size(s, true) > orig_size)
> - orig_size = s->object_size;
> -#endif
> + kasan_meta_size = kasan_metadata_size(s, true);
> + if (kasan_meta_size > orig_size)
> + orig_size = kasan_meta_size;
'orig_size' saves the original request size of a kmalloc object, and
its main purpose is to detect memory wastage of kmalloc objects, see
commit 6edf2576a6cc ("mm/slub: enable debugging memory wasting of
kmalloc").

Setting "orig_size = s->object_size" was meant to skip the wastage
check and the redzone sanity check for this 'wasted space'. So it's
better not to set 'orig_size' to 'kasan_meta_size'.

And from the code below, IIUC, 'orig_size' is not actually involved in
fixing the boot problem reported by Hyeonggon?
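
To make the concern a bit more concrete, a quick sketch of what the
different choices of 'orig_size' mean for the bookkeeping (all sizes
below are made-up placeholders, and "data redzone" is just shorthand for
the roughly [orig_size, object_size) area that the redzone sanity check
covers):

#include <stdio.h>

static void show(const char *what, unsigned int object_size,
		 unsigned int orig_size)
{
	printf("%-30s: wastage %3u, data redzone [%u, %u)\n",
	       what, object_size - orig_size, orig_size, object_size);
}

int main(void)
{
	/* all made-up placeholder sizes */
	unsigned int object_size = 128;	/* s->object_size of a kmalloc slab */
	unsigned int request     = 24;	/* what the caller asked kmalloc() for */
	unsigned int kasan_meta  = 48;	/* in-object KASAN free meta size */

	show("real request", object_size, request);
	/* old behaviour when the free meta overlaps the redzone */
	show("orig_size = s->object_size", object_size, object_size);
	/* this patch, when kasan_meta_size > orig_size */
	show("orig_size = kasan_meta_size", object_size,
	     kasan_meta > request ? kasan_meta : request);

	return 0;
}

That is, with the patch the recorded 'orig_size' no longer reflects the
caller's request whenever the free meta is bigger, so the wastage
numbers get skewed.
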
Thanks,
Feng
>
> p += get_info_end(s);
> p += sizeof(struct track) * 2;
> @@ -1192,7 +1192,7 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
> {
> u8 *p = object;
> u8 *endobject = object + s->object_size;
> - unsigned int orig_size;
> + unsigned int orig_size, kasan_meta_size;
>
> if (s->flags & SLAB_RED_ZONE) {
> if (!check_bytes_and_report(s, slab, object, "Left Redzone",
> @@ -1222,12 +1222,23 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
> }
>
> if (s->flags & SLAB_POISON) {
> - if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON) &&
> - (!check_bytes_and_report(s, slab, p, "Poison", p,
> - POISON_FREE, s->object_size - 1) ||
> - !check_bytes_and_report(s, slab, p, "End Poison",
> - p + s->object_size - 1, POISON_END, 1)))
> - return 0;
> + if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON)) {
> + /*
> + * KASAN can save its free meta data inside of the
> + * object at offset 0. Thus, skip checking the part of
> + * the redzone that overlaps with the meta data.
> + */
> + kasan_meta_size = kasan_metadata_size(s, true);
> + if (kasan_meta_size < s->object_size - 1 &&
> + !check_bytes_and_report(s, slab, p, "Poison",
> + p + kasan_meta_size, POISON_FREE,
> + s->object_size - kasan_meta_size - 1))
> + return 0;
> + if (kasan_meta_size < s->object_size &&
> + !check_bytes_and_report(s, slab, p, "End Poison",
> + p + s->object_size - 1, POISON_END, 1))
> + return 0;
> + }
> /*
> * check_pad_bytes cleans up on its own.
> */
> --
> 2.25.1
>