Message-ID:
<PAXPR04MB84594764DBDBEFB124CA9994888C2@PAXPR04MB8459.eurprd04.prod.outlook.com>
Date: Mon, 19 Aug 2024 02:43:16 +0000
From: Peng Fan <peng.fan@....com>
To: Nicolas Bouchinet <nicolas.bouchinet@...p-os.org>, "linux-mm@...ck.org"
<linux-mm@...ck.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>
CC: Chengming Zhou <chengming.zhou@...ux.dev>, Christoph Lameter
<cl@...ux.com>, Pekka Enberg <penberg@...nel.org>, David Rientjes
<rientjes@...gle.com>, Joonsoo Kim <iamjoonsoo.kim@....com>, Andrew Morton
<akpm@...ux-foundation.org>, Vlastimil Babka <vbabka@...e.cz>, Roman Gushchin
<roman.gushchin@...ux.dev>, Hyeonggon Yoo <42.hyeyoo@...il.com>
Subject: RE: [PATCH v3] slub: Fixes freepointer encoding for single free
Hi Nicolas,
> Subject: [PATCH v3] slub: Fixes freepointer encoding for single free
>
With slub_debug=FUZ init_on_free=1 loglevel=7, I see the error on a
6.6 kernel. Should this patch be backported to 6.6?
I also needed the following hack, on top of your patch, when applying to 6.6:
diff --git a/mm/slub.c b/mm/slub.c
index 96406f9813e8..ff8cdc737722 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1209,7 +1209,8 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
 			if (s->object_size > orig_size &&
 				!check_bytes_and_report(s, slab, object,
 					"kmalloc Redzone", p + orig_size,
-					val, s->object_size - orig_size)) {
+					slab_want_init_on_free(s) ? 0 : val,
+					s->object_size - orig_size)) {
 				return 0;
 			}
 		}
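
The idea of the hack: with init_on_free, slab_free_hook() has already
zeroed s->object_size bytes before the debug free path runs
check_object(), so the kmalloc redzone area between the original
request size and s->object_size holds zeros rather than the redzone
pattern, and the expected byte passed to check_bytes_and_report() has
to be 0 in that case. Below is a minimal standalone sketch of that
mismatch (illustrative sizes and a simplified checker, not kernel
code); a sketch of the layout helpers your patch relies on follows
after the quoted message.

/*
 * Standalone illustration (not kernel code) of the check that trips:
 * with init_on_free, the bytes between the original kmalloc request
 * size and s->object_size are zeroed at free time, so a checker that
 * expects the SLUB_RED_ACTIVE pattern (0xcc) there reports corruption
 * unless it is told to expect 0 instead.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <stdbool.h>

#define SLUB_RED_ACTIVE	0xcc
#define OBJECT_SIZE	32	/* stands in for s->object_size */
#define ORIG_SIZE	20	/* stands in for the original kmalloc request */

/* Simplified stand-in for check_bytes_and_report(). */
static bool check_bytes(const uint8_t *start, uint8_t expected, size_t len)
{
	for (size_t i = 0; i < len; i++) {
		if (start[i] != expected) {
			printf("mismatch at +%zu: 0x%02x != 0x%02x\n",
			       i, start[i], expected);
			return false;
		}
	}
	return true;
}

int main(void)
{
	uint8_t object[OBJECT_SIZE];

	/* Alloc path: redzone the slack beyond the requested size. */
	memset(object + ORIG_SIZE, SLUB_RED_ACTIVE, OBJECT_SIZE - ORIG_SIZE);

	/* init_on_free: the whole object_size region is zeroed on free. */
	memset(object, 0, OBJECT_SIZE);

	/* Debug free path: expecting 0xcc here now fails... */
	printf("expect 0xcc: %s\n",
	       check_bytes(object + ORIG_SIZE, SLUB_RED_ACTIVE,
			   OBJECT_SIZE - ORIG_SIZE) ? "ok" : "kmalloc Redzone error");

	/* ...while expecting 0, as in the hack above, passes. */
	printf("expect 0x00: %s\n",
	       check_bytes(object + ORIG_SIZE, 0,
			   OBJECT_SIZE - ORIG_SIZE) ? "ok" : "kmalloc Redzone error");

	return 0;
}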
Thanks,
Peng.
> From: Nicolas Bouchinet <nicolas.bouchinet@....gouv.fr>
>
> Commit 284f17ac13fe ("mm/slub: handle bulk and single object freeing
> separately") splits single and bulk object freeing into two functions,
> slab_free() and slab_free_bulk(), which leads slab_free() to call
> slab_free_hook() directly instead of slab_free_freelist_hook().
>
> If `init_on_free` is set, slab_free_hook() zeroes the object.
> Afterward, if `slub_debug=F` and `CONFIG_SLAB_FREELIST_HARDENED`
> are set, the do_slab_free() slowpath executes freelist consistency
> checks and tries to decode a zeroed freepointer, which leads to a
> "Freepointer corrupt" detection in check_object().
>
> During bulk free, slab_free_freelist_hook() isn't affected, as it
> always sets its objects' freepointers using set_freepointer() to
> maintain its reconstructed freelist after `init_on_free`.
>
> For single free, the object's freepointer thus needs to be avoided by
> the wipe when it is stored outside the object and `init_on_free` is
> set. If the freepointer is wiped along with the rest of the metadata,
> check_object() may later detect an invalid pointer value.
>
> To reproduce, set `slub_debug=FU init_on_free=1 loglevel=7` on the
> command line of a kernel built with
> `CONFIG_SLAB_FREELIST_HARDENED=y`.
>
> dmesg sample log:
> [   10.708715] =============================================================================
> [   10.710323] BUG kmalloc-rnd-05-32 (Tainted: G B T ): Freepointer corrupt
> [   10.712695] -----------------------------------------------------------------------------
> [   10.712695]
> [   10.712695] Slab 0xffffd8bdc400d580 objects=32 used=4 fp=0xffff9d9a80356f80 flags=0x200000000000a00(workingset|slab|node=0|zone=2)
> [   10.716698] Object 0xffff9d9a80356600 @offset=1536 fp=0x7ee4f480ce0ecd7c
> [   10.716698]
> [   10.716698] Bytes b4 ffff9d9a803565f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> [   10.720703] Object ffff9d9a80356600: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> [   10.720703] Object ffff9d9a80356610: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> [   10.724696] Padding ffff9d9a8035666c: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> [   10.724696] Padding ffff9d9a8035667c: 00 00 00 00                                      ....
> [   10.724696] FIX kmalloc-rnd-05-32: Object at 0xffff9d9a80356600 not freed
>
> Co-developed-by: Chengming Zhou <chengming.zhou@...ux.dev>
> Signed-off-by: Nicolas Bouchinet <nicolas.bouchinet@....gouv.fr>
> ---
> Changes since v2:
> https://lore.kernel.org/all/ZjCxZfD1d36zfq-R@archlinux/
>
> * Reword commit message in order to clarify the patch approach as
> suggested by Vlastimil Babka
>
> Changes since v1:
> https://lore.kernel.org/all/Zij_fGjRS_rK-65r@archlinux/
>
> * Skip over the out-of-object freepointer if init_on_free is set,
> instead of initializing it with set_freepointer(), as suggested by
> Vlastimil Babka.
>
> * Adapt maybe_wipe_obj_freeptr() to avoid wiping an out-of-object
> freepointer on alloc, as suggested by Chengming Zhou.
>
> * Reword commit message.
> ---
> mm/slub.c | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 3aa12b9b323d..173c340ec1d3 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2102,15 +2102,20 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
>  	 *
>  	 * The initialization memset's clear the object and the metadata,
>  	 * but don't touch the SLAB redzone.
> +	 *
> +	 * The object's freepointer is also avoided if stored outside the
> +	 * object.
>  	 */
>  	if (unlikely(init)) {
>  		int rsize;
> +		unsigned int inuse;
>
> +		inuse = get_info_end(s);
>  		if (!kasan_has_integrated_init())
>  			memset(kasan_reset_tag(x), 0, s->object_size);
>  		rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad : 0;
> -		memset((char *)kasan_reset_tag(x) + s->inuse, 0,
> -		       s->size - s->inuse - rsize);
> +		memset((char *)kasan_reset_tag(x) + inuse, 0,
> +		       s->size - inuse - rsize);
>  	}
>  	/* KASAN might put x into memory quarantine, delaying its reuse. */
>  	return !kasan_slab_free(s, x, init);
> @@ -3789,7 +3794,7 @@ static void *__slab_alloc_node(struct kmem_cache *s,
>  static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
>  						   void *obj)
>  {
> -	if (unlikely(slab_want_init_on_free(s)) && obj)
> +	if (unlikely(slab_want_init_on_free(s)) && obj && !freeptr_outside_object(s))
>  		memset((void *)((char *)kasan_reset_tag(obj) + s->offset),
>  			0, sizeof(void *));
>  }
> --
> 2.44.0
>
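
For reference, the fix above starts the init_on_free wipe at
get_info_end() so that a freepointer stored outside the object is not
zeroed, and skips the alloc-side wipe in maybe_wipe_obj_freeptr() for
the same layout. The following is a minimal standalone sketch of that
layout decision (simplified struct and illustrative offsets, not the
kernel's kmem_cache; see mm/slub.c for the authoritative definitions
of get_info_end() and freeptr_outside_object()):

/*
 * Simplified model of the layout decision the patch relies on: when the
 * freepointer lives outside the object (offset >= inuse), the
 * init_on_free metadata wipe must start after it.
 */
#include <stdio.h>
#include <stdbool.h>

struct cache_layout {
	unsigned int offset;	/* freepointer offset, cf. s->offset */
	unsigned int inuse;	/* end of object + debug metadata, cf. s->inuse */
};

/* Mirrors the logic of freeptr_outside_object(). */
static bool freeptr_outside_object(const struct cache_layout *s)
{
	return s->offset >= s->inuse;
}

/*
 * Mirrors the logic of get_info_end(): end of the inuse area plus the
 * out-of-object freepointer, if any.
 */
static unsigned int get_info_end(const struct cache_layout *s)
{
	return s->inuse + (freeptr_outside_object(s) ?
			   (unsigned int)sizeof(void *) : 0);
}

int main(void)
{
	/* Debug-style cache: freepointer stored after the object. */
	struct cache_layout dbg = { .offset = 32, .inuse = 32 };
	/* Plain cache: freepointer stored inside the object. */
	struct cache_layout plain = { .offset = 16, .inuse = 32 };

	printf("debug cache: wipe metadata from %u (freeptr at %u preserved)\n",
	       get_info_end(&dbg), dbg.offset);
	printf("plain cache: wipe metadata from %u (in-object freeptr wiped on alloc)\n",
	       get_info_end(&plain));
	return 0;
}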