Message-ID: <CA+fCnZfRTyNbRcU9jNB2O2EeXuoT0T2dY9atFyXy5P0jT1-QWw@mail.gmail.com>
Date: Fri, 5 Dec 2025 02:09:02 +0100
From: Andrey Konovalov <andreyknvl@...il.com>
To: Maciej Wieczor-Retman <m.wieczorretman@...me>
Cc: Andrey Ryabinin <ryabinin.a.a@...il.com>, Alexander Potapenko <glider@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>, Vincenzo Frascino <vincenzo.frascino@....com>,
Andrew Morton <akpm@...ux-foundation.org>, Uladzislau Rezki <urezki@...il.com>,
Marco Elver <elver@...gle.com>, jiayuan.chen@...ux.dev, stable@...r.kernel.org,
Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>, kasan-dev@...glegroups.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v3 2/3] kasan: Refactor pcpu kasan vmalloc unpoison
On Thu, Dec 4, 2025 at 8:00 PM Maciej Wieczor-Retman
<m.wieczorretman@...me> wrote:
>
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>
>
> A KASAN tag mismatch, possibly causing a kernel panic, can be observed
> on systems with tag-based KASAN enabled and with multiple NUMA nodes.
> It was reported on arm64 and reproduced on x86. It can be explained in
> the following points:
>
> 1. There can be more than one virtual memory chunk.
> 2. Each chunk's base address carries a tag.
> 3. The returned base address points at the first chunk and thus
> carries the tag of the first chunk.
> 4. The subsequent chunks will be accessed with the tag from the
> first chunk.
> 5. Thus, the subsequent chunks need to have their tag set to
> match that of the first chunk.
>
> Refactor code by reusing __kasan_unpoison_vmalloc in a new helper in
> preparation for the actual fix.
>
> Changelog v1 (after splitting of from the KASAN series):
> - Rewrite first paragraph of the patch message to point at the user
> impact of the issue.
> - Move helper to common.c so it can be compiled in all KASAN modes.
Nit: this part can go after the --- separator, so it doesn't end up in
the commit message.
>
> Fixes: 1d96320f8d53 ("kasan, vmalloc: add vmalloc tagging for SW_TAGS")
> Cc: <stable@...r.kernel.org> # 6.1+
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>
> ---
> Changelog v3:
> - Redo the patch after applying Andrey's comments to align the code more
> with what's already in include/linux/kasan.h
>
> Changelog v2:
> - Redo the whole patch so it's an actual refactor.
>
> include/linux/kasan.h | 15 +++++++++++++++
> mm/kasan/common.c | 17 +++++++++++++++++
> mm/vmalloc.c | 4 +---
> 3 files changed, 33 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 6d7972bb390c..cde493cb7702 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -615,6 +615,16 @@ static __always_inline void kasan_poison_vmalloc(const void *start,
> __kasan_poison_vmalloc(start, size);
> }
>
> +void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
> + kasan_vmalloc_flags_t flags);
> +static __always_inline void
> +kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
> + kasan_vmalloc_flags_t flags)
> +{
> + if (kasan_enabled())
> + __kasan_unpoison_vmap_areas(vms, nr_vms, flags);
> +}
> +
> #else /* CONFIG_KASAN_VMALLOC */
>
> static inline void kasan_populate_early_vm_area_shadow(void *start,
> @@ -639,6 +649,11 @@ static inline void *kasan_unpoison_vmalloc(const void *start,
> static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
> { }
>
> +static __always_inline void
> +kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
> + kasan_vmalloc_flags_t flags)
> +{ }
> +
> #endif /* CONFIG_KASAN_VMALLOC */
>
> #if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index d4c14359feaf..1ed6289d471a 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -28,6 +28,7 @@
> #include <linux/string.h>
> #include <linux/types.h>
> #include <linux/bug.h>
> +#include <linux/vmalloc.h>
>
> #include "kasan.h"
> #include "../slab.h"
> @@ -582,3 +583,19 @@ bool __kasan_check_byte(const void *address, unsigned long ip)
> }
> return true;
> }
> +
> +#ifdef CONFIG_KASAN_VMALLOC
> +void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
> + kasan_vmalloc_flags_t flags)
> +{
> + unsigned long size;
> + void *addr;
> + int area;
> +
> + for (area = 0; area < nr_vms; area++) {
> + size = vms[area]->size;
> + addr = vms[area]->addr;
> + vms[area]->addr = __kasan_unpoison_vmalloc(addr, size, flags);
> + }
> +}
> +#endif
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 22a73a087135..33e705ccafba 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -4872,9 +4872,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
> * With hardware tag-based KASAN, marking is skipped for
> * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
> */
> - for (area = 0; area < nr_vms; area++)
> - vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
> - vms[area]->size, KASAN_VMALLOC_PROT_NORMAL);
> + kasan_unpoison_vmap_areas(vms, nr_vms, KASAN_VMALLOC_PROT_NORMAL);
>
> kfree(vas);
> return vms;
> --
> 2.52.0
>
Reviewed-by: Andrey Konovalov <andreyknvl@...il.com>