Message-ID: <CA+fCnZd+ANJ2w4R7ww7GTM=92UGGFKpaL1h56iRMN2Lr14QN5w@mail.gmail.com>
Date: Tue, 13 Jan 2026 02:21:22 +0100
From: Andrey Konovalov <andreyknvl@...il.com>
To: Maciej Wieczor-Retman <m.wieczorretman@...me>
Cc: Andrey Ryabinin <ryabinin.a.a@...il.com>, Alexander Potapenko <glider@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>, Vincenzo Frascino <vincenzo.frascino@....com>,
Thomas Gleixner <tglx@...nel.org>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, Andrew Morton <akpm@...ux-foundation.org>,
Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>, kasan-dev@...glegroups.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v8 13/14] x86/kasan: Logical bit shift for kasan_mem_to_shadow
On Mon, Jan 12, 2026 at 6:28 PM Maciej Wieczor-Retman
<m.wieczorretman@...me> wrote:
>
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>
>
> The tag-based KASAN adopts an arithmetic bit shift to convert a memory
> address to a shadow memory address. While it makes a lot of sense on
> arm64, it doesn't work well for all cases on x86 - either the
> non-canonical hook becomes quite complex for different paging levels, or
> the inline mode would need a lot more adjustments. Thus the best working
> scheme is the logical bit shift and non-canonical shadow offset that x86
> uses for generic KASAN, of course adjusted for the increased granularity
> from 8 to 16 bytes.
>
> Add an arch specific implementation of kasan_mem_to_shadow() that uses
> the logical bit shift.
>
> The non-canonical hook tries to determine whether an address came from
> kasan_mem_to_shadow(). First it checks whether this address falls within
> the set of values that kasan_mem_to_shadow() can possibly output.
>
> Tie both generic and tag-based x86 KASAN modes to the address range
> check associated with generic KASAN.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>
> ---
> Changelog v7:
> - Redo the patch message and add a comment to __kasan_mem_to_shadow() to
> provide a better explanation of why x86 doesn't work well with the
> arithmetic bit shift approach (Marco).
>
> Changelog v4:
> - Add this patch to the series.
>
> arch/x86/include/asm/kasan.h | 15 +++++++++++++++
> mm/kasan/report.c | 5 +++--
> 2 files changed, 18 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
> index eab12527ed7f..9b7951a79753 100644
> --- a/arch/x86/include/asm/kasan.h
> +++ b/arch/x86/include/asm/kasan.h
> @@ -31,6 +31,21 @@
> #include <linux/bits.h>
>
> #ifdef CONFIG_KASAN_SW_TAGS
> +/*
> + * Using the non-arch specific implementation of __kasan_mem_to_shadow() with an
> + * arithmetic bit shift can cause high code complexity in KASAN's non-canonical
> + * hook for x86 or might not work for some paging level and KASAN mode
> + * combinations. The inline mode compiler support could also suffer from higher
> + * complexity for no specific benefit. Therefore the generic mode's logical
> + * shift implementation is used.
> + */
> +static inline void *__kasan_mem_to_shadow(const void *addr)
> +{
> + return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
> + + KASAN_SHADOW_OFFSET;
> +}
> +
> +#define kasan_mem_to_shadow(addr) __kasan_mem_to_shadow(addr)
> #define __tag_shifted(tag) FIELD_PREP(GENMASK_ULL(60, 57), tag)
> #define __tag_reset(addr) (sign_extend64((u64)(addr), 56))
> #define __tag_get(addr) ((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index b5beb1b10bd2..db6a9a3d01b2 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -642,13 +642,14 @@ void kasan_non_canonical_hook(unsigned long addr)
> const char *bug_type;
>
> /*
> - * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
> + * For Generic KASAN and Software Tag-Based mode on the x86
> + * architecture, kasan_mem_to_shadow() uses the logical right shift
> * and never overflows with the chosen KASAN_SHADOW_OFFSET values (on
> * both x86 and arm64). Thus, the possible shadow addresses (even for
> * bogus pointers) belong to a single contiguous region that is the
> * result of kasan_mem_to_shadow() applied to the whole address space.
> */
> - if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
> + if (IS_ENABLED(CONFIG_KASAN_GENERIC) || IS_ENABLED(CONFIG_X86_64)) {
Not a functionality concern, just a code organization one:
Here, we embed the CONFIG_X86_64 special case in the core KASAN code,
but the __kasan_mem_to_shadow definition to use the logical shift
exists in the x86-64 arch code, and it just copy-pastes one of the
cases from the core kasan_mem_to_shadow definition.
Should we just move the x86-64 special case to the core KASAN code too
then? I.e., change the kasan_mem_to_shadow definition in
include/linux/kasan.h to check for IS_ENABLED(CONFIG_X86_64); a rough
sketch of what that could look like follows below.
We could also add a comment there explaining how using the arithmetic
shift for SW_TAGS benefits some architectures (just arm64 for now, but
riscv in the future as well), and put your comment about why it's not
worth it for x86 there as well.
I don't have a strong preference, just an idea.
Any thoughts?
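For illustration only, a rough sketch of that idea; the exact shape of
the core helper is assumed here, so treat it as a sketch rather than the
actual change:

	/* include/linux/kasan.h -- hypothetical sketch, not part of this series */
	static inline void *kasan_mem_to_shadow(const void *addr)
	{
		void *scaled;

		if (IS_ENABLED(CONFIG_KASAN_GENERIC) || IS_ENABLED(CONFIG_X86_64))
			/* Logical shift, paired with a non-canonical shadow offset. */
			scaled = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT);
		else
			/* Arithmetic shift, as SW_TAGS uses on arm64. */
			scaled = (void *)((long)addr >> KASAN_SHADOW_SCALE_SHIFT);

		return KASAN_SHADOW_OFFSET + scaled;
	}

That would make the arch override in arch/x86/include/asm/kasan.h
unnecessary, at the cost of one more arch check in the core header.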
> if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0ULL)) ||
> addr > (unsigned long)kasan_mem_to_shadow((void *)(~0ULL)))
> return;
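Side note for readers following along: with the logical shift the two
bounds above expand to constants, since every possible pointer maps into
one contiguous shadow window. A sketch only, assuming the 16-byte granules
of this series (KASAN_SHADOW_SCALE_SHIFT == 4):

	unsigned long shadow_first = KASAN_SHADOW_OFFSET;               /* shadow of NULL */
	unsigned long shadow_last  = KASAN_SHADOW_OFFSET +
				     (~0UL >> KASAN_SHADOW_SCALE_SHIFT); /* shadow of ~0   */

	if (addr < shadow_first || addr > shadow_last)
		return;	/* addr cannot be a kasan_mem_to_shadow() result */

so the hook bails out early for addresses that cannot be shadow addresses
at all.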
There's also a comment lower in the function that needs to be updated
to mention Software Tag-Based mode on arm64 specifically.
> --
> 2.52.0
>
>