Message-ID: <CA+fCnZeVEDwojqUfT1CC10sLZiY8MVN-7S7R6FP_OHkU3TH+0g@mail.gmail.com>
Date: Tue, 13 Jan 2026 02:21:47 +0100
From: Andrey Konovalov <andreyknvl@...il.com>
To: Maciej Wieczor-Retman <m.wieczorretman@...me>
Cc: Thomas Gleixner <tglx@...nel.org>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, Jonathan Corbet <corbet@....net>, Andrey Ryabinin <ryabinin.a.a@...il.com>,
Alexander Potapenko <glider@...gle.com>, Dmitry Vyukov <dvyukov@...gle.com>,
Vincenzo Frascino <vincenzo.frascino@....com>, Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>, Andrew Morton <akpm@...ux-foundation.org>,
Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org, kasan-dev@...glegroups.com
Subject: Re: [PATCH v8 14/14] x86/kasan: Make software tag-based kasan available
On Mon, Jan 12, 2026 at 6:28 PM Maciej Wieczor-Retman
<m.wieczorretman@...me> wrote:
>
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>
>
> Make CONFIG_KASAN_SW_TAGS available on x86 machines that have
> ADDRESS_MASKING (LAM) enabled, since LAM works similarly to Top-Byte
> Ignore (TBI), which enables the software tag-based mode on the arm64
> platform.
>
> The sw_tags KASAN_SHADOW_OFFSET value was calculated by rearranging
> the formulas for KASAN_SHADOW_START and KASAN_SHADOW_END from
> arch/x86/include/asm/kasan.h - the only prerequisites being a
> KASAN_SHADOW_SCALE_SHIFT of 4 and a KASAN_SHADOW_END equal to the one
> used by the generic KASAN mode.
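
Just to spell out the derivation for the 5-level case (__VIRTUAL_MASK_SHIFT
== 56), assuming the usual KASAN_SHADOW_START/END definitions in
arch/x86/include/asm/kasan.h, the numbers seem to work out to:

    KASAN_SHADOW_END    = 0xfffffc0000000000          /* same as generic */
    KASAN_SHADOW_START  = END - (1UL << (56 - 4))     = 0xffeffc0000000000
    KASAN_SHADOW_OFFSET = START - ((-1UL << 56) >> 4) = 0xeffffc0000000000

which matches the Kconfig default and the mm.rst range below.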
>
> Set the scale macro based on the KASAN mode: in software tag-based
> mode 16 bytes of memory map to one shadow byte, while in generic mode
> 8 bytes do.
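
For reference, the 16-vs-8 ratio follows directly from the shift in the
common shadow mapping formula (a sketch of the generic helper, not the
x86-specific arithmetic-shift variant added earlier in the series):

    /*
     * Each shadow byte covers (1 << KASAN_SHADOW_SCALE_SHIFT) bytes of
     * memory: 16 bytes with a shift of 4 (sw-tags), 8 with a shift of 3
     * (generic).
     */
    static inline void *kasan_mem_to_shadow(const void *addr)
    {
            return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                    + KASAN_SHADOW_OFFSET;
    }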
>
> Disable CONFIG_KASAN_INLINE and CONFIG_KASAN_STACK when
> CONFIG_KASAN_SW_TAGS is enabled on x86 until the appropriate compiler
> support is available.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>
> ---
> Changelog v7:
> - Add a paragraph to the patch message explaining how the various
> addresses and the KASAN_SHADOW_OFFSET were calculated.
>
> Changelog v6:
> - Don't enable KASAN if LAM is not supported.
> - Move kasan_init_tags() to kasan_init_64.c to not clutter the setup.c
> file.
> - Move the #ifdef for the KASAN scale shift here.
> - Move the gdb code to patch "Use arithmetic shift for shadow
> computation".
> - Return "depends on KASAN" line to Kconfig.
> - Add the defer kasan config option so KASAN can be disabled on hardware
> that doesn't have LAM.
>
> Changelog v4:
> - Add x86 specific kasan_mem_to_shadow().
> - Revert x86 to the older unsigned KASAN_SHADOW_OFFSET. Do the same to
> KASAN_SHADOW_START/END.
> - Modify scripts/gdb/linux/kasan.py to keep x86 using unsigned offset.
> - Disable inline and stack support when software tags are enabled on
> x86.
>
> Changelog v3:
> - Remove runtime_const from previous patch and merge the rest here.
> - Move scale shift definition back to header file.
> - Add new kasan offset for software tag based mode.
> - Fix patch message typo 32 -> 16, and 16 -> 8.
> - Update lib/Kconfig.kasan with x86 now having software tag-based
> support.
>
> Changelog v2:
> - Remove KASAN dense code.
>
> Documentation/arch/x86/x86_64/mm.rst | 6 ++++--
> arch/x86/Kconfig | 4 ++++
> arch/x86/boot/compressed/misc.h | 1 +
> arch/x86/include/asm/kasan.h | 5 +++++
> arch/x86/mm/kasan_init_64.c | 6 ++++++
> lib/Kconfig.kasan | 3 ++-
> 6 files changed, 22 insertions(+), 3 deletions(-)
>
> diff --git a/Documentation/arch/x86/x86_64/mm.rst b/Documentation/arch/x86/x86_64/mm.rst
> index a6cf05d51bd8..ccbdbb4cda36 100644
> --- a/Documentation/arch/x86/x86_64/mm.rst
> +++ b/Documentation/arch/x86/x86_64/mm.rst
> @@ -60,7 +60,8 @@ Complete virtual memory map with 4-level page tables
> ffffe90000000000 | -23 TB | ffffe9ffffffffff | 1 TB | ... unused hole
> ffffea0000000000 | -22 TB | ffffeaffffffffff | 1 TB | virtual memory map (vmemmap_base)
> ffffeb0000000000 | -21 TB | ffffebffffffffff | 1 TB | ... unused hole
> - ffffec0000000000 | -20 TB | fffffbffffffffff | 16 TB | KASAN shadow memory
> + ffffec0000000000 | -20 TB | fffffbffffffffff | 16 TB | KASAN shadow memory (generic mode)
> + fffff40000000000 | -8 TB | fffffbffffffffff | 8 TB | KASAN shadow memory (software tag-based mode)
> __________________|____________|__________________|_________|____________________________________________________________
> |
> | Identical layout to the 56-bit one from here on:
> @@ -130,7 +131,8 @@ Complete virtual memory map with 5-level page tables
> ffd2000000000000 | -11.5 PB | ffd3ffffffffffff | 0.5 PB | ... unused hole
> ffd4000000000000 | -11 PB | ffd5ffffffffffff | 0.5 PB | virtual memory map (vmemmap_base)
> ffd6000000000000 | -10.5 PB | ffdeffffffffffff | 2.25 PB | ... unused hole
> - ffdf000000000000 | -8.25 PB | fffffbffffffffff | ~8 PB | KASAN shadow memory
> + ffdf000000000000 | -8.25 PB | fffffbffffffffff | ~8 PB | KASAN shadow memory (generic mode)
> + ffeffc0000000000 | -6 PB | fffffbffffffffff | 4 PB | KASAN shadow memory (software tag-based mode)
> __________________|____________|__________________|_________|____________________________________________________________
> |
> | Identical layout to the 47-bit one from here on:
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 80527299f859..21c71d9e0698 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -67,6 +67,7 @@ config X86
> select ARCH_CLOCKSOURCE_INIT
> select ARCH_CONFIGURES_CPU_MITIGATIONS
> select ARCH_CORRECT_STACKTRACE_ON_KRETPROBE
> + select ARCH_DISABLE_KASAN_INLINE if X86_64 && KASAN_SW_TAGS
> select ARCH_ENABLE_HUGEPAGE_MIGRATION if X86_64 && HUGETLB_PAGE && MIGRATION
> select ARCH_ENABLE_MEMORY_HOTPLUG if X86_64
> select ARCH_ENABLE_MEMORY_HOTREMOVE if MEMORY_HOTPLUG
> @@ -196,6 +197,8 @@ config X86
> select HAVE_ARCH_JUMP_LABEL_RELATIVE
> select HAVE_ARCH_KASAN if X86_64
> select HAVE_ARCH_KASAN_VMALLOC if X86_64
> + select HAVE_ARCH_KASAN_SW_TAGS if ADDRESS_MASKING
> + select ARCH_NEEDS_DEFER_KASAN if ADDRESS_MASKING
Do we need ARCH_NEEDS_DEFER_KASAN here?
> select HAVE_ARCH_KFENCE
> select HAVE_ARCH_KMSAN if X86_64
> select HAVE_ARCH_KGDB
> @@ -410,6 +413,7 @@ config AUDIT_ARCH
> config KASAN_SHADOW_OFFSET
> hex
> depends on KASAN
> + default 0xeffffc0000000000 if KASAN_SW_TAGS
> default 0xdffffc0000000000
>
> config HAVE_INTEL_TXT
> diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
> index fd855e32c9b9..ba70036c2abd 100644
> --- a/arch/x86/boot/compressed/misc.h
> +++ b/arch/x86/boot/compressed/misc.h
> @@ -13,6 +13,7 @@
> #undef CONFIG_PARAVIRT_SPINLOCKS
> #undef CONFIG_KASAN
> #undef CONFIG_KASAN_GENERIC
> +#undef CONFIG_KASAN_SW_TAGS
>
> #define __NO_FORTIFY
>
> diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
> index 9b7951a79753..b38a1a83af96 100644
> --- a/arch/x86/include/asm/kasan.h
> +++ b/arch/x86/include/asm/kasan.h
> @@ -6,7 +6,12 @@
> #include <linux/kasan-tags.h>
> #include <linux/types.h>
> #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
> +
> +#ifdef CONFIG_KASAN_SW_TAGS
> +#define KASAN_SHADOW_SCALE_SHIFT 4
> +#else
> #define KASAN_SHADOW_SCALE_SHIFT 3
> +#endif
>
> /*
> * Compiler uses shadow offset assuming that addresses start
> diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
> index 7f5c11328ec1..3a5577341805 100644
> --- a/arch/x86/mm/kasan_init_64.c
> +++ b/arch/x86/mm/kasan_init_64.c
> @@ -465,4 +465,10 @@ void __init kasan_init(void)
>
> init_task.kasan_depth = 0;
> kasan_init_generic();
> + pr_info("KernelAddressSanitizer initialized\n");
This pr_info is not needed: kasan_init_generic() already prints this message.
> +
> + if (boot_cpu_has(X86_FEATURE_LAM))
> + kasan_init_sw_tags();
> + else
> + pr_info("KernelAddressSanitizer not initialized (sw-tags): hardware doesn't support LAM\n");
> }
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index a4bb610a7a6f..d13ea8da7bfd 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -112,7 +112,8 @@ config KASAN_SW_TAGS
>
> Requires GCC 11+ or Clang.
>
> - Supported only on arm64 CPUs and relies on Top Byte Ignore.
> + Supported on arm64 CPUs that support Top Byte Ignore and on x86 CPUs
> + that support Linear Address Masking.
>
> Consumes about 1/16th of available memory at kernel start and
> add an overhead of ~20% for dynamic allocations.
> --
> 2.52.0
>
>