Message-ID: <2968b97c-5d71-4c05-9013-f275bdbd9cd5@gmail.com>
Date: Fri, 16 Jan 2026 15:25:46 +0100
From: Andrey Ryabinin <ryabinin.a.a@...il.com>
To: Maciej Wieczor-Retman <m.wieczorretman@...me>,
Thomas Gleixner <tglx@...nel.org>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>
Cc: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>,
Alexander Potapenko <glider@...gle.com>, linux-kernel@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>, Sean Christopherson
<seanjc@...gle.com>, kvm <kvm@...r.kernel.org>
Subject: Re: [PATCH v8 09/14] x86/mm: LAM compatible non-canonical definition
On 1/12/26 6:28 PM, Maciej Wieczor-Retman wrote:
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>
>
> For an address to be canonical it has to have its top bits equal to each
> other. The number of bits depends on the paging level and whether
> they're supposed to be ones or zeroes depends on whether the address
> points to kernel or user space.
>
> With Linear Address Masking (LAM) enabled, the definition of linear
> address canonicality is modified. Not all of the previously required
> bits need to be equal, only the first and last from the previously equal
> bitmask. So for example a 5-level paging kernel address needs to have
> bits [63] and [56] set.
>
> Change the canonical checking function to use bit masks instead of bit
> shifts.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>
> Acked-by: Alexander Potapenko <glider@...gle.com>
> ---
> Changelog v7:
> - Add Alexander's acked-by tag.
> - Add parentheses around vaddr_bits as suggested by checkpatch.
> - Apply the bitmasks to the __canonical_address() function which is used
> in kvm code.
>
> Changelog v6:
> - Use bitmasks to check both kernel and userspace addresses in the
> __is_canonical_address() (Dave Hansen and Samuel Holland).
>
> Changelog v4:
> - Add patch to the series.
>
> arch/x86/include/asm/page.h | 15 ++++++++++++++-
> 1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
> index bcf5cad3da36..b7940fa49e64 100644
> --- a/arch/x86/include/asm/page.h
> +++ b/arch/x86/include/asm/page.h
> @@ -82,9 +82,22 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
> return __va(pfn << PAGE_SHIFT);
> }
>
> +#ifdef CONFIG_KASAN_SW_TAGS
> +#define CANONICAL_MASK(vaddr_bits) (BIT_ULL(63) | BIT_ULL((vaddr_bits) - 1))
Why is the choice of CANONICAL_MASK() gated at compile time? Shouldn't this be a
runtime decision based on whether LAM is enabled on the running system?
> +#else
> +#define CANONICAL_MASK(vaddr_bits) GENMASK_ULL(63, vaddr_bits)
> +#endif
> +
> +/*
> + * To make an address canonical either set or clear the bits defined by the
> + * CANONICAL_MASK(). Clear the bits for userspace addresses if the top address
> + * bit is a zero. Set the bits for kernel addresses if the top address bit is a
> + * one.
> + */
> static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
+Cc KVM
This is used extensively in KVM code. As far as I can tell, it may be used to determine
whether a guest virtual address is canonical. If that's the case, the result should
depend on whether LAM is enabled for the guest, not the host (and certainly not on the host's compile-time options).
> {
> - return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
> + return (vaddr & BIT_ULL(63)) ? vaddr | CANONICAL_MASK(vaddr_bits) :
> + vaddr & ~CANONICAL_MASK(vaddr_bits);
> }
>
> static __always_inline u64 __is_canonical_address(u64 vaddr, u8 vaddr_bits)