Message-ID: <aWpv9MgPHMYR27I0@wieczorr-mobl1.localdomain>
Date: Fri, 16 Jan 2026 17:09:17 +0000
From: Maciej Wieczor-Retman <m.wieczorretman@...me>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>, Andrey Ryabinin <ryabinin.a.a@...il.com>, Thomas Gleixner <tglx@...nel.org>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>, Alexander Potapenko <glider@...gle.com>, linux-kernel@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>, kvm <kvm@...r.kernel.org>
Subject: Re: [PATCH v8 09/14] x86/mm: LAM compatible non-canonical definition

On 2026-01-16 at 09:00:42 -0800, Sean Christopherson wrote:
>On Fri, Jan 16, 2026, Maciej Wieczor-Retman wrote:
>> On 2026-01-16 at 06:57:04 -0800, Sean Christopherson wrote:
>> >On Fri, Jan 16, 2026, Andrey Ryabinin wrote:
>> >> On 1/12/26 6:28 PM, Maciej Wieczor-Retman wrote:
>> >> > diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
>> >> > index bcf5cad3da36..b7940fa49e64 100644
>> >> > --- a/arch/x86/include/asm/page.h
>> >> > +++ b/arch/x86/include/asm/page.h
>> >> > @@ -82,9 +82,22 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
>> >> >  	return __va(pfn << PAGE_SHIFT);
>> >> >  }
>> >> >
>> >> > +#ifdef CONFIG_KASAN_SW_TAGS
>> >> > +#define CANONICAL_MASK(vaddr_bits) (BIT_ULL(63) | BIT_ULL((vaddr_bits) - 1))
>> >>
>> >> why is the choice of CANONICAL_MASK() gated at compile time? Shouldn’t this be a
>> >> runtime decision based on whether LAM is enabled or not on the running system?
>>
>> What would be appropriate for KVM? Checking if (cpu_feature_enabled(X86_FEATURE_LAM))
>> at runtime instead of using #ifdefs?
>
>None of the above.  Practically speaking, the kernel APIs simply can't automatically
>handle the checks, because they are dependent on guest virtual CPU state, _and_
>on the exact operation.  E.g. LAM doesn't apply to inputs to TLB invalidation
>instructions like INVVPID and INVPCID.
>
>By the time KVM invokes __is_canonical_address(), KVM has already done the necessary
>LAM unmasking.  E.g. having KVM pass in a flag saying it doesn't need LAM masking
>would be rather silly.

Oh good, then I'll leave this function alone and try to work it out differently.
Thanks!
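
(For reference, the behaviour the comment in that hunk describes boils down to
roughly the sketch below - illustrative only, not the exact hunk. With
vaddr_bits == 48, the KASAN_SW_TAGS mask covers just bits 63 and 47, so the
tag/LAM bits in between are left untouched, while the non-tagged mask covers
bits 63:48 as before.)

/*
 * Illustrative sketch, not the exact patch hunk: force the
 * CANONICAL_MASK() bits to match bit 63. Kernel addresses (bit 63 set)
 * get the mask bits set, user addresses get them cleared.
 * CANONICAL_MASK() as above, BIT_ULL() from <linux/bits.h>.
 */
static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
{
	u64 mask = CANONICAL_MASK(vaddr_bits);

	return (vaddr & BIT_ULL(63)) ? (vaddr | mask) : (vaddr & ~mask);
}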

>
>> >> > +#else
>> >> > +#define CANONICAL_MASK(vaddr_bits) GENMASK_ULL(63, vaddr_bits)
>> >> > +#endif
>> >> > +
>> >> > +/*
>> >> > + * To make an address canonical either set or clear the bits defined by the
>> >> > + * CANONICAL_MASK(). Clear the bits for userspace addresses if the top address
>> >> > + * bit is a zero. Set the bits for kernel addresses if the top address bit is a
>> >> > + * one.
>> >> > + */
>> >> >  static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
>> >>
>> >> +Cc KVM
>> >
>> >Thanks!
>> >
>> >> This is used extensively in KVM code. As far as I can tell, it may be used to
>> >> determine whether a guest virtual address is canonical or not.
>> >
>> >Yep, KVM uses this both to check canonical addresses and to force a canonical
>> >address (Intel and AMD disagree on the MSR_IA32_SYSENTER_{EIP,ESP} semantics in
>> >64-bit mode) for guest addresses.  This change will break KVM badly if KASAN_SW_TAGS=y.
>>
>> Oh, thanks! That's good to know.
>>
>> >
>> >> case, the result should depend on whether LAM is enabled for the guest, not
>> >> the host (and certainly not a host's compile-time option).
>> >
>> >Ya, KVM could roll its own versions, but IMO these super low level helpers should
>> >do exactly what they say.  E.g. at a glance, I'm not sure pt_event_addr_filters_sync()
>> >should be subjected to KASAN_SW_TAGS either.  If that's true, then AFAICT the
>> >_only_ code that wants the LAM-aware behavior is copy_from_kernel_nofault_allowed(),
>> >so maybe just handle it there?  Not sure that's a great long-term maintenance
>> >story either though.
>>
>> Yes, long term it's probably best to just get it right here.
>
>As above, I don't think that's feasible, because the context of both the current
>(virtual) CPU and the usage matters.  In other words, making __canonical_address()
>itself LAM-aware feels wrong.
>
>Actually, the kernel already has to deal with masking LAM bits for userspace
>addresses, and this series needs to unmask kernel addresses in other flows that
>effectively consume virtual addresses in software, so why not just do something
>similar for copy_from_kernel_nofault_allowed()?
>
>diff --git a/arch/x86/mm/maccess.c b/arch/x86/mm/maccess.c
>index 42115ac079cf..0b3c96f8902a 100644
>--- a/arch/x86/mm/maccess.c
>+++ b/arch/x86/mm/maccess.c
>@@ -33,7 +33,8 @@ bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
>        if (!boot_cpu_data.x86_virt_bits)
>                return true;
>
>-       return __is_canonical_address(vaddr, boot_cpu_data.x86_virt_bits);
>+       return __is_canonical_address(__tag_reset(vaddr),
>+                                     boot_cpu_data.x86_virt_bits);
> }
> #else
> bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)

Thanks! I'll try that :)
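
(Noting for the archive: __tag_reset() there just folds the tag back into the
canonical all-ones pattern before the plain check. A minimal sketch of that
idea, assuming a top-byte tag as on arm64 - the exact x86 bit positions are
whatever this series ends up defining:)

/*
 * Illustrative only, not the series' definition: drop a software tag
 * by re-sign-extending from just below the (assumed) top-byte tag
 * field, so a tagged kernel pointer is canonical again before
 * __is_canonical_address(). sign_extend64() is from <linux/bitops.h>.
 */
static inline u64 tag_reset_sketch(u64 vaddr)
{
	return (u64)sign_extend64(vaddr, 55);
}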

-- 
Kind regards
Maciej Wieczór-Retman

