Date:   Wed, 16 Aug 2023 14:41:33 -0700
From:   Sean Christopherson <seanjc@...gle.com>
To:     Binbin Wu <binbin.wu@...ux.intel.com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        pbonzini@...hat.com, chao.gao@...el.com, kai.huang@...el.com,
        David.Laight@...lab.com, robert.hu@...ux.intel.com,
        guang.zeng@...el.com
Subject: Re: [PATCH v10 4/9] KVM: x86: Virtualize CR4.LAM_SUP

This patch doesn't virtualize LAM_SUP; it simply allows the guest to enable
CR4.LAM_SUP (ignoring that that's not possible at this point, because KVM will
reject CR4 values that *KVM* doesn't support).
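
For reference, that rejection happens in kvm_set_cr4() via a check along these
lines (paraphrased from arch/x86/kvm/x86.c; exact helper and field names may
differ across trees):

	if ((cr4 & cr4_reserved_bits) ||
	    (cr4 & vcpu->arch.cr4_guest_rsvd_bits))
		return 1;	/* reserved bit set => #GP to the guest */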

Actually virtualizing LAM_SUP requires the bits from "KVM: VMX: Implement and wire
get_untagged_addr() for LAM".  You can still separate LAM_SUP from LAM_U*, but
these patches should come *after* the get_untagged_addr() hook is added.  Whichever
of LAM_SUP vs. LAM_U* lands first can then implement vmx_get_untagged_addr() and
simply return the raw gva in the "other" case.  E.g. if you add LAM_SUP first, the
code can be:

	if (!(gva & BIT_ULL(63))) {
		/* KVM doesn't yet virtualize LAM_U{48,57}. */
		return gva;
	} else {
		if (!kvm_is_cr4_bit_set(vcpu, X86_CR4_LAM_SUP))
			return gva;

		lam_bit = kvm_is_cr4_bit_set(vcpu, X86_CR4_LA57) ? 56 : 47;
	}
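
For completeness, the actual untag step that would follow the above is just a
sign-extension of bit 47/56 that leaves bit 63 intact, since LAM only masks
bits 62:48 (or 62:57).  A sketch, assuming the generic sign_extend64() helper
from <linux/bitops.h>:

	/*
	 * Overwrite the metadata bits with the sign-extension of bit 47/56,
	 * preserving bit 63.
	 */
	return (sign_extend64(gva, lam_bit) & ~BIT_ULL(63)) |
	       (gva & BIT_ULL(63));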

On Wed, Jul 19, 2023, Binbin Wu wrote:
> From: Robert Hoo <robert.hu@...ux.intel.com>
> 
> Add support to allow guests to set the new CR4 control bit CR4.LAM_SUP to
> enable the new Intel CPU feature Linear Address Masking (LAM) for supervisor
> pointers.
> 
> LAM modifies the checking that is applied to 64-bit linear addresses, allowing
> software to use the untranslated address bits for metadata; the processor masks
> off the metadata bits before using the address to access memory. LAM uses
> CR4.LAM_SUP (bit 28) to configure LAM for supervisor pointers. LAM also changes
> VM entry to allow the bit to be set in the VMCS's HOST_CR4 and GUEST_CR4 for
> virtualization. Note that CR4.LAM_SUP is allowed to be set even outside 64-bit
> mode, but it will not take effect, since LAM only applies to 64-bit linear
> addresses.
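
A concrete example of the masking, with illustrative values: under LAM_SUP and
4-level paging, bits 62:48 hold metadata and are replaced by the sign-extension
of bit 47, while bit 63 is preserved:

	u64 tagged   = 0xf00dffffc0001000ull;	/* bit 63 set => supervisor */
	u64 untagged = (sign_extend64(tagged, 47) & ~BIT_ULL(63)) |
		       (tagged & BIT_ULL(63));	/* == 0xffffffffc0001000 */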
> 
> Move CR4.LAM_SUP out of CR4_RESERVED_BITS; whether the bit is reserved now
> depends on whether the vCPU supports the LAM feature. Leave the bit
> intercepted, both to prevent the guest from setting CR4.LAM_SUP if LAM is not
> exposed to the guest and to avoid a VMREAD every time KVM fetches the bit's
> value, with the expectation that the guest won't toggle the bit frequently.
> 
> Set the CR4.LAM_SUP bit in the emulated IA32_VMX_CR4_FIXED1 MSR to allow
> guests to enable LAM for supervisor pointers in nested VMX operation.
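
(For readers following along: a CR4 bit must be 1 in VMX operation if it is 1
in IA32_VMX_CR4_FIXED0, and may be 1 only if it is also 1 in
IA32_VMX_CR4_FIXED1, so with fixed0/fixed1 holding the two MSR values the
legality check is roughly:)

	/*
	 * cr4 is legal for VMX operation iff all FIXED0 bits are set and no
	 * bits outside FIXED1 are set.
	 */
	bool valid = ((cr4 & fixed0) == fixed0) && !(cr4 & ~fixed1);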
> 
> Hardware is not required to flush the TLB when CR4.LAM_SUP is toggled, so KVM
> doesn't need to emulate a TLB flush when it changes. There is no interaction
> with other features or VMX execution controls, so no other code is needed in
> {kvm,vmx}_set_cr4().
> 
> Signed-off-by: Robert Hoo <robert.hu@...ux.intel.com>
> Co-developed-by: Binbin Wu <binbin.wu@...ux.intel.com>
> Signed-off-by: Binbin Wu <binbin.wu@...ux.intel.com>
> Reviewed-by: Chao Gao <chao.gao@...el.com>
> Reviewed-by: Kai Huang <kai.huang@...el.com>
> Tested-by: Xuelian Guo <xuelian.guo@...el.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 3 ++-
>  arch/x86/kvm/vmx/vmx.c          | 3 +++
>  arch/x86/kvm/x86.h              | 2 ++
>  3 files changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index e8e1101a90c8..881a0be862e1 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -125,7 +125,8 @@
>  			  | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \
>  			  | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
>  			  | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
> -			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP))
> +			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP \
> +			  | X86_CR4_LAM_SUP))
>  
>  #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
>  
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index ae47303c88d7..a0d6ea87a2d0 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7646,6 +7646,9 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
>  	cr4_fixed1_update(X86_CR4_UMIP,       ecx, feature_bit(UMIP));
>  	cr4_fixed1_update(X86_CR4_LA57,       ecx, feature_bit(LA57));
>  
> +	entry = kvm_find_cpuid_entry_index(vcpu, 0x7, 1);
> +	cr4_fixed1_update(X86_CR4_LAM_SUP,    eax, feature_bit(LAM));
> +
>  #undef cr4_fixed1_update
>  }
>  
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 82e3dafc5453..24e2b56356b8 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -528,6 +528,8 @@ bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type);
>  		__reserved_bits |= X86_CR4_VMXE;        \
>  	if (!__cpu_has(__c, X86_FEATURE_PCID))          \
>  		__reserved_bits |= X86_CR4_PCIDE;       \
> +	if (!__cpu_has(__c, X86_FEATURE_LAM))           \
> +		__reserved_bits |= X86_CR4_LAM_SUP;     \
>  	__reserved_bits;                                \
>  })
>  
> -- 
> 2.25.1
> 
