Message-ID: <aM2EvzLLmBi5-iQ5@google.com>
Date: Fri, 19 Sep 2025 09:28:47 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Hou Wenlong <houwenlong.hwl@...group.com>
Cc: kvm@...r.kernel.org, Lai Jiangshan <jiangshan.ljs@...group.com>, 
	Paolo Bonzini <pbonzini@...hat.com>, Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, 
	Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org, 
	"H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] KVM: x86: Add helper to retrieve cached value of user
 return MSR

On Thu, Sep 18, 2025, Hou Wenlong wrote:
> In the user return MSR support, the cached value is always the hardware
> value of the specific MSR. Therefore, add a helper to retrieve the
> cached value, which can replace the need for RDMSR, for example, to
> allow SEV-ES guests to restore the correct host hardware value without
> using RDMSR.
> 
> Signed-off-by: Hou Wenlong <houwenlong.hwl@...group.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 1 +
>  arch/x86/kvm/x86.c              | 8 ++++++++
>  2 files changed, 9 insertions(+)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index cb86f3cca3e9..2cbb0f446a9b 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -2376,6 +2376,7 @@ int kvm_add_user_return_msr(u32 msr);
>  int kvm_find_user_return_msr(u32 msr);
>  int kvm_set_user_return_msr(unsigned index, u64 val, u64 mask);
>  void kvm_user_return_msr_update_cache(unsigned int index, u64 val);
> +u64 kvm_get_user_return_msr_cache(unsigned int index);

s/index/slot (the existing helpers' declarations need the same change).  The
user_return APIs deliberately use "slot" to try to make it more obvious that they
take the slot within the array, not the index of the MSR.
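
E.g. the intended pattern (purely illustrative, names made up) is to resolve the
MSR to its slot once and then operate only on the slot:

	int slot = kvm_find_user_return_msr(MSR_TSC_AUX);

	if (slot < 0)
		return;

	/* "slot" is the position in kvm_uret_msrs_list[], not 0xC0000103. */
	kvm_set_user_return_msr(slot, data, -1ull);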

>  static inline bool kvm_is_supported_user_return_msr(u32 msr)
>  {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 6d85fbafc679..88d26c86c3b2 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -675,6 +675,14 @@ void kvm_user_return_msr_update_cache(unsigned int slot, u64 value)
>  }
>  EXPORT_SYMBOL_GPL(kvm_user_return_msr_update_cache);
>  
> +u64 kvm_get_user_return_msr_cache(unsigned int slot)

I vote to drop "cache".  I don't love the existing kvm_user_return_msr_update_cache()
name (or implementation).  I would much rather that code be the below (I'll post a
separate patch), to capture that the "cache" variant performs a subset of
kvm_set_user_return_msr().

void __kvm_set_user_return_msr(unsigned int slot, u64 value)
{
	struct kvm_user_return_msrs *msrs = this_cpu_ptr(user_return_msrs);

	msrs->values[slot].curr = value;
	kvm_user_return_register_notifier(msrs);
}
EXPORT_SYMBOL_GPL(__kvm_set_user_return_msr);

int kvm_set_user_return_msr(unsigned slot, u64 value, u64 mask)
{
	struct kvm_user_return_msrs *msrs = this_cpu_ptr(user_return_msrs);
	int err;

	value = (value & mask) | (msrs->values[slot].host & ~mask);
	if (value == msrs->values[slot].curr)
		return 0;
	err = wrmsrq_safe(kvm_uret_msrs_list[slot], value);
	if (err)
		return 1;

	__kvm_set_user_return_msr(slot, value);
	return 0;
}
EXPORT_SYMBOL_GPL(kvm_set_user_return_msr);
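
Existing callers of the "cache" helper would then simply switch over, e.g.
(hypothetical diff against a caller):

	-	kvm_user_return_msr_update_cache(slot, value);
	+	__kvm_set_user_return_msr(slot, value);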

> +{
> +	struct kvm_user_return_msrs *msrs = this_cpu_ptr(user_return_msrs);
> +
> +	return msrs->values[slot].curr;

This can be a one-liner.  How about this?

---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/x86.c              | 6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 17772513b9cc..14236006266b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2376,6 +2376,7 @@ int kvm_add_user_return_msr(u32 msr);
 int kvm_find_user_return_msr(u32 msr);
 int kvm_set_user_return_msr(unsigned index, u64 val, u64 mask);
 void kvm_user_return_msr_update_cache(unsigned int index, u64 val);
+u64 kvm_get_user_return_msr(unsigned int slot);
 
 static inline bool kvm_is_supported_user_return_msr(u32 msr)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e07936efacd4..801bf6172a21 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -675,6 +675,12 @@ void kvm_user_return_msr_update_cache(unsigned int slot, u64 value)
 }
 EXPORT_SYMBOL_GPL(kvm_user_return_msr_update_cache);
 
+u64 kvm_get_user_return_msr(unsigned int slot)
+{
+	return this_cpu_ptr(user_return_msrs)->values[slot].curr;
+}
+EXPORT_SYMBOL_GPL(kvm_get_user_return_msr);
+
 static void drop_user_return_notifiers(void)
 {
 	struct kvm_user_return_msrs *msrs = this_cpu_ptr(user_return_msrs);

base-commit: c8fbf7ceb2ae3f64b0c377c8c21f6df577a13eb4
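
Purely for illustration (none of this is from the series, names are made up), the
SEV-ES use case from the changelog would then boil down to reading the cached
hardware value instead of doing a RDMSR:

	int slot = kvm_find_user_return_msr(MSR_TSC_AUX);
	u64 hw_tsc_aux;

	/*
	 * The cache always mirrors the current hardware value, so the RDMSR
	 * can be skipped whenever the MSR has a registered slot.
	 */
	if (slot >= 0)
		hw_tsc_aux = kvm_get_user_return_msr(slot);
	else
		rdmsrq(MSR_TSC_AUX, hw_tsc_aux);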
-- 
