Message-ID: <465bff34-c145-90f9-80d2-b9e69791022b@amd.com>
Date: Wed, 16 Oct 2024 16:16:34 +0530
From: "Nikunj A. Dadhania" <nikunj@....com>
To: Tom Lendacky <thomas.lendacky@....com>, linux-kernel@...r.kernel.org,
x86@...nel.org
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
Michael Roth <michael.roth@....com>, Ashish Kalra <ashish.kalra@....com>,
nikunj@....com
Subject: Re: [PATCH v3 2/8] x86/sev: Add support for the RMPREAD instruction
On 9/30/2024 8:52 PM, Tom Lendacky wrote:
> The RMPREAD instruction returns an architecture-defined format of an
> RMP table entry. This is the preferred method for examining RMP entries.
>
> The instruction is advertised in CPUID 0x8000001f_EAX[21]. Use this
> instruction when available.
>
> Signed-off-by: Tom Lendacky <thomas.lendacky@....com>
Reviewed-by: Nikunj A Dadhania <nikunj@....com>
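
One note for anyone testing on early hardware: the CPUID bit can be
probed from userspace independently of the new kernel flag. A minimal
sketch, assuming GCC's <cpuid.h>; this check is illustrative only and
not part of the patch:

	#include <cpuid.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* CPUID Fn8000_001F: EAX bit 21 advertises RMPREAD */
		if (!__get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx))
			return 1;

		printf("RMPREAD %ssupported\n",
		       (eax & (1u << 21)) ? "" : "not ");
		return 0;
	}
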
> ---
> arch/x86/include/asm/cpufeatures.h | 1 +
> arch/x86/virt/svm/sev.c | 11 +++++++++++
> 2 files changed, 12 insertions(+)
>
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index dd4682857c12..93620a4c5b15 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -447,6 +447,7 @@
> #define X86_FEATURE_V_TSC_AUX (19*32+ 9) /* Virtual TSC_AUX */
> #define X86_FEATURE_SME_COHERENT (19*32+10) /* AMD hardware-enforced cache coherency */
> #define X86_FEATURE_DEBUG_SWAP (19*32+14) /* "debug_swap" AMD SEV-ES full debug state swap support */
> +#define X86_FEATURE_RMPREAD (19*32+21) /* RMPREAD instruction */
> #define X86_FEATURE_SVSM (19*32+28) /* "svsm" SVSM present */
>
> /* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
> diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
> index 103a2dd6e81d..73d4f422829a 100644
> --- a/arch/x86/virt/svm/sev.c
> +++ b/arch/x86/virt/svm/sev.c
> @@ -301,6 +301,17 @@ static int get_rmpentry(u64 pfn, struct rmpentry *entry)
> {
> struct rmpentry_raw *e;
>
> + if (cpu_feature_enabled(X86_FEATURE_RMPREAD)) {
> + int ret;
> +
> + asm volatile(".byte 0xf2, 0x0f, 0x01, 0xfd"
> + : "=a" (ret)
> + : "a" (pfn << PAGE_SHIFT), "c" (entry)
> + : "memory", "cc");
> +
> + return ret;
> + }
> +
> e = __get_rmpentry(pfn);
> if (IS_ERR(e))
> return PTR_ERR(e);
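
For readers following along: the raw byte sequence 0xf2 0x0f 0x01 0xfd
is the RMPREAD encoding, emitted directly so the code still builds with
assemblers that do not know the mnemonic. The asm constraints mirror
the instruction's convention as used here: the page's system physical
address (pfn << PAGE_SHIFT) goes in RAX, RCX points at the destination
entry, and the status comes back in RAX.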