Message-ID: <28d98245-e6fc-4f04-876e-366a353ee6ce@amd.com>
Date: Fri, 25 Oct 2024 04:25:50 -0500
From: "Kalra, Ashish" <ashish.kalra@....com>
To: Tom Lendacky <thomas.lendacky@....com>, linux-kernel@...r.kernel.org,
x86@...nel.org
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
Michael Roth <michael.roth@....com>, Nikunj A Dadhania <nikunj@....com>,
Neeraj Upadhyay <Neeraj.Upadhyay@....com>
Subject: Re: [PATCH v4 2/8] x86/sev: Add support for the RMPREAD instruction
On 10/23/2024 1:41 PM, Tom Lendacky wrote:
> The RMPREAD instruction returns an architecture-defined format of an
> RMP table entry. This is the preferred method for examining RMP entries.
>
> The instruction is advertised in CPUID 0x8000001f_EAX[21]. Use this
> instruction when available.
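
For anyone following the thread, the feature bit is straightforward to
probe. A minimal userspace sketch (illustration only, not kernel code;
the leaf and bit are the ones named above):

	#include <cpuid.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* CPUID Fn8000_001F reports AMD SEV/SNP capabilities. */
		if (!__get_cpuid_count(0x8000001f, 0, &eax, &ebx, &ecx, &edx))
			return 1;

		/* EAX bit 21 advertises the RMPREAD instruction. */
		printf("RMPREAD %ssupported\n",
		       (eax & (1u << 21)) ? "" : "not ");
		return 0;
	}
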
>
> Signed-off-by: Tom Lendacky <thomas.lendacky@....com>
> ---
> arch/x86/include/asm/cpufeatures.h | 1 +
> arch/x86/virt/svm/sev.c | 11 +++++++++++
> 2 files changed, 12 insertions(+)
>
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index 913fd3a7bac6..89c1308cdf54 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -448,6 +448,7 @@
> #define X86_FEATURE_V_TSC_AUX (19*32+ 9) /* Virtual TSC_AUX */
> #define X86_FEATURE_SME_COHERENT (19*32+10) /* AMD hardware-enforced cache coherency */
> #define X86_FEATURE_DEBUG_SWAP (19*32+14) /* "debug_swap" AMD SEV-ES full debug state swap support */
> +#define X86_FEATURE_RMPREAD (19*32+21) /* RMPREAD instruction */
> #define X86_FEATURE_SVSM (19*32+28) /* "svsm" SVSM present */
>
> /* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
> diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
> index 4d095affdb4d..e197610b4eed 100644
> --- a/arch/x86/virt/svm/sev.c
> +++ b/arch/x86/virt/svm/sev.c
> @@ -301,6 +301,17 @@ static int get_rmpentry(u64 pfn, struct rmpread *entry)
> {
> struct rmpentry *e;
>
> + if (cpu_feature_enabled(X86_FEATURE_RMPREAD)) {
> + int ret;
> +
> + asm volatile(".byte 0xf2, 0x0f, 0x01, 0xfd"
> + : "=a" (ret)
> + : "a" (pfn << PAGE_SHIFT), "c" (entry)
> + : "memory", "cc");
> +
> + return ret;
> + }
> +
> e = __get_rmpentry(pfn);
> if (IS_ERR(e))
> return PTR_ERR(e);
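
One note for readers: the raw ".byte 0xf2, 0x0f, 0x01, 0xfd" sequence is
the RMPREAD encoding, presumably open-coded because current assemblers
may not know the mnemonic yet. Per the operand constraints above, RAX
carries the system-physical address of the page, RCX points to the
output buffer, and the return code comes back in RAX. If a later
revision wants to factor this out, a possible shape (helper name is
mine, purely illustrative):

	/* Hypothetical wrapper around RMPREAD; the encoding and operand
	 * constraints are taken verbatim from the hunk above.
	 */
	static int rmpread(u64 spa, struct rmpread *entry)
	{
		int ret;

		asm volatile(".byte 0xf2, 0x0f, 0x01, 0xfd"
			     : "=a" (ret)
			     : "a" (spa), "c" (entry)
			     : "memory", "cc");

		return ret;
	}

with get_rmpentry() then reducing to "return rmpread(pfn << PAGE_SHIFT,
entry);" in the RMPREAD path.
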
Reviewed-by: Ashish Kalra <ashish.kalra@....com>
Thanks,
Ashish