Message-ID: <234bb23c-d295-76e5-a690-7ea68dc1118b@amd.com>
Date: Mon, 17 Jun 2024 12:50:27 -0500
From: Tom Lendacky <thomas.lendacky@....com>
To: linux-kernel@...r.kernel.org, x86@...nel.org, linux-coco@...ts.linux.dev,
svsm-devel@...onut-svsm.dev
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>, Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Dan Williams <dan.j.williams@...el.com>, Michael Roth
<michael.roth@....com>, Ashish Kalra <ashish.kalra@....com>
Subject: Re: [PATCH v5 04/13] x86/sev: Perform PVALIDATE using the SVSM when
not at VMPL0
On 6/5/24 10:18, Tom Lendacky wrote:
> The PVALIDATE instruction can only be performed at VMPL0. An SVSM will
> be present when running at VMPL1 or a lower privilege level.
>
> When an SVSM is present, use the SVSM_CORE_PVALIDATE call to perform
> memory validation instead of issuing the PVALIDATE instruction directly.
>
> The validation of a single 4K page is now explicitly identified as such
> in the function name, pvalidate_4k_page(). The pvalidate_pages() function
> is used to validate one or more pages of either 4K or 2M size. Each
> function, however, determines whether it can issue PVALIDATE directly
> or whether the SVSM needs to be invoked.
>
> Signed-off-by: Tom Lendacky <thomas.lendacky@....com>
> ---
> arch/x86/boot/compressed/sev.c | 45 +++++-
> arch/x86/include/asm/sev.h | 26 ++++
> arch/x86/kernel/sev-shared.c | 250 +++++++++++++++++++++++++++++++--
> arch/x86/kernel/sev.c | 30 ++--
> 4 files changed, 325 insertions(+), 26 deletions(-)
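To spell out the dispatch the changelog above describes, the 4K path
reduces to roughly the following shape. This is an illustrative sketch
only, not the patch itself: it assumes the series' snp_vmpl variable
(non-zero when not running at VMPL0) and its svsm_pval_4k_page() helper,
and it omits details such as the early-boot RIP-relative accesses.

  /* Sketch: validate one 4K page directly or through the SVSM */
  static void pvalidate_4k_page(unsigned long vaddr, unsigned long paddr,
                                bool validate)
  {
          int ret;

          /* Not at VMPL0: PVALIDATE would fail, so ask the SVSM to do it */
          if (snp_vmpl)
                  ret = svsm_pval_4k_page(paddr, validate);
          else
                  ret = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);

          if (ret)
                  sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
  }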
Small fix on top of this patch for SVSM PVALIDATE support: have
svsm_build_ca_from_psc_desc() take the starting descriptor entry index as
a parameter and return the updated index, rather than reading and writing
desc->hdr.cur_entry directly.
Thanks,
Tom
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index b889be32ef9c..7933c1203b63 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -1360,15 +1360,12 @@ static u64 svsm_build_ca_from_pfn_range(u64 pfn, u64 pfn_end, bool action,
         return pfn;
 }
 
-static void svsm_build_ca_from_psc_desc(struct snp_psc_desc *desc,
-                                        struct svsm_pvalidate_call *pc)
+static int svsm_build_ca_from_psc_desc(struct snp_psc_desc *desc, unsigned int desc_entry,
+                                       struct svsm_pvalidate_call *pc)
 {
         struct svsm_pvalidate_entry *pe;
-        unsigned int desc_entry;
         struct psc_entry *e;
 
-        desc_entry = desc->hdr.cur_entry;
-
         /* Nothing in the CA yet */
         pc->num_entries = 0;
         pc->cur_index = 0;
@@ -1391,7 +1388,7 @@ static void svsm_build_ca_from_psc_desc(struct snp_psc_desc *desc,
                 break;
         }
 
-        desc->hdr.cur_entry = desc_entry;
+        return desc_entry;
 }
 
 static void svsm_pval_pages(struct snp_psc_desc *desc)
@@ -1427,8 +1424,8 @@ static void svsm_pval_pages(struct snp_psc_desc *desc)
         call.rax = SVSM_CORE_CALL(SVSM_CORE_PVALIDATE);
         call.rcx = pc_pa;
 
-        while (desc->hdr.cur_entry <= desc->hdr.end_entry) {
-                svsm_build_ca_from_psc_desc(desc, pc);
+        for (i = 0; i <= desc->hdr.end_entry;) {
+                i = svsm_build_ca_from_psc_desc(desc, i, pc);
 
                 do {
                         ret = svsm_perform_call_protocol(&call);
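For clarity, the net effect of the hunks above is a loop of roughly this
shape (simplified sketch; the retry and per-entry failure handling live in
the part of the function not shown in the quoted hunk):

  /* i, not the shared desc->hdr.cur_entry, now tracks progress */
  for (i = 0; i <= desc->hdr.end_entry;) {
          /*
           * Pack entries starting at index i into the calling area;
           * the function returns the first index it did not pack.
           */
          i = svsm_build_ca_from_psc_desc(desc, i, pc);

          /* Issue SVSM_CORE_PVALIDATE for the packed entries */
          ret = svsm_perform_call_protocol(&call);
  }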