Message-ID: <20210415165711.GD6318@zn.tnic>
Date: Thu, 15 Apr 2021 18:57:11 +0200
From: Borislav Petkov <bp@...en8.de>
To: Brijesh Singh <brijesh.singh@....com>
Cc: linux-kernel@...r.kernel.org, x86@...nel.org, kvm@...r.kernel.org,
linux-crypto@...r.kernel.org, ak@...ux.intel.com,
herbert@...dor.apana.org.au, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Joerg Roedel <jroedel@...e.de>,
"H. Peter Anvin" <hpa@...or.com>, Tony Luck <tony.luck@...el.com>,
Dave Hansen <dave.hansen@...el.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Tom Lendacky <thomas.lendacky@....com>,
David Rientjes <rientjes@...gle.com>,
Sean Christopherson <seanjc@...gle.com>
Subject: Re: [RFC Part2 PATCH 02/30] x86/sev-snp: add RMP entry lookup helpers
On Wed, Mar 24, 2021 at 12:04:08PM -0500, Brijesh Singh wrote:
> lookup_page_in_rmptable() can be used by the host to read the RMP
> entry for a given page. The RMP entry format is documented in PPR
> section 2.1.5.2.
I see
Table 15-36. Fields of an RMP Entry
in the APM.
Which PPR do you mean? Also, you know where to put those documents,
right?
> +/* RMP table entry format (PPR section 2.1.5.2) */
> +struct __packed rmpentry {
> + union {
> + struct {
> + uint64_t assigned:1;
> + uint64_t pagesize:1;
> + uint64_t immutable:1;
> + uint64_t rsvd1:9;
> + uint64_t gpa:39;
> + uint64_t asid:10;
> + uint64_t vmsa:1;
> + uint64_t validated:1;
> + uint64_t rsvd2:1;
> + } info;
> + uint64_t low;
> + };
> + uint64_t high;
> +};
> +
> +typedef struct rmpentry rmpentry_t;
Eww, a typedef. Why?
struct rmpentry is just fine.
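IOW, something like this, with the snp_ prefix suggested further down:

        struct rmpentry *snp_lookup_page_in_rmptable(struct page *page, int *level);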
> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index 39461b9cb34e..06394b6d56b2 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -34,6 +34,8 @@
>
> #include "mm_internal.h"
>
<--- Needs a comment here to explain the magic 0x4000 and the magic
shift by 8.
> +#define rmptable_page_offset(x) (0x4000 + (((unsigned long) x) >> 8))
> +
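If I understand the layout correctly - the first 16K of the RMP is
reserved for the processor and each RMP entry is 16 bytes, covering one
4K page - then something along these lines would do:

        /*
         * Skip the first 16K (0x4000 bytes) of the RMP, which is reserved
         * for processor use. Each RMP entry is 16 bytes and covers one 4K
         * page, so the entry for a physical address is at
         * 0x4000 + (phys >> 12) * 16 == 0x4000 + (phys >> 8).
         */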
> /*
> * Since SME related variables are set early in the boot process they must
> * reside in the .data section so as not to be zeroed out when the .bss
> @@ -612,3 +614,33 @@ static int __init mem_encrypt_snp_init(void)
> * SEV-SNP must be enabled across all CPUs, so make the initialization as a late initcall.
> */
> late_initcall(mem_encrypt_snp_init);
> +
> +rmpentry_t *lookup_page_in_rmptable(struct page *page, int *level)
snp_lookup_page_in_rmptable()
> +{
> + unsigned long phys = page_to_pfn(page) << PAGE_SHIFT;
> + rmpentry_t *entry, *large_entry;
> + unsigned long vaddr;
> +
> + if (!static_branch_unlikely(&snp_enable_key))
> + return NULL;
> +
> + vaddr = rmptable_start + rmptable_page_offset(phys);
> + if (WARN_ON(vaddr > rmptable_end))
Do you really want to spew a WARN_ON splat for each wrong vaddr? What
for?
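A plain

        if (vaddr > rmptable_end)
                return NULL;

should be enough here.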
> + return NULL;
> +
> + entry = (rmpentry_t *)vaddr;
> +
> + /*
> + * Check if this page is covered by the large RMP entry. This is needed to get
> + * the page level used in the RMP entry.
> + *
No need for a new line in the comment and no need for the "e.g." thing
either.
Also, s/the large RMP entry/a large RMP entry/g.
> + * e.g. if the page is covered by the large RMP entry then page size is set in the
> + * base RMP entry.
> + */
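IOW, something like this, if I'm reading the intent right:

        /*
         * Check if this page is covered by a large RMP entry. This is
         * needed to get the page level used in the RMP entry: if the page
         * is covered by a large RMP entry, the page size is set in the
         * base RMP entry.
         */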
> + vaddr = rmptable_start + rmptable_page_offset(phys & PMD_MASK);
> + large_entry = (rmpentry_t *)vaddr;
> + *level = rmpentry_pagesize(large_entry);
> +
> + return entry;
> +}
> +EXPORT_SYMBOL_GPL(lookup_page_in_rmptable);
Exported for KVM?
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette