Message-ID: <YcxBzXc4+b+hrXJE@zn.tnic>
Date: Wed, 29 Dec 2021 12:09:01 +0100
From: Borislav Petkov <bp@...en8.de>
To: Brijesh Singh <brijesh.singh@....com>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
linux-efi@...r.kernel.org, platform-driver-x86@...r.kernel.org,
linux-coco@...ts.linux.dev, linux-mm@...ck.org,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Joerg Roedel <jroedel@...e.de>,
Tom Lendacky <thomas.lendacky@....com>,
"H. Peter Anvin" <hpa@...or.com>, Ard Biesheuvel <ardb@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Jim Mattson <jmattson@...gle.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Sergio Lopez <slp@...hat.com>, Peter Gonda <pgonda@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
David Rientjes <rientjes@...gle.com>,
Dov Murik <dovmurik@...ux.ibm.com>,
Tobin Feldman-Fitzthum <tobin@....com>,
Michael Roth <michael.roth@....com>,
Vlastimil Babka <vbabka@...e.cz>,
"Kirill A . Shutemov" <kirill@...temov.name>,
Andi Kleen <ak@...ux.intel.com>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
tony.luck@...el.com, marcorr@...gle.com,
sathyanarayanan.kuppuswamy@...ux.intel.com
Subject: Re: [PATCH v8 15/40] x86/mm: Add support to validate memory when
changing C-bit
On Fri, Dec 10, 2021 at 09:43:07AM -0600, Brijesh Singh wrote:
> The set_memory_{encrypt,decrypt}() are used for changing the pages
$ git grep -E "set_memory_decrypt\W"
$
Please check all your commit messages to make sure you're quoting the
proper function names.
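For reference, the existing x86 helpers are declared roughly like this
(a sketch based on arch/x86/include/asm/set_memory.h - double-check
against your tree):

	/* Mark a virtual address range as encrypted (guest-private). */
	int set_memory_encrypted(unsigned long addr, int numpages);

	/* Mark a virtual address range as decrypted (shared with the HV). */
	int set_memory_decrypted(unsigned long addr, int numpages);

IOW, set_memory_encrypted()/set_memory_decrypted(), not
set_memory_{encrypt,decrypt}().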
> from decrypted (shared) to encrypted (private) and vice versa.
> When SEV-SNP is active, the page state transition needs to go through
> additional steps.
... "done by the guest."
I think it is important to state here who's supposed to do those
additional steps.
...
> @@ -659,6 +659,161 @@ void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op
> WARN(1, "invalid memory op %d\n", op);
> }
>
> +static int vmgexit_psc(struct snp_psc_desc *desc)
> +{
> + int cur_entry, end_entry, ret = 0;
> + struct snp_psc_desc *data;
> + struct ghcb_state state;
> + unsigned long flags;
> + struct ghcb *ghcb;
> +
> + /* __sev_get_ghcb() need to run with IRQs disabled because it using per-cpu GHCB */
"... because it uses a per-CPU GHCB."
> + local_irq_save(flags);
> +
> + ghcb = __sev_get_ghcb(&state);
> + if (unlikely(!ghcb))
> + panic("SEV-SNP: Failed to get GHCB\n");
__sev_get_ghcb() will already panic if even the backup GHCB is active,
so you don't need to panic here too - just check the retval.
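Something along these lines, perhaps (untested sketch; the
out_restore_irq label is made up here and would need to only restore
IRQs, without putting a GHCB that was never acquired):

	ghcb = __sev_get_ghcb(&state);

	/*
	 * __sev_get_ghcb() panics itself when neither the regular nor the
	 * backup GHCB can be handed out, so only the error return needs
	 * handling here.
	 */
	if (unlikely(!ghcb)) {
		ret = 1;
		goto out_restore_irq;
	}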
> + /* Copy the input desc into GHCB shared buffer */
> + data = (struct snp_psc_desc *)ghcb->shared_buffer;
> + memcpy(ghcb->shared_buffer, desc, min_t(int, GHCB_SHARED_BUF_SIZE, sizeof(*desc)));
> +
> + /*
> + * As per the GHCB specification, the hypervisor can resume the guest
> + * before processing all the entries. Check whether all the entries
> + * are processed. If not, then keep retrying.
> + *
> + * The stragtegy here is to wait for the hypervisor to change the page
+ * The stragtegy here is to wait for the hypervisor to change the page
Unknown word [stragtegy] in comment, suggestions:
['strategy', 'strategist']
> + * state in the RMP table before guest accesses the memory pages. If the
> + * page state change was not successful, then later memory access will result
> + * in a crash.
> + */
> + cur_entry = data->hdr.cur_entry;
> + end_entry = data->hdr.end_entry;
> +
> + while (data->hdr.cur_entry <= data->hdr.end_entry) {
> + ghcb_set_sw_scratch(ghcb, (u64)__pa(data));
> +
Add a comment here:
/* This will advance the shared buffer data points to. */
I had asked about it already but nada:
"So then you *absoulutely* want to use data->hdr everywhere and then also
write why in the comment above the check that data gets updated by the
HV call."
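To spell it out, something along these lines perhaps (the comment
wording is only a suggestion):

	while (data->hdr.cur_entry <= data->hdr.end_entry) {
		ghcb_set_sw_scratch(ghcb, (u64)__pa(data));

		/*
		 * The hypervisor writes its progress back into the shared
		 * buffer: data->hdr.cur_entry advances across this call,
		 * which is why the loop condition above (re)reads data->hdr
		 * instead of a local copy.
		 */
		ret = sev_es_ghcb_hv_call(ghcb, true, NULL, SVM_VMGEXIT_PSC, 0, 0);
		...
	}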
> + ret = sev_es_ghcb_hv_call(ghcb, true, NULL, SVM_VMGEXIT_PSC, 0, 0);
> +
> + /*
> + * Page State Change VMGEXIT can pass error code through
> + * exit_info_2.
> + */
> + if (WARN(ret || ghcb->save.sw_exit_info_2,
> + "SEV-SNP: PSC failed ret=%d exit_info_2=%llx\n",
> + ret, ghcb->save.sw_exit_info_2)) {
> + ret = 1;
> + goto out;
> + }
> +
> + /* Verify that reserved bit is not set */
> + if (WARN(data->hdr.reserved, "Reserved bit is set in the PSC header\n")) {
> + ret = 1;
> + goto out;
> + }
> +
> + /*
> + * Sanity check that entry processing is not going backward.
"... backwards."
> + * This will happen only if hypervisor is tricking us.
> + */
> + if (WARN(data->hdr.end_entry > end_entry || cur_entry > data->hdr.cur_entry,
> +"SEV-SNP: PSC processing going backward, end_entry %d (got %d) cur_entry %d (got %d)\n",
> + end_entry, data->hdr.end_entry, cur_entry, data->hdr.cur_entry)) {
> + ret = 1;
> + goto out;
> + }
> + }
> +
> +out:
> + __sev_put_ghcb(&state);
> + local_irq_restore(flags);
> +
> + return ret;
> +}
> +
> +static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
> + unsigned long vaddr_end, int op)
> +{
> + struct psc_hdr *hdr;
> + struct psc_entry *e;
> + unsigned long pfn;
> + int i;
> +
> + hdr = &data->hdr;
> + e = data->entries;
> +
> + memset(data, 0, sizeof(*data));
> + i = 0;
> +
> + while (vaddr < vaddr_end) {
> + if (is_vmalloc_addr((void *)vaddr))
> + pfn = vmalloc_to_pfn((void *)vaddr);
> + else
> + pfn = __pa(vaddr) >> PAGE_SHIFT;
> +
> + e->gfn = pfn;
> + e->operation = op;
> + hdr->end_entry = i;
/*
* Current SNP implementation doesn't keep track of the page size so use
* 4K for simplicity.
*/
> + e->pagesize = RMP_PG_SIZE_4K;
> +
> + vaddr = vaddr + PAGE_SIZE;
> + e++;
> + i++;
> + }
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette