Message-ID: <20221219150026.bltiyk72pmdc2ic3@amd.com>
Date: Mon, 19 Dec 2022 09:00:26 -0600
From: Michael Roth <michael.roth@....com>
To: Borislav Petkov <bp@...en8.de>
CC: Ashish Kalra <Ashish.Kalra@....com>, <x86@...nel.org>,
<linux-kernel@...r.kernel.org>, <kvm@...r.kernel.org>,
<linux-coco@...ts.linux.dev>, <linux-mm@...ck.org>,
<linux-crypto@...r.kernel.org>, <tglx@...utronix.de>,
<mingo@...hat.com>, <jroedel@...e.de>, <thomas.lendacky@....com>,
<hpa@...or.com>, <ardb@...nel.org>, <pbonzini@...hat.com>,
<seanjc@...gle.com>, <vkuznets@...hat.com>, <jmattson@...gle.com>,
<luto@...nel.org>, <dave.hansen@...ux.intel.com>, <slp@...hat.com>,
<pgonda@...gle.com>, <peterz@...radead.org>,
<srinivas.pandruvada@...ux.intel.com>, <rientjes@...gle.com>,
<dovmurik@...ux.ibm.com>, <tobin@....com>, <vbabka@...e.cz>,
<kirill@...temov.name>, <ak@...ux.intel.com>,
<tony.luck@...el.com>, <marcorr@...gle.com>,
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
<alpergun@...gle.com>, <dgilbert@...hat.com>, <jarkko@...nel.org>
Subject: Re: [PATCH Part2 v6 07/49] x86/sev: Invalid pages from direct map
when adding it to RMP table
On Wed, Jul 27, 2022 at 07:01:34PM +0200, Borislav Petkov wrote:
> On Mon, Jun 20, 2022 at 11:03:07PM +0000, Ashish Kalra wrote:
>
> > Subject: x86/sev: Invalid pages from direct map when adding it to RMP table
>
> "...: Invalidate pages from the direct map when adding them to the RMP table"
>
> > +static int restore_direct_map(u64 pfn, int npages)
> > +{
> > + int i, ret = 0;
> > +
> > + for (i = 0; i < npages; i++) {
> > + ret = set_direct_map_default_noflush(pfn_to_page(pfn + i));
>
> set_memory_p() ?
We implemented this approach for v7, but it causes a fairly significant
performance regression, particularly for the npages > 1 case that this
change was meant to optimize.
I still need to dig in a bit more, but I'm guessing it's related to
flushing behavior.
It would however be nice to have a set_direct_map_default_noflush()
variant that accepted a 'npages' argument, since it would be more
performant here and also would potentially allow for restoring the 2M
direct mapping in some cases. Will look into this more for v8.
-Mike
>
> > + if (ret)
> > + goto cleanup;
> > + }
> > +
> > +cleanup:
> > + WARN(ret > 0, "Failed to restore direct map for pfn 0x%llx\n", pfn + i);
>
> Warn for each pfn?!
>
> That'll flood dmesg mightily.
>
> > + return ret;
> > +}
> > +
> > +static int invalid_direct_map(unsigned long pfn, int npages)
> > +{
> > + int i, ret = 0;
> > +
> > + for (i = 0; i < npages; i++) {
> > + ret = set_direct_map_invalid_noflush(pfn_to_page(pfn + i));
>
> As above, set_memory_np() doesn't work here instead of looping over each
> page?
>
> > @@ -2462,11 +2494,38 @@ static int rmpupdate(u64 pfn, struct rmpupdate *val)
> > if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
> > return -ENXIO;
> >
> > + level = RMP_TO_X86_PG_LEVEL(val->pagesize);
> > + npages = page_level_size(level) / PAGE_SIZE;
> > +
> > + /*
> > + * If page is getting assigned in the RMP table then unmap it from the
> > + * direct map.
> > + */
> > + if (val->assigned) {
> > + if (invalid_direct_map(pfn, npages)) {
> > + pr_err("Failed to unmap pfn 0x%llx pages %d from direct_map\n",
>
> "Failed to unmap %d pages at pfn 0x... from the direct map\n"
>
> > + pfn, npages);
> > + return -EFAULT;
> > + }
> > + }
> > +
> > /* Binutils version 2.36 supports the RMPUPDATE mnemonic. */
> > asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFE"
> > : "=a"(ret)
> > : "a"(paddr), "c"((unsigned long)val)
> > : "memory", "cc");
> > +
> > + /*
> > + * Restore the direct map after the page is removed from the RMP table.
> > + */
> > + if (!ret && !val->assigned) {
> > + if (restore_direct_map(pfn, npages)) {
> > + pr_err("Failed to map pfn 0x%llx pages %d in direct_map\n",
>
> "Failed to map %d pages at pfn 0x... into the direct map\n"
>
> Thx.
>
> --
> Regards/Gruss,
> Boris.
>
> https://people.kernel.org/tglx/notes-about-netiquette