Message-ID: <d486a008-8340-66b0-9667-11c8a50974e4@amd.com>
Date: Thu, 15 Jul 2021 12:05:54 -0500
From: Brijesh Singh <brijesh.singh@....com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: brijesh.singh@....com, x86@...nel.org,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
linux-efi@...r.kernel.org, platform-driver-x86@...r.kernel.org,
linux-coco@...ts.linux.dev, linux-mm@...ck.org,
linux-crypto@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Joerg Roedel <jroedel@...e.de>,
Tom Lendacky <thomas.lendacky@....com>,
"H. Peter Anvin" <hpa@...or.com>, Ard Biesheuvel <ardb@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Sergio Lopez <slp@...hat.com>, Peter Gonda <pgonda@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
David Rientjes <rientjes@...gle.com>,
Dov Murik <dovmurik@...ux.ibm.com>,
Tobin Feldman-Fitzthum <tobin@....com>,
Borislav Petkov <bp@...en8.de>,
Michael Roth <michael.roth@....com>,
Vlastimil Babka <vbabka@...e.cz>, tony.luck@...el.com,
npmccallum@...hat.com, brijesh.ksingh@...il.com
Subject: Re: [PATCH Part2 RFC v4 07/40] x86/sev: Split the physmap when adding
the page in RMP table
On 7/14/21 5:25 PM, Sean Christopherson wrote:
>> A write from the hypervisor goes through the RMP checks. When the
>> hypervisor writes to a page, hardware checks that the assigned bit in
>> the RMP entry is zero (i.e. the page is shared). If the page table entry
>> that gives the sPA indicates that the target page size is a large page,
>> then the RMP entries for all of the 4KB pages making up the target must
>> have the assigned bit clear. If any one of those entries has the assigned
>> bit set, hardware raises an RMP violation. To resolve it, split the page
>> table entry leading to the target page into 4K.
>
> Isn't the above just saying:
>
> All RMP entries covered by a large page must match the shared vs. encrypted
> state of the page, e.g. host large pages must have assigned=0 for all relevant
> RMP entries.
>
Yes.
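As a toy model (illustrative only, not kernel code; the 'assigned' array
below is a hypothetical stand-in for the per-4K RMP entries):

#include <stdbool.h>
#include <stddef.h>

#define PAGES_PER_2M	512	/* 4KB pages backing one 2MB mapping */

/*
 * Models the hardware check described above: a hypervisor write through
 * a 2MB mapping succeeds only if every constituent 4KB page is shared
 * (assigned == 0) in the RMP; otherwise hardware raises an RMP violation.
 */
static bool host_2m_write_allowed(const bool assigned[PAGES_PER_2M])
{
	size_t i;

	for (i = 0; i < PAGES_PER_2M; i++)
		if (assigned[i])
			return false;	/* would be an RMP violation */

	return true;
}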
>> @@ -2375,6 +2375,12 @@ int rmpupdate(struct page *page, struct rmpupdate *val)
>> if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
>> return -ENXIO;
>>
>> + ret = set_memory_4k((unsigned long)page_to_virt(page), 1);
>
> IIUC, this shatters the direct map for any page that's assigned to an SNP guest,
> and the large pages are never recovered?
>
> I believe a better approach would be to do something similar to memfd_secret[*],
> which encountered a similar problem with the direct map. Instead of forcing the
> direct map to be forever 4k, unmap the direct map when making a page guest private,
> and restore the direct map when it's made shared (or freed).
>
> I thought memfd_secret had also solved the problem of restoring large pages in
> the direct map, but at a glance I can't tell if that's actually implemented
> anywhere. But, even if it's not currently implemented, I think it makes sense
> to mimic the memfd_secret approach so that both features can benefit if large
> page preservation/restoration is ever added.
>
Thanks for the memfd_secret pointer. At the lowest level it shares the
same logic to split the physmap: we both end up calling
change_page_attr_set_clr(), which splits the page and updates the page
table attributes.

Given this, I believe that if change_page_attr_set_clr() is enhanced in
the future to track the splitting of the pages and restore them later,
then it should work transparently.
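For reference, the memfd_secret-style flow you describe would look roughly
like the below. This is only a sketch with hypothetical function names; the
set_direct_map_*() helpers are what secretmem uses, and their exact
signatures may vary between kernel versions:

#include <linux/mm.h>
#include <linux/set_memory.h>

/* Hypothetical helper: called when a page is assigned to the guest. */
static int snp_make_page_private(struct page *page)
{
	/*
	 * Drop the page from the kernel direct map instead of permanently
	 * splitting the 2MB/1GB mapping down to 4K.
	 */
	return set_direct_map_invalid_noflush(page);
}

/* Hypothetical helper: called when the page is made shared or freed. */
static int snp_make_page_shared(struct page *page)
{
	/* Restore the default direct map entry for the page. */
	return set_direct_map_default_noflush(page);
}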
thanks