Message-ID: <b277f0e9-f2ba-011d-3078-fa4a1222435d@amd.com>
Date: Tue, 23 Aug 2022 09:28:01 -0500
From: Tom Lendacky <thomas.lendacky@....com>
To: Dionna Amalie Glaze <dionnaglaze@...gle.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
the arch/x86 maintainers <x86@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"Kirill A. Shutemov" <kirill@...temov.name>,
"H. Peter Anvin" <hpa@...or.com>,
Michael Roth <michael.roth@....com>,
Joerg Roedel <jroedel@...e.de>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH v1 2/2] x86/sev: Add SNP-specific unaccepted memory
support
On 8/22/22 19:24, Dionna Amalie Glaze wrote:
>>
>> +void snp_accept_memory(phys_addr_t start, phys_addr_t end)
>> +{
>> +	unsigned long vaddr;
>> +	unsigned int npages;
>> +
>> +	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
>> +		return;
>> +
>> +	vaddr = (unsigned long)__va(start);
>> +	npages = (end - start) >> PAGE_SHIFT;
>> +
>> +	set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
>> +
>> +	pvalidate_pages(vaddr, npages, true);
>> +}
>
> My testing of this patch shows that a significant amount of time is
> spent using the MSR protocol to change page state, so much so that it's
> slower than eagerly accepting all memory. The difference gets worse as
> the RAM size goes up, so I think there's some phase problem with the
> GHCB protocol not getting used early enough?
Thank you for testing. Let me see what I can find. I might have to rework
Brijesh's original patches more to make use of the early boot GHCB in
order to cut down on the number of MSR protocol requests.
Thanks,
Tom
>