Message-ID: <0238d607-3fd7-4deb-92ac-c01aca2090fa@amd.com>
Date: Thu, 24 Apr 2025 14:27:44 -0500
From: "Kalra, Ashish" <ashish.kalra@....com>
To: Tom Lendacky <thomas.lendacky@....com>, tglx@...utronix.de,
mingo@...hat.com, dave.hansen@...ux.intel.com, x86@...nel.org, bp@...en8.de,
hpa@...or.com
Cc: michael.roth@....com, nikunj@....com, seanjc@...gle.com, ardb@...nel.org,
stable@...r.kernel.org, linux-kernel@...r.kernel.org,
kexec@...ts.infradead.org, linux-coco@...ts.linux.dev
Subject: Re: [PATCH] x86/sev: Fix making shared pages private during kdump
Hello Tom,
On 4/24/2025 10:29 AM, Tom Lendacky wrote:
> On 4/24/25 09:27, Ashish Kalra wrote:
>> From: Ashish Kalra <ashish.kalra@....com>
>>
>> When shared pages are being made private during kdump preparation,
>> there are additional checks to handle shared GHCB pages.
>>
>> These additional checks include handling the case of a GHCB page
>> being contained within a 2MB page.
>>
>> There is a bug in this additional check for a GHCB page contained
>> within a 2MB page: any shared page just below the per-cpu GHCB is
>> skipped and never transitioned back to private before kdump
>> preparation. When that still-shared page is later accessed while
>> dumping guest memory during vmcore generation via kdump, it
>> triggers a 0x404 (page not validated) #VC exception.
>>
>> Correct the detection and handling of GHCB pages contained within
>> a 2MB page.
>>
>> Cc: stable@...r.kernel.org
>> Fixes: 3074152e56c9 ("x86/sev: Convert shared memory back to private on kexec")
>> Signed-off-by: Ashish Kalra <ashish.kalra@....com>
>> ---
>> arch/x86/coco/sev/core.c | 11 ++++++++++-
>> 1 file changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
>> index 2c27d4b3985c..16d874f4dcd3 100644
>> --- a/arch/x86/coco/sev/core.c
>> +++ b/arch/x86/coco/sev/core.c
>> @@ -926,7 +926,13 @@ static void unshare_all_memory(void)
>> data = per_cpu(runtime_data, cpu);
>> ghcb = (unsigned long)&data->ghcb_page;
>>
>> - if (addr <= ghcb && ghcb <= addr + size) {
>> + /* Handle the case of 2MB page containing the GHCB page */
>
> s/2MB page/a huge page/
>
>> + if (level == PG_LEVEL_4K && addr == ghcb) {
>> + skipped_addr = true;
>> + break;
>> + }
>> + if (level > PG_LEVEL_4K && addr <= ghcb &&
>> + ghcb < addr + size) {
>> skipped_addr = true;
>> break;
>> }
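
To make the off-by-one concrete: a minimal userspace sketch (not kernel
code; the addresses here are made up for illustration) of why the old
inclusive upper bound skipped the 4K shared page sitting just below the
per-cpu GHCB:

#include <stdio.h>

#define PAGE_SIZE 0x1000UL

int main(void)
{
	unsigned long ghcb = 0x201000UL;        /* per-CPU GHCB page */
	unsigned long addr = ghcb - PAGE_SIZE;  /* shared page just below it */
	unsigned long size = PAGE_SIZE;         /* 4K mapping */

	/*
	 * Old check: the inclusive upper bound also matches when
	 * ghcb == addr + size, i.e. for the page *below* the GHCB,
	 * so that page stays shared and later faults with a #VC.
	 */
	if (addr <= ghcb && ghcb <= addr + size)
		printf("old check: 0x%lx wrongly skipped\n", addr);

	/*
	 * New check: a 4K mapping is skipped only when it *is* the
	 * GHCB page; a larger mapping only when it strictly contains
	 * the GHCB (exclusive upper bound).
	 */
	if ((size == PAGE_SIZE && addr == ghcb) ||
	    (size > PAGE_SIZE && addr <= ghcb && ghcb < addr + size))
		printf("new check: 0x%lx skipped\n", addr);
	else
		printf("new check: 0x%lx converted back to private\n", addr);

	return 0;
}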
>> @@ -1106,6 +1112,9 @@ void snp_kexec_finish(void)
>> ghcb = &data->ghcb_page;
>> pte = lookup_address((unsigned long)ghcb, &level);
>> size = page_level_size(level);
>> + /* Handle the case of 2MB page containing the GHCB page */
>> + if (level > PG_LEVEL_4K)
>> + ghcb = (struct ghcb *)((unsigned long)ghcb & PMD_MASK);
>
> For safety, shouldn't the mask be based on the level/size that is returned?
>
Yes, that makes sense and I will fix it accordingly.
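
Something along these lines, as a rough sketch (assuming
page_level_mask() is the right helper for masking with the level
returned by lookup_address()):

	pte = lookup_address((unsigned long)ghcb, &level);
	size = page_level_size(level);
	/* Handle the case of a huge page containing the GHCB page */
	if (level > PG_LEVEL_4K)
		ghcb = (struct ghcb *)((unsigned long)ghcb &
				       page_level_mask(level));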
Thanks,
Ashish
> Thanks,
> Tom
>
>> set_pte_enc(pte, level, (void *)ghcb);
>> snp_set_memory_private((unsigned long)ghcb, (size / PAGE_SIZE));
>> }