Message-ID: <4311dbc7-efb5-ab6e-046c-87e833119236@amd.com>
Date: Thu, 24 Apr 2025 10:29:26 -0500
From: Tom Lendacky <thomas.lendacky@....com>
To: Ashish Kalra <Ashish.Kalra@....com>, tglx@...utronix.de,
mingo@...hat.com, dave.hansen@...ux.intel.com, x86@...nel.org, bp@...en8.de,
hpa@...or.com
Cc: michael.roth@....com, nikunj@....com, seanjc@...gle.com, ardb@...nel.org,
stable@...r.kernel.org, linux-kernel@...r.kernel.org,
kexec@...ts.infradead.org, linux-coco@...ts.linux.dev
Subject: Re: [PATCH] x86/sev: Fix making shared pages private during kdump
On 4/24/25 09:27, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@....com>
>
> When shared pages are being made private during kdump preparation,
> there are additional checks to handle shared GHCB pages.
>
> These additional checks include handling the case of a GHCB page
> being contained within a 2MB page.
>
> There is a bug in this additional check for a GHCB page contained
> within a 2MB page: it causes any shared page just below the per-cpu
> GHCB to be skipped when transitioning shared pages back to private
> before kdump preparation. This subsequently causes a 0x404 #VC
> exception when that shared page is accessed later while dumping guest
> memory during vmcore generation via kdump.
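> 
> For example, for a 4K mapping immediately below the GHCB page
> (addr == ghcb - PAGE_SIZE, size == PAGE_SIZE), the old check
> 
>     addr <= ghcb && ghcb <= addr + size
> 
> is true because addr + size == ghcb, so that unrelated shared page
> is wrongly skipped.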
>
> Correct the detection and handling of GHCB pages contained within
> a 2MB page.
>
> Cc: stable@...r.kernel.org
> Fixes: 3074152e56c9 ("x86/sev: Convert shared memory back to private on kexec")
> Signed-off-by: Ashish Kalra <ashish.kalra@....com>
> ---
> arch/x86/coco/sev/core.c | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
> index 2c27d4b3985c..16d874f4dcd3 100644
> --- a/arch/x86/coco/sev/core.c
> +++ b/arch/x86/coco/sev/core.c
> @@ -926,7 +926,13 @@ static void unshare_all_memory(void)
>  			data = per_cpu(runtime_data, cpu);
>  			ghcb = (unsigned long)&data->ghcb_page;
> 
> -			if (addr <= ghcb && ghcb <= addr + size) {
> +			/* Handle the case of 2MB page containing the GHCB page */
s/2MB page/a huge page/
> +			if (level == PG_LEVEL_4K && addr == ghcb) {
> +				skipped_addr = true;
> +				break;
> +			}
> +			if (level > PG_LEVEL_4K && addr <= ghcb &&
> +			    ghcb < addr + size) {
>  				skipped_addr = true;
>  				break;
>  			}
> @@ -1106,6 +1112,9 @@ void snp_kexec_finish(void)
>  		ghcb = &data->ghcb_page;
>  		pte = lookup_address((unsigned long)ghcb, &level);
>  		size = page_level_size(level);
> +		/* Handle the case of 2MB page containing the GHCB page */
> +		if (level > PG_LEVEL_4K)
> +			ghcb = (struct ghcb *)((unsigned long)ghcb & PMD_MASK);
For safety, shouldn't the mask be based on the level/size that is returned?
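
Maybe something like this (untested), using page_level_mask() so the
mask matches whatever level lookup_address() returned:

	ghcb = (struct ghcb *)((unsigned long)ghcb & page_level_mask(level));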
Thanks,
Tom
>  		set_pte_enc(pte, level, (void *)ghcb);
>  		snp_set_memory_private((unsigned long)ghcb, (size / PAGE_SIZE));
>  	}