Message-ID: <174715966675.406.16246033000717371214.tip-bot2@tip-bot2>
Date: Tue, 13 May 2025 18:07:46 -0000
From: "tip-bot2 for Ashish Kalra" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Ashish Kalra <ashish.kalra@....com>,
"Borislav Petkov (AMD)" <bp@...en8.de>,
Tom Lendacky <thomas.lendacky@....com>, Srikanth Aithal <sraithal@....com>,
stable@...r.kernel.org, x86@...nel.org, linux-kernel@...r.kernel.org
Subject: [tip: x86/urgent] x86/sev: Make sure pages are not skipped during kdump

The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: 82b7f88f2316c5442708daeb0b5ec5aa54c8ff7f
Gitweb: https://git.kernel.org/tip/82b7f88f2316c5442708daeb0b5ec5aa54c8ff7f
Author: Ashish Kalra <ashish.kalra@....com>
AuthorDate: Tue, 06 May 2025 18:35:29
Committer: Borislav Petkov (AMD) <bp@...en8.de>
CommitterDate: Tue, 13 May 2025 19:47:48 +02:00
x86/sev: Make sure pages are not skipped during kdump
When shared pages are being converted to private during kdump, additional
checks are performed. They include handling the case of a GHCB page being
contained within a huge page.
Currently, this check incorrectly causes the page just below the GHCB page to be
skipped, so it is not transitioned back to private during kdump preparation.
This skipped page causes a 0x404 #VC exception when it is accessed later while
dumping guest memory for vmcore generation.
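For illustration only (not part of the patch), a minimal userspace C sketch of the
containment check: the mapping covers the half-open range [addr, addr + size), so a
'<=' upper bound also matches a GHCB that starts immediately after the mapping,
wrongly skipping that mapping. The helper names and the 2M values here are made up
for the example.

#include <stdbool.h>
#include <stdio.h>

/* Buggy variant: '<=' also matches a GHCB that starts right after the range. */
static bool contains_buggy(unsigned long addr, unsigned long size, unsigned long ghcb)
{
	return addr <= ghcb && ghcb <= addr + size;
}

/* Fixed variant: half-open interval [addr, addr + size). */
static bool contains_fixed(unsigned long addr, unsigned long size, unsigned long ghcb)
{
	return addr <= ghcb && ghcb < addr + size;
}

int main(void)
{
	unsigned long addr = 0x200000;    /* hypothetical 2M mapping being unshared */
	unsigned long size = 0x200000;    /* analogue of page_level_size(level)     */
	unsigned long ghcb = addr + size; /* GHCB page starts just above the range  */

	/* The buggy check reports containment, so this mapping would be skipped. */
	printf("buggy: %d, fixed: %d\n",
	       contains_buggy(addr, size, ghcb),
	       contains_fixed(addr, size, ghcb));
	return 0;
}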
Correct the range check for a GHCB contained in a huge page. Also, ensure that the
previously skipped huge page containing the GHCB page is transitioned back to
private by applying the correct address mask later, when changing GHCBs to private
at the end of kdump preparation.
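Again for illustration only, and with assumed 4K/2M sizes standing in for
page_level_size()/page_level_mask(): masking the GHCB address down to the base of
its containing mapping means the whole huge page, not the unaligned GHCB address,
is passed when converting the mapping back to private.

#include <stdio.h>

#define PAGE_SIZE  0x1000UL     /* 4K base page                  */
#define HPAGE_SIZE 0x200000UL   /* assumed 2M huge-page level    */
#define HPAGE_MASK (~(HPAGE_SIZE - 1))

int main(void)
{
	unsigned long ghcb = 0x40123000UL;      /* hypothetical GHCB inside a 2M mapping */
	unsigned long size = HPAGE_SIZE;        /* stands in for page_level_size(level)  */
	unsigned long addr = ghcb & HPAGE_MASK; /* stands in for page_level_mask(level)  */

	/* The full huge page, starting at its base, is made private again. */
	printf("base: %#lx, pages: %lu\n", addr, size / PAGE_SIZE);
	return 0;
}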
[ bp: Massage commit message. ]
Fixes: 3074152e56c9 ("x86/sev: Convert shared memory back to private on kexec")
Signed-off-by: Ashish Kalra <ashish.kalra@....com>
Signed-off-by: Borislav Petkov (AMD) <bp@...en8.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@....com>
Tested-by: Srikanth Aithal <sraithal@....com>
Cc: stable@...r.kernel.org
Link: https://lore.kernel.org/20250506183529.289549-1-Ashish.Kalra@amd.com
---
arch/x86/coco/sev/core.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
index 41060ba..36beaac 100644
--- a/arch/x86/coco/sev/core.c
+++ b/arch/x86/coco/sev/core.c
@@ -1101,7 +1101,8 @@ static void unshare_all_memory(void)
data = per_cpu(runtime_data, cpu);
ghcb = (unsigned long)&data->ghcb_page;
- if (addr <= ghcb && ghcb <= addr + size) {
+ /* Handle the case of a huge page containing the GHCB page */
+ if (addr <= ghcb && ghcb < addr + size) {
skipped_addr = true;
break;
}
@@ -1213,8 +1214,8 @@ static void shutdown_all_aps(void)
void snp_kexec_finish(void)
{
struct sev_es_runtime_data *data;
+ unsigned long size, addr;
unsigned int level, cpu;
- unsigned long size;
struct ghcb *ghcb;
pte_t *pte;
@@ -1242,8 +1243,10 @@ void snp_kexec_finish(void)
ghcb = &data->ghcb_page;
pte = lookup_address((unsigned long)ghcb, &level);
size = page_level_size(level);
- set_pte_enc(pte, level, (void *)ghcb);
- snp_set_memory_private((unsigned long)ghcb, (size / PAGE_SIZE));
+ /* Handle the case of a huge page containing the GHCB page */
+ addr = (unsigned long)ghcb & page_level_mask(level);
+ set_pte_enc(pte, level, (void *)addr);
+ snp_set_memory_private(addr, (size / PAGE_SIZE));
}
}