Message-ID: <70EB1774-A782-47FB-A8EA-534E66A551F6@zytor.com>
Date: Thu, 27 Apr 2023 11:40:02 -0700
From: "H. Peter Anvin" <hpa@...or.com>
To: Anthony Yznaga <anthony.yznaga@...cle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
CC: tglx@...utronix.de, mingo@...hat.com, bp@...en8.de, x86@...nel.org,
dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
rppt@...nel.org, akpm@...ux-foundation.org, ebiederm@...ssion.com,
keescook@...omium.org, graf@...zon.com, jason.zeng@...el.com,
lei.l.li@...el.com, steven.sistare@...cle.com,
fam.zheng@...edance.com, mgalaxy@...mai.com,
kexec@...ts.infradead.org
Subject: Re: [RFC v3 21/21] x86/boot/compressed/64: use 1GB pages for mappings
On April 26, 2023 5:08:57 PM PDT, Anthony Yznaga <anthony.yznaga@...cle.com> wrote:
>The pkram kaslr code, called via mem_avoid_overlap(), can incur
>multiple page faults as it walks its preserved ranges list. These
>faults can easily exhaust the small pool of pages available for
>allocating page table pages.
>
>This patch hacks things so that mappings are 1GB, which greatly reduces
>the number of page table pages needed. As is, this breaks AMD SEV-ES,
>which expects the mappings to be 2M. That could possibly be fixed by
>updating the split code to split 1GB pages, if there aren't any other
>issues with using 1GB mappings.
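>
>To put rough numbers on the savings (assuming 4-level paging): with 2M
>mappings, every 1GB of address space touched costs a 512-entry PMD
>table page, plus one PUD table page per 512GB; with 1GB mappings the
>walk terminates at the PUD level, so e.g. faulting in ranges scattered
>across 64GB takes ~1 table page instead of ~65.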
>
>Signed-off-by: Anthony Yznaga <anthony.yznaga@...cle.com>
>---
> arch/x86/boot/compressed/ident_map_64.c | 11 ++++++-----
> 1 file changed, 6 insertions(+), 5 deletions(-)
>
>diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
>index 321a5011042d..1e02cf6dda3c 100644
>--- a/arch/x86/boot/compressed/ident_map_64.c
>+++ b/arch/x86/boot/compressed/ident_map_64.c
>@@ -95,8 +95,8 @@ void kernel_add_identity_map(unsigned long start, unsigned long end)
> int ret;
>
>-	/* Align boundary to 2M. */
>-	start = round_down(start, PMD_SIZE);
>-	end = round_up(end, PMD_SIZE);
>+	/* Align boundary to 1G. */
>+	start = round_down(start, PUD_SIZE);
>+	end = round_up(end, PUD_SIZE);
> if (start >= end)
> return;
>
>@@ -120,6 +120,7 @@ void initialize_identity_maps(void *rmode)
> mapping_info.context = &pgt_data;
> mapping_info.page_flag = __PAGE_KERNEL_LARGE_EXEC | sme_me_mask;
> mapping_info.kernpg_flag = _KERNPG_TABLE;
>+ mapping_info.direct_gbpages = true;
>
> /*
> * It should be impossible for this not to already be true,
>@@ -365,8 +366,8 @@ void do_boot_page_fault(struct pt_regs *regs, unsigned long error_code)
>
> ghcb_fault = sev_es_check_ghcb_fault(address);
>
>- address &= PMD_MASK;
>- end = address + PMD_SIZE;
>+ address &= PUD_MASK;
>+ end = address + PUD_SIZE;
>
> /*
> * Check for unexpected error codes. Unexpected are:
Strong NAK: 1G pages are not supported by all 64-bit CPUs, *and* by your own admission this breaks things ...
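
For reference, 1G page support is enumerated separately from long mode:
CPUID leaf 0x80000001, EDX bit 26 (pdpe1gb); the kernel tracks it as
X86_FEATURE_GBPAGES. A minimal, illustrative check (a userspace-style
sketch using the compiler's <cpuid.h>; the boot code would instead gate
direct_gbpages on the corresponding feature flag):

	#include <stdbool.h>
	#include <cpuid.h>

	/* True if the CPU advertises 1GB pages (pdpe1gb). */
	static bool cpu_has_gbpages(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* Extended leaf 0x80000001: EDX bit 26 = Page1GB. */
		if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
			return false;	/* extended leaves not implemented */

		return !!(edx & (1u << 26));
	}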