Message-Id: <20231031195049.2075561-1-steve.wahl@hpe.com>
Date: Tue, 31 Oct 2023 14:50:49 -0500
From: Steve Wahl <steve.wahl@....com>
To: Steve Wahl <steve.wahl@....com>, rja_direct@...ups.int.hpe.com,
Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>,
linux-kernel@...r.kernel.org
Subject: [PATCH] x86/mm/ident_map: Use gbpages only where full GB page should be mapped.
Instead of using gbpages for all memory regions, use them only when
the map creation request covers the full GB page of space; fall back
to smaller 2M pages when only a portion of a GB page is requested.
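
To illustrate the new condition (a sketch only, not part of the patch;
the helper name is hypothetical, while addr, next and PUD_MASK follow
the kernel code), the decision reduces to an alignment test on the
requested range:

    /*
     * Sketch: a gbpage is used for a PUD entry only when the requested
     * range [addr, next) covers the whole 1G region, i.e. both ends
     * are 1G aligned.
     */
    static bool range_spans_full_gbpage(unsigned long addr, unsigned long next)
    {
            return !(addr & ~PUD_MASK) && !(next & ~PUD_MASK);
    }
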
When gbpages are used exclusively to create identity maps, large
ranges of addresses not actually requested can be included in the
resulting table. On UV systems, this ends up including regions that
will cause hardware to halt the system if accessed (these are marked
"reserved" by BIOS). Even though the kernel never explicitly
references these addresses, merely including them in an active map
allows processor speculation into these regions, which is enough to
trigger the system halt.
The kernel option "nogbpages" disallows use of gbpages entirely and
avoids this problem, but at the cost of a significant amount of extra
memory for page tables that are not really needed.
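
For a rough sense of that cost (back-of-the-envelope arithmetic, not a
measurement from this patch): mapping a 1G region with 2M pages needs
one extra 4K PMD page, so identity-mapping 1T of address space with
"nogbpages" adds on the order of 4M of page-table memory compared to
gbpage mappings.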
Signed-off-by: Steve Wahl <steve.wahl@....com>
---
Please ignore the previous send with the internal message. Thanks.
arch/x86/mm/ident_map.c | 18 +++++++++++++-----
1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
index 968d7005f4a7..b63a1ffcfe9f 100644
--- a/arch/x86/mm/ident_map.c
+++ b/arch/x86/mm/ident_map.c
@@ -31,18 +31,26 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
if (next > end)
next = end;
- if (info->direct_gbpages) {
+ /*
+ * If gbpages are allowed, this entry is not yet present, and
+ * the full gbpage range is requested (both ends are
+ * correctly aligned), create a gbpage.
+ */
+ if (info->direct_gbpages
+ && !pud_present(*pud)
+ && !(addr & ~PUD_MASK)
+ && !(next & ~PUD_MASK)) {
pud_t pudval;
- if (pud_present(*pud))
- continue;
-
- addr &= PUD_MASK;
pudval = __pud((addr - info->offset) | info->page_flag);
set_pud(pud, pudval);
continue;
}
+ /* If this is already a gbpage, this portion is already mapped */
+ if (pud_large(*pud))
+ continue;
+
if (pud_present(*pud)) {
pmd = pmd_offset(pud, 0);
ident_pmd_init(info, pmd, addr, next);
--
2.26.2