Message-Id: <20250414173235.F63F50D1@davehans-spike.ostc.intel.com>
Date: Mon, 14 Apr 2025 10:32:35 -0700
From: Dave Hansen <dave.hansen@...ux.intel.com>
To: linux-kernel@...r.kernel.org
Cc: x86@...nel.org, tglx@...utronix.de, bp@...en8.de, joro@...tes.org, luto@...nel.org, peterz@...radead.org, kirill.shutemov@...ux.intel.com, rick.p.edgecombe@...el.com, jgross@...e.com, Dave Hansen <dave.hansen@...ux.intel.com>
Subject: [PATCH 2/8] x86/mm: Always "broadcast" PMD setting operations
From: Dave Hansen <dave.hansen@...ux.intel.com>
Kernel PMDs can either be shared across processes or private to a
process. On 64-bit, they are always shared. 32-bit non-PAE hardware
does not have PMDs, but the kernel logically squishes them into the
PGD and treats them as private. Here are the four cases:
	64-bit:             Shared
	32-bit, non-PAE:    Private
	32-bit, PAE+PTI:    Private
	32-bit, PAE+noPTI:  Shared
Note that 32-bit is all "Private" except for PAE+noPTI being an
oddball. The 32-bit+PAE+noPTI case will be made like the rest of
32-bit shortly.
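For reference, the PAE+noPTI oddball is what the SHARED_KERNEL_PMD macro
encodes. Roughly (assuming the current pgtable-*level_types.h layout; the
PAE definition is the one that matters for the oddball):

	/* 32-bit PAE (pgtable-3level_types.h): kernel PMDs can only be
	 * shared when PTI is not in use. */
	#define SHARED_KERNEL_PMD	(!static_cpu_has(X86_FEATURE_PTI))

	/* 32-bit non-PAE (pgtable-2level_types.h): no real PMDs, so they
	 * are treated as private. */
	#define SHARED_KERNEL_PMD	0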
But until that can be done, temporarily treat the 32-bit+PAE+noPTI
case as Private. This will do unnecessary walks across pgd_list and
unnecessary PTE setting but should be otherwise harmless.
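To make "walks across pgd_list" concrete, the broadcast that
__set_pmd_pte() does (see the first hunk below) boils down to something
like the following sketch; the helper name is made up for illustration:

	/*
	 * Illustration only: after updating init_mm, install the same
	 * entry into every pgd on pgd_list so each process's private
	 * copy of the kernel PMD stays in sync.
	 */
	static void broadcast_kernel_pmd(pte_t *kpte, unsigned long address, pte_t pte)
	{
		struct page *page;

		/* change init_mm first */
		set_pte_atomic(kpte, pte);

		/* then every other PGD carrying a private kernel PMD */
		list_for_each_entry(page, &pgd_list, lru) {
			pgd_t *pgd = (pgd_t *)page_address(page) + pgd_index(address);
			p4d_t *p4d = p4d_offset(pgd, address);
			pud_t *pud = pud_offset(p4d, address);
			pmd_t *pmd = pmd_offset(pud, address);

			set_pte_atomic((pte_t *)pmd, pte);
		}
	}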
Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
---
 b/arch/x86/mm/pat/set_memory.c |    4 ++--
 b/arch/x86/mm/pgtable.c        |   11 +++--------
 2 files changed, 5 insertions(+), 10 deletions(-)
diff -puN arch/x86/mm/pat/set_memory.c~always-sync-kernel-mapping-updates arch/x86/mm/pat/set_memory.c
--- a/arch/x86/mm/pat/set_memory.c~always-sync-kernel-mapping-updates 2025-04-09 12:00:17.126319212 -0700
+++ b/arch/x86/mm/pat/set_memory.c 2025-04-09 12:53:28.082212490 -0700
@@ -889,7 +889,7 @@ static void __set_pmd_pte(pte_t *kpte, u
/* change init_mm */
set_pte_atomic(kpte, pte);
#ifdef CONFIG_X86_32
- if (!SHARED_KERNEL_PMD) {
+ {
struct page *page;
list_for_each_entry(page, &pgd_list, lru) {
@@ -1293,7 +1293,7 @@ static int collapse_pmd_page(pmd_t *pmd,
/* Queue the page table to be freed after TLB flush */
list_add(&page_ptdesc(pmd_page(old_pmd))->pt_list, pgtables);
- if (IS_ENABLED(CONFIG_X86_32) && !SHARED_KERNEL_PMD) {
+ if (IS_ENABLED(CONFIG_X86_32)) {
struct page *page;
/* Update all PGD tables to use the same large page */
diff -puN arch/x86/mm/pgtable.c~always-sync-kernel-mapping-updates arch/x86/mm/pgtable.c
--- a/arch/x86/mm/pgtable.c~always-sync-kernel-mapping-updates 2025-04-09 12:00:17.128319285 -0700
+++ b/arch/x86/mm/pgtable.c 2025-04-09 12:53:09.217519767 -0700
@@ -97,18 +97,13 @@ static void pgd_ctor(struct mm_struct *m
KERNEL_PGD_PTRS);
}
- /* list required to sync kernel mapping updates */
- if (!SHARED_KERNEL_PMD) {
- pgd_set_mm(pgd, mm);
- pgd_list_add(pgd);
- }
+ /* List used to sync kernel mapping updates */
+ pgd_set_mm(pgd, mm);
+ pgd_list_add(pgd);
}
static void pgd_dtor(pgd_t *pgd)
{
- if (SHARED_KERNEL_PMD)
- return;
-
spin_lock(&pgd_lock);
pgd_list_del(pgd);
spin_unlock(&pgd_lock);
_