Message-Id: <20250418155614.8F925958@davehans-spike.ostc.intel.com>
Date: Fri, 18 Apr 2025 08:56:14 -0700
From: Dave Hansen <dave.hansen@...ux.intel.com>
To: linux-kernel@...r.kernel.org
Cc: x86@...nel.org,tglx@...utronix.de,bp@...en8.de,joro@...tes.org,luto@...nel.org,peterz@...radead.org,kirill.shutemov@...ux.intel.com,rick.p.edgecombe@...el.com,jgross@...e.com,Dave Hansen <dave.hansen@...ux.intel.com>
Subject: [PATCH 1/2] x86/mm: Kill a 32-bit #ifdef for shared PMD handling
From: Dave Hansen <dave.hansen@...ux.intel.com>
This block of code used to be guarded by:

	if (SHARED_KERNEL_PMD)

But that check was zapped when 32-bit kernels transitioned to
private (non-shared) PMDs, which also made it rather unclear what
the block of code is doing in the first place.

Remove the #ifdef and replace it with IS_ENABLED(). Unindent the
code block and add an actually useful comment about what it is
doing.
Suggested-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
---
b/arch/x86/mm/pat/set_memory.c | 41 +++++++++++++++++++++--------------------
1 file changed, 21 insertions(+), 20 deletions(-)
diff -puN arch/x86/mm/pat/set_memory.c~kill-CONFIG_X86_32-ifdef arch/x86/mm/pat/set_memory.c
--- a/arch/x86/mm/pat/set_memory.c~kill-CONFIG_X86_32-ifdef	2025-04-18 08:37:32.149932662 -0700
+++ b/arch/x86/mm/pat/set_memory.c	2025-04-18 08:37:32.152932772 -0700
@@ -881,31 +881,32 @@ phys_addr_t slow_virt_to_phys(void *__vi
 }
 EXPORT_SYMBOL_GPL(slow_virt_to_phys);
 
-/*
- * Set the new pmd in all the pgds we know about:
- */
 static void __set_pmd_pte(pte_t *kpte, unsigned long address, pte_t pte)
 {
+	struct page *page;
+
 	/* change init_mm */
 	set_pte_atomic(kpte, pte);
-#ifdef CONFIG_X86_32
-	{
-		struct page *page;
-
-		list_for_each_entry(page, &pgd_list, lru) {
-			pgd_t *pgd;
-			p4d_t *p4d;
-			pud_t *pud;
-			pmd_t *pmd;
-
-			pgd = (pgd_t *)page_address(page) + pgd_index(address);
-			p4d = p4d_offset(pgd, address);
-			pud = pud_offset(p4d, address);
-			pmd = pmd_offset(pud, address);
-			set_pte_atomic((pte_t *)pmd, pte);
-		}
+
+	if (IS_ENABLED(CONFIG_X86_64))
+		return;
+
+	/*
+	 * 32-bit mm_structs don't share kernel PMD pages.
+	 * Propagate the change to each relevant PMD entry:
+	 */
+	list_for_each_entry(page, &pgd_list, lru) {
+		pgd_t *pgd;
+		p4d_t *p4d;
+		pud_t *pud;
+		pmd_t *pmd;
+
+		pgd = (pgd_t *)page_address(page) + pgd_index(address);
+		p4d = p4d_offset(pgd, address);
+		pud = pud_offset(p4d, address);
+		pmd = pmd_offset(pud, address);
+		set_pte_atomic((pte_t *)pmd, pte);
 	}
-#endif
 }
 
 static pgprot_t pgprot_clear_protnone_bits(pgprot_t prot)
_