Message-ID: <174491300802.31282.1389796894467957175.tip-bot2@tip-bot2>
Date: Thu, 17 Apr 2025 18:03:28 -0000
From: "tip-bot2 for Dave Hansen" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: x86/mm] x86/mm: Always "broadcast" PMD setting operations
The following commit has been merged into the x86/mm branch of tip:
Commit-ID: b0cc4d19f198cdfd1b58c8f5536670d1dc68cbbd
Gitweb: https://git.kernel.org/tip/b0cc4d19f198cdfd1b58c8f5536670d1dc68cbbd
Author: Dave Hansen <dave.hansen@...ux.intel.com>
AuthorDate: Mon, 14 Apr 2025 10:32:35 -07:00
Committer: Dave Hansen <dave.hansen@...ux.intel.com>
CommitterDate: Thu, 17 Apr 2025 10:39:25 -07:00
x86/mm: Always "broadcast" PMD setting operations
Kernel PMDs can either be shared across processes or private to a
process. On 64-bit, they are always shared. 32-bit non-PAE hardware
does not have PMDs, but the kernel logically squishes them into the
PGD and treats them as private. Here are the four cases:

	64-bit:             Shared
	32-bit, non-PAE:    Private
	32-bit, PAE+PTI:    Private
	32-bit, PAE+noPTI:  Shared

Note that 32-bit is all "Private" except for PAE+noPTI being an
oddball. The 32-bit+PAE+noPTI case will be made like the rest of
32-bit shortly.

But until that can be done, temporarily treat the 32-bit+PAE+noPTI
case as Private. This means some unnecessary walks across pgd_list and
some unnecessary PTE setting, but it should be otherwise harmless.
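
To make "broadcast" concrete, here is a minimal user-space sketch of the
idea: when kernel PMDs are private, a kernel mapping change has to be
written into every process's page tables rather than into one shared
table. All of the names below (fake_pgd, broadcast_kernel_update(), the
slot counts) are illustrative stand-ins, not kernel APIs; the real walk
is the pgd_list loop visible in the set_memory.c hunks further down.

/*
 * Illustrative, user-space sketch only.  Every "process" has a private
 * top-level table, so a kernel mapping change must be copied into each
 * one -- the "broadcast".
 */
#include <stdio.h>

#define NR_PROCESSES  3
#define KERNEL_SLOTS  4	/* pretend each top-level table has 4 kernel slots */

struct fake_pgd {
	unsigned long kernel_slot[KERNEL_SLOTS];
};

static struct fake_pgd pgds[NR_PROCESSES];	/* one private table each */

/* Write one kernel mapping update into every process's table. */
static void broadcast_kernel_update(int slot, unsigned long entry)
{
	for (int i = 0; i < NR_PROCESSES; i++)
		pgds[i].kernel_slot[slot] = entry;
}

int main(void)
{
	broadcast_kernel_update(2, 0x1000);
	for (int i = 0; i < NR_PROCESSES; i++)
		printf("process %d, kernel slot 2 = %#lx\n", i,
		       pgds[i].kernel_slot[2]);
	return 0;
}

With shared kernel PMDs a single write would be enough; the patch simply
accepts the redundant 32-bit writes until the PAE+noPTI oddball is removed.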
Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
Link: https://lore.kernel.org/all/20250414173235.F63F50D1%40davehans-spike.ostc.intel.com
---
arch/x86/mm/pat/set_memory.c | 4 ++--
arch/x86/mm/pgtable.c | 11 +++--------
2 files changed, 5 insertions(+), 10 deletions(-)
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index def3d92..30ab4ac 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -889,7 +889,7 @@ static void __set_pmd_pte(pte_t *kpte, unsigned long address, pte_t pte)
 	/* change init_mm */
 	set_pte_atomic(kpte, pte);
 #ifdef CONFIG_X86_32
-	if (!SHARED_KERNEL_PMD) {
+	{
 		struct page *page;
 
 		list_for_each_entry(page, &pgd_list, lru) {
@@ -1293,7 +1293,7 @@ static int collapse_pmd_page(pmd_t *pmd, unsigned long addr,
 	/* Queue the page table to be freed after TLB flush */
 	list_add(&page_ptdesc(pmd_page(old_pmd))->pt_list, pgtables);
 
-	if (IS_ENABLED(CONFIG_X86_32) && !SHARED_KERNEL_PMD) {
+	if (IS_ENABLED(CONFIG_X86_32)) {
 		struct page *page;
 
 		/* Update all PGD tables to use the same large page */
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index ea01b55..f1c5886 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -97,18 +97,13 @@ static void pgd_ctor(struct mm_struct *mm, pgd_t *pgd)
 				KERNEL_PGD_PTRS);
 	}
 
-	/* list required to sync kernel mapping updates */
-	if (!SHARED_KERNEL_PMD) {
-		pgd_set_mm(pgd, mm);
-		pgd_list_add(pgd);
-	}
+	/* List used to sync kernel mapping updates */
+	pgd_set_mm(pgd, mm);
+	pgd_list_add(pgd);
 }
 
 static void pgd_dtor(pgd_t *pgd)
 {
-	if (SHARED_KERNEL_PMD)
-		return;
-
 	spin_lock(&pgd_lock);
 	pgd_list_del(pgd);
 	spin_unlock(&pgd_lock);
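
For completeness, a rough user-space analogue of the bookkeeping that
pgd_ctor()/pgd_dtor() now do unconditionally: constructors add each new
top-level table to a global list under a lock, destructors remove it,
and the broadcast walks that list. The list, lock and helper names below
are invented for illustration; the kernel equivalents are pgd_list,
pgd_lock, pgd_set_mm() and pgd_list_add()/pgd_list_del() from the hunk
above.

/* Illustrative sketch only -- not kernel code. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_pgd {
	unsigned long kernel_entry;
	struct fake_pgd *next;
};

static struct fake_pgd *table_list;	/* stands in for pgd_list */
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;	/* pgd_lock */

static struct fake_pgd *table_ctor(void)	/* cf. pgd_ctor() */
{
	struct fake_pgd *t = calloc(1, sizeof(*t));

	pthread_mutex_lock(&table_lock);
	t->next = table_list;
	table_list = t;
	pthread_mutex_unlock(&table_lock);
	return t;
}

static void table_dtor(struct fake_pgd *t)	/* cf. pgd_dtor() */
{
	pthread_mutex_lock(&table_lock);
	for (struct fake_pgd **p = &table_list; *p; p = &(*p)->next) {
		if (*p == t) {
			*p = t->next;
			break;
		}
	}
	pthread_mutex_unlock(&table_lock);
	free(t);
}

/* Consumer side: push one kernel mapping change into every table. */
static void broadcast(unsigned long entry)
{
	pthread_mutex_lock(&table_lock);
	for (struct fake_pgd *t = table_list; t; t = t->next)
		t->kernel_entry = entry;
	pthread_mutex_unlock(&table_lock);
}

int main(void)
{
	struct fake_pgd *a = table_ctor(), *b = table_ctor();

	broadcast(0x2000);
	printf("a=%#lx b=%#lx\n", a->kernel_entry, b->kernel_entry);
	table_dtor(b);
	table_dtor(a);
	return 0;
}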