Message-ID: <20221104220053.1702977-2-seanjc@google.com>
Date: Fri, 4 Nov 2022 22:00:52 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org,
Andrey Ryabinin <ryabinin.a.a@...il.com>,
Sean Christopherson <seanjc@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>
Subject: [PATCH 1/2] x86/mm: Recompute physical address for every page of
per-CPU CEA mapping

Recompute the physical address for each per-CPU page in the CPU entry
area; a recent commit inadvertently modified cea_map_percpu_pages() such
that every PTE is mapped to the physical address of the first page.

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Cc: Andrey Ryabinin <ryabinin.a.a@...il.com>
Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
 arch/x86/mm/cpu_entry_area.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index dff9001e5e12..d831aae94b41 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -97,7 +97,7 @@ cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
 					early_pfn_to_nid(PFN_DOWN(pa)));
 
 	for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
-		cea_set_pte(cea_vaddr, pa, prot);
+		cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
 }
 
 static void __init percpu_setup_debug_store(unsigned int cpu)
--
2.38.1.431.g37b22c650d-goog
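
For readers unfamiliar with the CEA mapping code, below is a small,
self-contained userspace sketch of the pattern being fixed; it is not
kernel code. fake_virt_to_phys() and fake_set_pte() are made-up stand-ins
for per_cpu_ptr_to_phys() and cea_set_pte(), used only so the buggy and
fixed loops can be compiled and compared side by side; all addresses are
arbitrary.

/*
 * Standalone sketch (not kernel code): computing the physical address
 * once before the loop maps every page to the first page's address,
 * while recomputing it per iteration maps each page to its own address.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL

/* Stand-in for per_cpu_ptr_to_phys(): pretend phys = virt - fixed offset. */
static uintptr_t fake_virt_to_phys(const char *ptr)
{
	return (uintptr_t)ptr - 0x1000000UL;
}

/* Stand-in for cea_set_pte(): report the mapping that would be installed. */
static void fake_set_pte(uintptr_t vaddr, uintptr_t pa, const char *tag)
{
	printf("%s: map vaddr 0x%lx -> phys 0x%lx\n",
	       tag, (unsigned long)vaddr, (unsigned long)pa);
}

int main(void)
{
	static char backing[3 * PAGE_SIZE];	/* stand-in per-CPU area */
	uintptr_t cea_vaddr = 0x100000UL;	/* fake CEA virtual base */
	const char *ptr = backing;
	int pages;

	/* Buggy pattern: pa computed once, so all three PTEs get page 0's pa. */
	uintptr_t pa = fake_virt_to_phys(ptr);
	for (pages = 3; pages; pages--, cea_vaddr += PAGE_SIZE, ptr += PAGE_SIZE)
		fake_set_pte(cea_vaddr, pa, "buggy");

	/* Fixed pattern: recompute the physical address for every page. */
	cea_vaddr = 0x100000UL;
	ptr = backing;
	for (pages = 3; pages; pages--, cea_vaddr += PAGE_SIZE, ptr += PAGE_SIZE)
		fake_set_pte(cea_vaddr, fake_virt_to_phys(ptr), "fixed");

	return 0;
}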