Message-ID: <20221104220053.1702977-3-seanjc@google.com>
Date:   Fri,  4 Nov 2022 22:00:53 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Dave Hansen <dave.hansen@...ux.intel.com>,
        Andy Lutomirski <luto@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>
Cc:     linux-kernel@...r.kernel.org,
        Andrey Ryabinin <ryabinin.a.a@...il.com>,
        Sean Christopherson <seanjc@...gle.com>,
        Dmitry Vyukov <dvyukov@...gle.com>
Subject: [PATCH 2/2] x86/mm: Populate KASAN shadow for per-CPU DS buffers in
 CPU entry area

Bounce through cea_map_percpu_pages() when setting the initial
protections for per-CPU DS buffers so that KASAN populates a shadow for
said mapping.  Failure to populate the shadow will result in a
not-present #PF during KASAN validation if DS buffers are activated
later on.
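
For readers unfamiliar with the mechanism, a minimal sketch of why the
unpopulated shadow faults (simplified, assuming generic KASAN on x86-64;
the kernel's actual helper is kasan_mem_to_shadow(), and the
kasan_shadow_of() name below is purely illustrative):

	/*
	 * Illustrative only -- mirrors what kasan_mem_to_shadow() does.
	 * KASAN_SHADOW_SCALE_SHIFT and KASAN_SHADOW_OFFSET come from
	 * arch/x86/include/asm/kasan.h.
	 *
	 * Generic KASAN tracks every 8 bytes of address space with one
	 * shadow byte at a fixed offset:
	 *
	 *   shadow = (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
	 *
	 * Instrumented accesses to the cpu_entry_area DS mapping load from
	 * this shadow address.  If no shadow page was ever populated for
	 * the range, that load hits a not-present PTE and triggers a #PF.
	 */
	static inline u8 *kasan_shadow_of(const void *addr)
	{
		return (u8 *)(((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
			      + KASAN_SHADOW_OFFSET);
	}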

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Cc: Andrey Ryabinin <ryabinin.a.a@...il.com>
Cc: Dmitry Vyukov <dvyukov@...gle.com>
Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
 arch/x86/mm/cpu_entry_area.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index d831aae94b41..64ae557ceb22 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -91,13 +91,12 @@ void cea_set_pte(void *cea_vaddr, phys_addr_t pa, pgprot_t flags)
 static void __init
 cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
 {
-	phys_addr_t pa = per_cpu_ptr_to_phys(ptr);
+	int nid = ptr ? early_pfn_to_nid(PFN_DOWN(per_cpu_ptr_to_phys(ptr))) : 0;
 
-	kasan_populate_shadow_for_vaddr(cea_vaddr, pages * PAGE_SIZE,
-					early_pfn_to_nid(PFN_DOWN(pa)));
+	kasan_populate_shadow_for_vaddr(cea_vaddr, pages * PAGE_SIZE, nid);
 
 	for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
-		cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
+		cea_set_pte(cea_vaddr, ptr ? per_cpu_ptr_to_phys(ptr) : 0, prot);
 }
 
 static void __init percpu_setup_debug_store(unsigned int cpu)
@@ -121,8 +120,7 @@ static void __init percpu_setup_debug_store(unsigned int cpu)
 	 * memory like debug store buffers.
 	 */
 	npages = sizeof(struct debug_store_buffers) / PAGE_SIZE;
-	for (; npages; npages--, cea += PAGE_SIZE)
-		cea_set_pte(cea, 0, PAGE_NONE);
+	cea_map_percpu_pages(cea, NULL, npages, PAGE_NONE);
 #endif
 }
 
-- 
2.38.1.431.g37b22c650d-goog
