Message-ID: <20200806225651.GC24304@suse.de>
Date:   Fri, 7 Aug 2020 00:56:51 +0200
From:   Joerg Roedel <jroedel@...e.de>
To:     "Jason A. Donenfeld" <Jason@...c4.com>
Cc:     Ingo Molnar <mingo@...nel.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
        Borislav Petkov <bp@...en8.de>,
        Peter Zijlstra <a.p.zijlstra@...llo.nl>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [GIT PULL] x86/mm changes for v5.9

On Wed, Aug 05, 2020 at 01:03:48PM +0200, Jason A. Donenfeld wrote:
> BUG: unable to handle page fault for address: ffffe8ffffd00608

Okay, looks like my usage of the page-table macros is bogus; it seems I
don't understand their usage as well as I thought. The p?d_none checks
in the allocation path are wrong and led to the bug. In effect, only the
first PUD entry was allocated, and the later iterations of the loop
always ended up on the same PUD entry.

I still don't fully understand why, but it's late here and my head is
spinning, so I'll take another look tomorrow in the hope of
understanding it better. Please remind me not to take vacation during a
merge window again :)

Anyway...

Jason, does the attached diff fix the issue in your testing? For me it
makes all PUD pages correctly allocated.

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index c7a47603537f..e4abf73167d0 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -696,6 +696,7 @@ static void __init memory_map_bottom_up(unsigned long map_start,
 static void __init init_trampoline(void)
 {
 #ifdef CONFIG_X86_64
+
 	if (!kaslr_memory_enabled())
 		trampoline_pgd_entry = init_top_pgt[pgd_index(__PAGE_OFFSET)];
 	else
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index e65b96f381a7..351fac590b02 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1248,27 +1248,23 @@ static void __init preallocate_vmalloc_pages(void)
 		p4d_t *p4d;
 		pud_t *pud;
 
-		p4d = p4d_offset(pgd, addr);
-		if (p4d_none(*p4d)) {
-			/* Can only happen with 5-level paging */
-			p4d = p4d_alloc(&init_mm, pgd, addr);
-			if (!p4d) {
-				lvl = "p4d";
-				goto failed;
-			}
+		p4d = p4d_alloc(&init_mm, pgd, addr);
+		if (!p4d) {
+			lvl = "p4d";
+			goto failed;
 		}
 
 		if (pgtable_l5_enabled())
 			continue;
 
-		pud = pud_offset(p4d, addr);
-		if (pud_none(*pud)) {
-			/* Ends up here only with 4-level paging */
-			pud = pud_alloc(&init_mm, p4d, addr);
-			if (!pud) {
-				lvl = "pud";
-				goto failed;
-			}
+		/*
+		 * With 4-level paging the P4D is folded, so allocate a
+		 * PUD to have one level below PGD present.
+		 */
+		pud = pud_alloc(&init_mm, p4d, addr);
+		if (!pud) {
+			lvl = "pud";
+			goto failed;
 		}
 	}
 
