Message-Id: <21cbc2822aa18aa812c0215f4231dbf5f65afa7f.1469249789.git.luto@kernel.org>
Date:	Fri, 22 Jul 2016 21:58:08 -0700
From:	Andy Lutomirski <luto@...nel.org>
To:	x86@...nel.org
Cc:	Borislav Petkov <bp@...en8.de>, linux-kernel@...r.kernel.org,
	Brian Gerst <brgerst@...il.com>,
	Andy Lutomirski <luto@...nel.org>,
	Mike Krinkin <krinkin.m.u@...il.com>
Subject: [PATCH] x86/mm/cpa: Unbreak populate_pgd(): stop trying to deallocate failed PUDs

This mostly reverts commit 360cb4d15567a7eca07a5f3ade6de308bbfb4e70.

I broke the case where a PUD table got allocated -- populate_pud()
would wander off a pgd_none entry and get lost.  I'm not sure how
this survived my testing.

Fixing this directly is difficult or impossible because of the awful
state of Linux's page table accessors.

Instead, fix the original issue in a much simpler way.  The problem
was that, if we allocated a PUD table, failed to populate it, and
freed it, another CPU could potentially keep using the PGD entry we
installed (either by copying it via vmalloc_fault or by speculatively
caching it).  There's a straightforward fix: simply leave the
top-level entry in place if this happens.  This can't waste any
significant amount of memory -- there are at most 256 entries like
this systemwide and, as a practical matter, if we hit this failure
path repeatedly, we're likely to reuse the same page anyway.

For context, this is a reversion with this hunk added in:

	if (ret < 0) {
+		/*
+		 * Leave the PUD page in place in case some other CPU or thread
+		 * already found it, but remove any useless entries we just
+		 * added to it.
+		 */
-		unmap_pgd_range(cpa->pgd, addr,
+		unmap_pud_range(pgd_entry, addr,
			        addr + (cpa->numpages << PAGE_SHIFT));
		return ret;
	}

This effectively open-codes what the now-deleted unmap_pgd_range()
function used to do, except that unmap_pgd_range() also tried to free
the page.
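
For anyone without the old tree handy, the deleted helper looked roughly
like this (reconstructed from memory, so the details may be off):

	static void unmap_pgd_range(pgd_t *root, unsigned long addr,
				    unsigned long end)
	{
		pgd_t *pgd_entry = root + pgd_index(addr);

		unmap_pud_range(pgd_entry, addr, end);

		if (try_to_free_pud_page((pud_t *)pgd_page_vaddr(*pgd_entry)))
			pgd_clear(pgd_entry);
	}

The new error path keeps the unmap_pud_range() call and drops the
try_to_free_pud_page()/pgd_clear() part.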

Cc: Mike Krinkin <krinkin.m.u@...il.com>
Reported-by: Valdis Kletnieks <Valdis.Kletnieks@...edu>
Signed-off-by: Andy Lutomirski <luto@...nel.org>
---
 arch/x86/mm/pageattr.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 26c93c6e04a0..2bc6ea153f76 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1082,6 +1082,8 @@ static int populate_pgd(struct cpa_data *cpa, unsigned long addr)
 		pud = (pud_t *)get_zeroed_page(GFP_KERNEL | __GFP_NOTRACK);
 		if (!pud)
 			return -1;
+
+		set_pgd(pgd_entry, __pgd(__pa(pud) | _KERNPG_TABLE));
 	}
 
 	pgprot_val(pgprot) &= ~pgprot_val(cpa->mask_clr);
@@ -1089,16 +1091,11 @@ static int populate_pgd(struct cpa_data *cpa, unsigned long addr)
 
 	ret = populate_pud(cpa, addr, pgd_entry, pgprot);
 	if (ret < 0) {
-		if (pud)
-			free_page((unsigned long)pud);
 		unmap_pud_range(pgd_entry, addr,
 				addr + (cpa->numpages << PAGE_SHIFT));
 		return ret;
 	}
 
-	if (pud)
-		set_pgd(pgd_entry, __pgd(__pa(pud) | _KERNPG_TABLE));
-
 	cpa->numpages = ret;
 	return 0;
 }
-- 
2.7.4
