Date:   Thu, 30 Aug 2018 14:45:17 -0700
From:   Tony Luck <tony.luck@...el.com>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     Tony Luck <tony.luck@...el.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H. Peter Anvin" <hpa@...or.com>, Borislav Petkov <bp@...en8.de>,
        linux-edac@...r.kernel.org, linux-kernel@...r.kernel.org,
        x86@...nel.org, Dan Williams <dan.j.williams@...el.com>,
        Dave Jiang <dave.jiang@...el.com>
Subject: [PATCH] x86/mce: Fix set_mce_nospec() to avoid #GP fault

The trick of flipping bit 63 to avoid loading the address of the
1:1 mapping of the poisoned page while we update the 1:1 map worked
when all we wanted was to unmap the page. But it falls down horribly
when we try to directly set the page uncacheable.

The problem is that when we change the cache mode to uncacheable we
try to flush the page from the cache. But the decoy address is
non-canonical, so the CLFLUSH instruction throws a #GP fault.

The fix is to move one step at a time. First mark the page not present
(using the decoy address). Then it is safe to use the actual address
of the 1:1 mapping to mark it "uc", and finally to mark it present again.

Fixes: 284ce4011ba6 ("x86/memory_failure: Introduce {set, clear}_mce_nospec()")
Signed-off-by: Tony Luck <tony.luck@...el.com>
---

Maybe this is horrible. Other suggestions gratefully received.

 arch/x86/include/asm/set_memory.h | 21 +++++++++++++++++++--
 arch/x86/mm/pageattr.c            |  5 +++++
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 07a25753e85c..e876860988bf 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -43,6 +43,7 @@ int set_memory_wc(unsigned long addr, int numpages);
 int set_memory_wt(unsigned long addr, int numpages);
 int set_memory_wb(unsigned long addr, int numpages);
 int set_memory_np(unsigned long addr, int numpages);
+int set_memory_p(unsigned long addr, int numpages);
 int set_memory_4k(unsigned long addr, int numpages);
 int set_memory_encrypted(unsigned long addr, int numpages);
 int set_memory_decrypted(unsigned long addr, int numpages);
@@ -111,9 +112,25 @@ static inline int set_mce_nospec(unsigned long pfn)
 	 */
 	decoy_addr = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ BIT(63));
 
-	rc = set_memory_uc(decoy_addr, 1);
-	if (rc)
+	rc = set_memory_np(decoy_addr, 1);
+	if (rc) {
 		pr_warn("Could not invalidate pfn=0x%lx from 1:1 map\n", pfn);
+		return rc;
+	}
+
+	/* Now safe to use the virtual address in the 1:1 map */
+	rc = set_memory_uc((unsigned long)pfn_to_kaddr(pfn), 1);
+	if (rc) {
+		pr_warn("Could not set pfn=0x%lx uncacheable in 1:1 map\n", pfn);
+		return rc;
+	}
+
+	rc = set_memory_p((unsigned long)pfn_to_kaddr(pfn), 1);
+	if (rc) {
+		pr_warn("Could not remap pfn=0x%lx uncacheable in 1:1 map\n", pfn);
+		return rc;
+	}
+
 	return rc;
 }
 #define set_mce_nospec set_mce_nospec
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 8d6c34fe49be..87400351c5a0 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1776,6 +1776,11 @@ int set_memory_np(unsigned long addr, int numpages)
 	return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_PRESENT), 0);
 }
 
+int set_memory_p(unsigned long addr, int numpages)
+{
+	return change_page_attr_set(&addr, numpages, __pgprot(_PAGE_PRESENT), 0);
+}
+
 int set_memory_np_noalias(unsigned long addr, int numpages)
 {
 	int cpa_flags = CPA_NO_CHECK_ALIAS;
-- 
2.17.1
