Message-ID: <20231230161954.569267-12-michael.roth@amd.com>
Date: Sat, 30 Dec 2023 10:19:39 -0600
From: Michael Roth <michael.roth@....com>
To: <x86@...nel.org>
CC: <kvm@...r.kernel.org>, <linux-coco@...ts.linux.dev>, <linux-mm@...ck.org>,
	<linux-crypto@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
	<tglx@...utronix.de>, <mingo@...hat.com>, <jroedel@...e.de>,
	<thomas.lendacky@....com>, <hpa@...or.com>, <ardb@...nel.org>,
	<pbonzini@...hat.com>, <seanjc@...gle.com>, <vkuznets@...hat.com>,
	<jmattson@...gle.com>, <luto@...nel.org>, <dave.hansen@...ux.intel.com>,
	<slp@...hat.com>, <pgonda@...gle.com>, <peterz@...radead.org>,
	<srinivas.pandruvada@...ux.intel.com>, <rientjes@...gle.com>,
	<tobin@....com>, <bp@...en8.de>, <vbabka@...e.cz>, <kirill@...temov.name>,
	<ak@...ux.intel.com>, <tony.luck@...el.com>,
	<sathyanarayanan.kuppuswamy@...ux.intel.com>, <alpergun@...gle.com>,
	<jarkko@...nel.org>, <ashish.kalra@....com>, <nikunj.dadhania@....com>,
	<pankaj.gupta@....com>, <liam.merwick@...cle.com>, <zhi.a.wang@...el.com>,
	Brijesh Singh <brijesh.singh@....com>
Subject: [PATCH v1 11/26] x86/sev: Invalidate pages from the direct map when adding them to the RMP table

From: Brijesh Singh <brijesh.singh@....com>

The integrity guarantee of SEV-SNP is enforced through the RMP table.
The RMP is used with standard x86 and IOMMU page tables to enforce
memory restrictions and page access rights. The RMP check is enforced as
soon as SEV-SNP is enabled globally in the system. When hardware
encounters an RMP-check failure, it raises a page-fault exception.

If the kernel uses a 2MB directmap mapping to write to an address, and
that 2MB range happens to contain a 4KB page that is set to private in
the RMP table, the write will also lead to a page-fault exception.
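
For illustration only (not part of this patch), the failing access pattern
looks roughly like the sketch below; example_host_write() and shared_pfn are
made-up names, and shared_pfn is assumed to be a shared 4KB page that happens
to sit under the same 2MB direct-map mapping as a private page:

	/* Illustration only: a host write that trips the RMP check. */
	static void example_host_write(unsigned long shared_pfn)
	{
		/*
		 * The target page itself is shared, but the kernel maps it via
		 * a 2MB direct-map PMD. If another 4KB page under that same 2MB
		 * mapping is assigned (private) in the RMP table, the RMP check
		 * fails and hardware raises a page-fault exception on this write.
		 */
		void *va = page_address(pfn_to_page(shared_pfn));

		memset(va, 0, PAGE_SIZE);
	}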

Prevent this by removing pages from the directmap prior to setting them
as private in the RMP table, and then re-adding them to the directmap
when they are set back to shared.
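
In sketch form, the resulting ordering inside rmpupdate() is as follows (the
helper names are the ones introduced by this patch):

	/* Private transition: remove the direct-map alias before RMPUPDATE. */
	if (state->assigned && invalidate_direct_map(pfn, npages))
		return -EFAULT;

	/* ... execute the RMPUPDATE instruction ... */

	/* Shared transition: re-add the alias only after RMPUPDATE succeeds. */
	if (!state->assigned && restore_direct_map(pfn, npages))
		return -EFAULT;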

Signed-off-by: Brijesh Singh <brijesh.singh@....com>
Co-developed-by: Ashish Kalra <ashish.kalra@....com>
Signed-off-by: Ashish Kalra <ashish.kalra@....com>
Signed-off-by: Michael Roth <michael.roth@....com>
---
 arch/x86/virt/svm/sev.c | 58 +++++++++++++++++++++++++++++++++++++++--
 1 file changed, 56 insertions(+), 2 deletions(-)

diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
index ff9fa0a85a7f..ee182351d93a 100644
--- a/arch/x86/virt/svm/sev.c
+++ b/arch/x86/virt/svm/sev.c
@@ -369,14 +369,64 @@ int psmash(u64 pfn)
 }
 EXPORT_SYMBOL_GPL(psmash);
 
+static int restore_direct_map(u64 pfn, int npages)
+{
+	int i, ret = 0;
+
+	for (i = 0; i < npages; i++) {
+		ret = set_direct_map_default_noflush(pfn_to_page(pfn + i));
+		if (ret)
+			break;
+	}
+
+	if (ret)
+		pr_warn("Failed to restore direct map for pfn 0x%llx, ret: %d\n",
+			pfn + i, ret);
+
+	return ret;
+}
+
+static int invalidate_direct_map(u64 pfn, int npages)
+{
+	int i, ret = 0;
+
+	for (i = 0; i < npages; i++) {
+		ret = set_direct_map_invalid_noflush(pfn_to_page(pfn + i));
+		if (ret)
+			break;
+	}
+
+	if (ret) {
+		pr_warn("Failed to invalidate direct map for pfn 0x%llx, ret: %d\n",
+			pfn + i, ret);
+		restore_direct_map(pfn, i);
+	}
+
+	return ret;
+}
+
 static int rmpupdate(u64 pfn, struct rmp_state *state)
 {
 	unsigned long paddr = pfn << PAGE_SHIFT;
-	int ret;
+	int ret, level, npages;
 
 	if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
 		return -ENODEV;
 
+	level = RMP_TO_PG_LEVEL(state->pagesize);
+	npages = page_level_size(level) / PAGE_SIZE;
+
+	/*
+	 * If the kernel uses a 2MB directmap mapping to write to an address,
+	 * and that 2MB range happens to contain a 4KB page that is set to private
+	 * in the RMP table, an RMP #PF will trigger and cause a host crash.
+	 *
+	 * Prevent this by removing pages from the directmap prior to setting
+	 * them as private in the RMP table.
+	 */
+	if (state->assigned && invalidate_direct_map(pfn, npages))
+		return -EFAULT;
+
 	do {
 		/* Binutils version 2.36 supports the RMPUPDATE mnemonic. */
 		asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFE"
@@ -386,12 +436,16 @@ static int rmpupdate(u64 pfn, struct rmp_state *state)
 	} while (ret == RMPUPDATE_FAIL_OVERLAP);
 
 	if (ret) {
-		pr_err("RMPUPDATE failed for PFN %llx, ret: %d\n", pfn, ret);
+		pr_err("RMPUPDATE failed for PFN %llx, pg_level: %d, ret: %d\n",
+		       pfn, level, ret);
 		dump_rmpentry(pfn);
 		dump_stack();
 		return -EFAULT;
 	}
 
+	if (!state->assigned && restore_direct_map(pfn, npages))
+		return -EFAULT;
+
 	return 0;
 }
 
-- 
2.25.1

