Message-ID: <172563937606.2215.16977108983055159109.tip-bot2@tip-bot2>
Date: Fri, 06 Sep 2024 16:16:16 -0000
From: "tip-bot2 for Aaron Lu" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: "Molina Sabido, Gerardo" <gerardo.molina.sabido@...el.com>,
 Aaron Lu <aaron.lu@...el.com>, Dave Hansen <dave.hansen@...ux.intel.com>,
 Kai Huang <kai.huang@...el.com>, Jarkko Sakkinen <jarkko@...nel.org>,
 Zhimin Luo <zhimin.luo@...el.com>, x86@...nel.org,
 linux-kernel@...r.kernel.org
Subject: [tip: x86/sgx] x86/sgx: Fix deadlock in SGX NUMA node search

The following commit has been merged into the x86/sgx branch of tip:

Commit-ID:     9c936844010466535bd46ea4ce4656ef17653644
Gitweb:        https://git.kernel.org/tip/9c936844010466535bd46ea4ce4656ef17653644
Author:        Aaron Lu <aaron.lu@...el.com>
AuthorDate:    Thu, 05 Sep 2024 16:08:54 +08:00
Committer:     Dave Hansen <dave.hansen@...ux.intel.com>
CommitterDate: Thu, 05 Sep 2024 15:20:47 -07:00

x86/sgx: Fix deadlock in SGX NUMA node search

When the current node doesn't have an EPC section configured by firmware
and all other EPC sections are used up, the CPU can get stuck indefinitely
inside the while loop that looks for an available EPC page on remote
nodes, leading to a soft lockup. Note that nid_of_current will never be
equal to nid in that while loop because nid_of_current is not set in
sgx_numa_mask.
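
As a minimal, self-contained illustration (not part of the patch), the
user-space sketch below models the old loop with a simplified stand-in
for next_node_in() and sgx_numa_mask; when the current node is not in
the mask, the termination test nid == nid_of_current can never fire:

#include <stdbool.h>
#include <stdio.h>

#define MAX_NODES 4

/* Simplified node mask: bit i set means node i has an EPC section. */
static unsigned int sgx_numa_mask = 0x6;        /* nodes 1 and 2 only */

/* Wrap-around search for the next set bit after nid, like next_node_in(). */
static int next_node_in(int nid, unsigned int mask)
{
        for (int i = 1; i <= MAX_NODES; i++) {
                int candidate = (nid + i) % MAX_NODES;

                if (mask & (1u << candidate))
                        return candidate;
        }
        return MAX_NODES;       /* empty mask */
}

int main(void)
{
        int nid_of_current = 0; /* current node has no EPC section */
        int nid = nid_of_current;
        int iterations = 0;

        /* Old loop shape: only exits when nid wraps back to nid_of_current. */
        while (true) {
                nid = next_node_in(nid, sgx_numa_mask);
                if (nid == nid_of_current)
                        break;  /* never taken: node 0 is not in the mask */

                /* Pretend allocation fails on every node (all EPC used up). */

                if (++iterations > 10) {
                        printf("still looping after %d iterations\n", iterations);
                        break;  /* guard so this demo terminates */
                }
        }
        return 0;
}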

Also worth mentioning is that it's perfectly fine for the firmware not
to set up an EPC section on a node. While setting up an EPC section on
each node can enhance performance, it is not a requirement for
functionality.

Rework the loop to start and end on *a* node that has SGX memory. This
avoids deadlocking while waiting for the current, SGX-lacking node to
show up in the loop, which will never happen.

Fixes: 901ddbb9ecf5 ("x86/sgx: Add a basic NUMA allocation scheme to sgx_alloc_epc_page()")
Reported-by: "Molina Sabido, Gerardo" <gerardo.molina.sabido@...el.com>
Signed-off-by: Aaron Lu <aaron.lu@...el.com>
Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
Reviewed-by: Kai Huang <kai.huang@...el.com>
Reviewed-by: Jarkko Sakkinen <jarkko@...nel.org>
Acked-by: Dave Hansen <dave.hansen@...ux.intel.com>
Tested-by: Zhimin Luo <zhimin.luo@...el.com>
Link: https://lore.kernel.org/all/20240905080855.1699814-2-aaron.lu%40intel.com
---
 arch/x86/kernel/cpu/sgx/main.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 27892e5..6aeeb43 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -475,24 +475,25 @@ struct sgx_epc_page *__sgx_alloc_epc_page(void)
 {
 	struct sgx_epc_page *page;
 	int nid_of_current = numa_node_id();
-	int nid = nid_of_current;
+	int nid_start, nid;
 
-	if (node_isset(nid_of_current, sgx_numa_mask)) {
-		page = __sgx_alloc_epc_page_from_node(nid_of_current);
-		if (page)
-			return page;
-	}
-
-	/* Fall back to the non-local NUMA nodes: */
-	while (true) {
-		nid = next_node_in(nid, sgx_numa_mask);
-		if (nid == nid_of_current)
-			break;
+	/*
+	 * Try local node first. If it doesn't have an EPC section,
+	 * fall back to the non-local NUMA nodes.
+	 */
+	if (node_isset(nid_of_current, sgx_numa_mask))
+		nid_start = nid_of_current;
+	else
+		nid_start = next_node_in(nid_of_current, sgx_numa_mask);
 
+	nid = nid_start;
+	do {
 		page = __sgx_alloc_epc_page_from_node(nid);
 		if (page)
 			return page;
-	}
+
+		nid = next_node_in(nid, sgx_numa_mask);
+	} while (nid != nid_start);
 
 	return ERR_PTR(-ENOMEM);
 }
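
As a companion to the sketch above (again an illustration, not kernel
code), the same simplified next_node_in() shows why the reworked loop
terminates: nid_start is always a node that is set in the mask, so the
do/while walk returns to it after visiting every EPC-bearing node once
and then falls through to the -ENOMEM return:

#include <stdio.h>

#define MAX_NODES 4

static unsigned int sgx_numa_mask = 0x6;        /* nodes 1 and 2 only */

static int node_isset(int nid, unsigned int mask)
{
        return !!(mask & (1u << nid));
}

static int next_node_in(int nid, unsigned int mask)
{
        for (int i = 1; i <= MAX_NODES; i++) {
                int candidate = (nid + i) % MAX_NODES;

                if (mask & (1u << candidate))
                        return candidate;
        }
        return MAX_NODES;       /* empty mask */
}

int main(void)
{
        int nid_of_current = 0; /* no EPC section on the current node */
        int nid_start, nid;

        if (node_isset(nid_of_current, sgx_numa_mask))
                nid_start = nid_of_current;
        else
                nid_start = next_node_in(nid_of_current, sgx_numa_mask);

        nid = nid_start;
        do {
                printf("trying node %d\n", nid);        /* allocation assumed to fail */
                nid = next_node_in(nid, sgx_numa_mask);
        } while (nid != nid_start);

        printf("all EPC nodes exhausted, would return -ENOMEM\n");
        return 0;
}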
