Message-ID: <20250604193947.11834-8-yury.norov@gmail.com>
Date: Wed,  4 Jun 2025 15:39:43 -0400
From: Yury Norov <yury.norov@...il.com>
To: Dennis Dalessandro <dennis.dalessandro@...nelisnetworks.com>,
	Jason Gunthorpe <jgg@...pe.ca>,
	Leon Romanovsky <leon@...nel.org>,
	Yury Norov <yury.norov@...il.com>,
	Rasmus Villemoes <linux@...musvillemoes.dk>,
	linux-rdma@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: [PATCH 7/7] RDMA: hfi1: drop cpumask_empty() calls in hfi1/affinity.c

From: "Yury Norov [NVIDIA]" <yury.norov@...il.com>

In a few places, the driver tests a cpumask for emptiness immediately
before calling functions that report emptiness themselves:
cpumask_first() returns nr_cpu_ids for an empty mask, and
cpumask_andnot() returns false when the resulting mask is empty. Drop
the redundant cpumask_empty() calls.
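
For illustration, a minimal sketch of the simplification (the masks
"mask" and "fallback" are placeholders, not driver code), relying on
cpumask_first() returning nr_cpu_ids for an empty mask:

	/* Before: explicit emptiness check */
	if (!cpumask_empty(mask))
		cpu = cpumask_first(mask);
	else
		cpu = cpumask_first(fallback);

	/* After: cpumask_first() already reports emptiness */
	cpu = cpumask_first(mask);
	if (cpu >= nr_cpu_ids)
		cpu = cpumask_first(fallback);

Likewise, cpumask_andnot(dst, a, b) returns true iff the resulting
mask is non-empty, so its return value can be tested directly instead
of following it with a cpumask_empty(dst) check.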

Signed-off-by: Yury Norov [NVIDIA] <yury.norov@...il.com>
---
 drivers/infiniband/hw/hfi1/affinity.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
index 8974aa1e63d1..ee7fedc67b86 100644
--- a/drivers/infiniband/hw/hfi1/affinity.c
+++ b/drivers/infiniband/hw/hfi1/affinity.c
@@ -337,9 +337,10 @@ static int _dev_comp_vect_cpu_get(struct hfi1_devdata *dd,
 		       &entry->def_intr.used);
 
 	/* If there are non-interrupt CPUs available, use them first */
-	if (!cpumask_empty(non_intr_cpus))
-		cpu = cpumask_first(non_intr_cpus);
-	else /* Otherwise, use interrupt CPUs */
+	cpu = cpumask_first(non_intr_cpus);
+
+	/* Otherwise, use interrupt CPUs */
+	if (cpu >= nr_cpu_ids)
 		cpu = cpumask_first(available_cpus);
 
 	if (cpu >= nr_cpu_ids) { /* empty */
@@ -1080,8 +1081,7 @@ int hfi1_get_proc_affinity(int node)
 		 * loop as the used mask gets reset when
 		 * (set->mask == set->used) before this loop.
 		 */
-		cpumask_andnot(diff, hw_thread_mask, &set->used);
-		if (!cpumask_empty(diff))
+		if (cpumask_andnot(diff, hw_thread_mask, &set->used))
 			break;
 	}
 	hfi1_cdbg(PROC, "Same available HW thread on all physical CPUs: %*pbl",
@@ -1113,8 +1113,7 @@ int hfi1_get_proc_affinity(int node)
 	 *    used for process assignments using the same method as
 	 *    the preferred NUMA node.
 	 */
-	cpumask_andnot(diff, available_mask, intrs_mask);
-	if (!cpumask_empty(diff))
+	if (cpumask_andnot(diff, available_mask, intrs_mask))
 		cpumask_copy(available_mask, diff);
 
 	/* If we don't have CPUs on the preferred node, use other NUMA nodes */
@@ -1130,8 +1129,7 @@ int hfi1_get_proc_affinity(int node)
 		 * At first, we don't want to place processes on the same
 		 * CPUs as interrupt handlers.
 		 */
-		cpumask_andnot(diff, available_mask, intrs_mask);
-		if (!cpumask_empty(diff))
+		if (cpumask_andnot(diff, available_mask, intrs_mask))
 			cpumask_copy(available_mask, diff);
 	}
 	hfi1_cdbg(PROC, "Possible CPUs for process: %*pbl",
-- 
2.43.0

