Message-ID: <20250604193947.11834-6-yury.norov@gmail.com>
Date: Wed,  4 Jun 2025 15:39:41 -0400
From: Yury Norov <yury.norov@...il.com>
To: Dennis Dalessandro <dennis.dalessandro@...nelisnetworks.com>,
	Jason Gunthorpe <jgg@...pe.ca>,
	Leon Romanovsky <leon@...nel.org>,
	Yury Norov <yury.norov@...il.com>,
	Rasmus Villemoes <linux@...musvillemoes.dk>,
	linux-rdma@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: [PATCH 5/7] RDMA: hfi1: use rounddown in find_hw_thread_mask()

From: "Yury Norov [NVIDIA]" <yury.norov@...il.com>

num_cores_per_socket is calculated by dividing node_affinity.num_online_cpus /
affinity->num_core_siblings by node_affinity.num_online_nodes, but every user
of the variable then multiplies it by node_affinity.num_online_nodes again.
Dividing by an integer and multiplying the quotient back by the same integer
is simply rounding down to a multiple of that integer, so compute the value
once with rounddown() and drop the repeated multiplications.
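
For illustration only, not part of the change itself: a minimal userspace
sketch of the arithmetic the patch relies on, using a local stand-in for the
kernel's rounddown() from <linux/math.h> and made-up example numbers. With
integer division, dividing by a value and then multiplying the quotient by
that same value yields the largest multiple of the value not exceeding the
dividend:

	#include <assert.h>
	#include <stdio.h>

	/* local stand-in for the kernel's rounddown() macro */
	#define rounddown(x, y)  ((x) - ((x) % (y)))

	int main(void)
	{
		/* hypothetical topology: 22 online CPUs, 2 siblings, 4 nodes */
		unsigned int cpus = 22, siblings = 2, nodes = 4;
		unsigned int per_socket = cpus / siblings / nodes;       /* old code: 2 */
		unsigned int cores = rounddown(cpus / siblings, nodes);  /* new code: 8 */

		/* every user of per_socket multiplied it by nodes again */
		assert(per_socket * nodes == cores);
		printf("%u == %u\n", per_socket * nodes, cores);
		return 0;
	}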

Signed-off-by: Yury Norov [NVIDIA] <yury.norov@...il.com>
---
 drivers/infiniband/hw/hfi1/affinity.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
index b2884226827a..7fa894c23fea 100644
--- a/drivers/infiniband/hw/hfi1/affinity.c
+++ b/drivers/infiniband/hw/hfi1/affinity.c
@@ -955,27 +955,22 @@ static void find_hw_thread_mask(uint hw_thread_no, cpumask_var_t hw_thread_mask,
 				struct hfi1_affinity_node_list *affinity)
 {
 	int curr_cpu;
-	uint num_cores_per_socket;
+	uint num_cores;
 
 	cpumask_copy(hw_thread_mask, &affinity->proc.mask);
 
 	if (affinity->num_core_siblings == 0)
 		return;
 
-	num_cores_per_socket = node_affinity.num_online_cpus /
-					affinity->num_core_siblings /
-						node_affinity.num_online_nodes;
+	num_cores = rounddown(node_affinity.num_online_cpus / affinity->num_core_siblings,
+				node_affinity.num_online_nodes);
 
 	/* Removing other siblings not needed for now */
-	curr_cpu = cpumask_nth(num_cores_per_socket *
-			node_affinity.num_online_nodes, hw_thread_mask) + 1;
+	curr_cpu = cpumask_nth(num_cores, hw_thread_mask) + 1;
 	cpumask_clear_cpus(hw_thread_mask, curr_cpu, nr_cpu_ids - curr_cpu);
 
 	/* Identifying correct HW threads within physical cores */
-	cpumask_shift_left(hw_thread_mask, hw_thread_mask,
-			   num_cores_per_socket *
-			   node_affinity.num_online_nodes *
-			   hw_thread_no);
+	cpumask_shift_left(hw_thread_mask, hw_thread_mask, num_cores * hw_thread_no);
 }
 
 int hfi1_get_proc_affinity(int node)
-- 
2.43.0

