Message-ID: <20251020124646.2050459-1-ming.lei@redhat.com>
Date: Mon, 20 Oct 2025 20:46:46 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: linux-kernel@...r.kernel.org,
	linux-block@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Jens Axboe <axboe@...nel.dk>,
	Ming Lei <ming.lei@...hat.com>
Subject: [PATCH] lib/group_cpus: fix cross-NUMA CPU assignment in group_cpus_evenly

When numgrps > nodes, group_cpus_evenly() can incorrectly assign CPUs
from different NUMA nodes to the same group due to its wrapping logic.
This causes poor block IO performance, because IO completions then run
on a remote NUMA node. The cross-NUMA assignment is entirely avoidable
in the `numgrps > nodes` case, since each NUMA node is allocated at
least one group and may contain more CPUs than a single group needs.

The issue occurs when curgrp reaches last_grp and wraps to 0. This causes
CPUs from later-processed nodes to be added to groups that already contain
CPUs from earlier-processed nodes, violating NUMA locality.

Example with 8 NUMA nodes, 16 groups:
- Each node gets 2 groups allocated
- After processing nodes, curgrp reaches 16
- Wrapping to 0 causes CPUs from node N to be added to group 0 which
  already has CPUs from node 0
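
To illustrate, here is a minimal user-space sketch of that scenario
(not part of the patch; the two-groups-per-node layout and the
leftover CPUs on node 7 are illustrative assumptions):

#include <stdio.h>

#define GROUPS	16

int main(void)
{
	/*
	 * Node whose CPUs each group received earlier: two groups per
	 * node, so groups 0-1 -> node 0, ..., groups 14-15 -> node 7.
	 */
	int grp_node[GROUPS];
	unsigned int curgrp = GROUPS;	/* curgrp has reached 16 */
	int node = 7;			/* node with CPUs still to place */
	unsigned int old_grp, new_grp = 0;

	for (int g = 0; g < GROUPS; g++)
		grp_node[g] = g / 2;

	/* old behaviour: blind wrap to 0 mixes node 7 into node 0's group */
	old_grp = (curgrp >= GROUPS) ? 0 : curgrp;

	/*
	 * fixed behaviour, mirroring find_next_node_group(): pick the
	 * first group that already holds CPUs from this node
	 */
	for (unsigned int i = 0; i < GROUPS; i++) {
		if (grp_node[i] == node) {
			new_grp = i;
			break;
		}
	}

	printf("old: group %u (node %d) gets node %d's CPUs -> mixed\n",
	       old_grp, grp_node[old_grp], node);
	printf("new: group %u already serves node %d -> locality kept\n",
	       new_grp, node);
	return 0;
}

With these assumptions it prints group 0 for the old wrap (mixing node
7 into node 0's group) and group 14 for the new one.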

Fix this by adding a find_next_node_group() helper that searches,
starting from group 0, for a group that already contains CPUs from the
same NUMA node. When wrapping is needed, use this helper instead of
blindly wrapping to 0, ensuring CPUs are only added to groups on the
same NUMA node.

Signed-off-by: Ming Lei <ming.lei@...hat.com>
---
 lib/group_cpus.c | 28 +++++++++++++++++++++++++---
 1 file changed, 25 insertions(+), 3 deletions(-)

diff --git a/lib/group_cpus.c b/lib/group_cpus.c
index 6d08ac05f371..54d70271e2dd 100644
--- a/lib/group_cpus.c
+++ b/lib/group_cpus.c
@@ -246,6 +246,24 @@ static void alloc_nodes_groups(unsigned int numgrps,
 	}
 }
 
+/*
+ * Find the first group (scanning from 0) that contains CPUs from the
+ * specified NUMA node. Used when wrapping to avoid cross-NUMA assignment.
+ */
+static unsigned int find_next_node_group(struct cpumask *masks,
+					 unsigned int numgrps,
+					 const struct cpumask *node_cpus)
+{
+	unsigned int i;
+
+	for (i = 0; i < numgrps; i++) {
+		if (cpumask_intersects(&masks[i], node_cpus))
+			return i;
+	}
+
+	return 0;
+}
+
 static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
 			       cpumask_var_t *node_to_cpumask,
 			       const struct cpumask *cpu_mask,
@@ -315,11 +333,15 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
 			}
 
 			/*
-			 * wrapping has to be considered given 'startgrp'
-			 * may start anywhere
+			 * Wrapping has to be considered given 'startgrp'
+			 * may start anywhere. When wrapping, pick the first
+			 * group (scanning from 0) that already contains
+			 * CPUs from the same NUMA node to avoid mixing CPUs
+			 * from different NUMA nodes in the same group.
 			 */
 			if (curgrp >= last_grp)
-				curgrp = 0;
+				curgrp = find_next_node_group(masks, numgrps,
+							      node_to_cpumask[nv->id]);
 			grp_spread_init_one(&masks[curgrp], nmsk,
 						cpus_per_grp);
 		}
-- 
2.51.0

