Message-ID: <tip-c0af52437254fda8b0cdbaae5a9b6d9327f1fcd5@git.kernel.org>
Date:   Thu, 15 Dec 2016 03:37:13 -0800
From:   "tip-bot for Guilherme G. Piccoli" <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     gpiccoli@...ux.vnet.ibm.com, hch@....de, gabriel@...sman.be,
        hpa@...or.com, linux-kernel@...r.kernel.org,
        gwshan@...ux.vnet.ibm.com, tglx@...utronix.de, mingo@...nel.org
Subject: [tip:irq/urgent] genirq/affinity: Fix node generation from cpumask

Commit-ID:  c0af52437254fda8b0cdbaae5a9b6d9327f1fcd5
Gitweb:     http://git.kernel.org/tip/c0af52437254fda8b0cdbaae5a9b6d9327f1fcd5
Author:     Guilherme G. Piccoli <gpiccoli@...ux.vnet.ibm.com>
AuthorDate: Wed, 14 Dec 2016 16:01:12 -0200
Committer:  Thomas Gleixner <tglx@...utronix.de>
CommitDate: Thu, 15 Dec 2016 12:32:35 +0100

genirq/affinity: Fix node generation from cpumask

Commit 34c3d9819fda ("genirq/affinity: Provide smarter irq spreading
infrastructure") introduced a better IRQ spreading mechanism, taking
account of the available NUMA nodes in the machine.

The problem is that the nodemask-retrieval algorithm iterates linearly
over the number of online nodes, while some architectures, such as
PowerPC, have a non-linear node distribution in the nodemask. In that
case the algorithm produces a wrong node count and therefore a
bad/incomplete IRQ affinity distribution.

For example, this problem was observed on a machine with 128 CPUs and
two nodes, namely nodes 0 and 8 (instead of 0 and 1 as in a linear
distribution). The resulting wrong affinity distribution then led to a
bad mq allocation in the nvme driver.

Finally, we take the opportunity to fix a comment regarding the affinity
distribution when we have _more_ nodes than vectors.

Fixes: 34c3d9819fda ("genirq/affinity: Provide smarter irq spreading infrastructure")
Reported-by: Gabriel Krisman Bertazi <gabriel@...sman.be>
Signed-off-by: Guilherme G. Piccoli <gpiccoli@...ux.vnet.ibm.com>
Reviewed-by: Christoph Hellwig <hch@....de>
Reviewed-by: Gabriel Krisman Bertazi <gabriel@...sman.be>
Reviewed-by: Gavin Shan <gwshan@...ux.vnet.ibm.com>
Cc: linux-pci@...r.kernel.org
Cc: linuxppc-dev@...ts.ozlabs.org
Cc: hch@....de
Link: http://lkml.kernel.org/r/1481738472-2671-1-git-send-email-gpiccoli@linux.vnet.ibm.com
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>

---
 kernel/irq/affinity.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 9be9bda..4544b11 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -37,10 +37,10 @@ static void irq_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
 
 static int get_nodes_in_cpumask(const struct cpumask *mask, nodemask_t *nodemsk)
 {
-	int n, nodes;
+	int n, nodes = 0;
 
 	/* Calculate the number of nodes in the supplied affinity mask */
-	for (n = 0, nodes = 0; n < num_online_nodes(); n++) {
+	for_each_online_node(n) {
 		if (cpumask_intersects(mask, cpumask_of_node(n))) {
 			node_set(n, *nodemsk);
 			nodes++;
@@ -82,7 +82,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	nodes = get_nodes_in_cpumask(cpu_online_mask, &nodemsk);
 
 	/*
-	 * If the number of nodes in the mask is less than or equal the
+	 * If the number of nodes in the mask is greater than or equal the
 	 * number of vectors we just spread the vectors across the nodes.
 	 */
 	if (affv <= nodes) {
