Date:   Thu, 15 Dec 2016 10:24:25 +1100
From:   Gavin Shan <gwshan@...ux.vnet.ibm.com>
To:     "Guilherme G. Piccoli" <gpiccoli@...ux.vnet.ibm.com>
Cc:     tglx@...utronix.de, linux-kernel@...r.kernel.org,
        linux-pci@...r.kernel.org, hch@....de,
        linuxppc-dev@...ts.ozlabs.org, stable@...r.kernel.org # v4.9+,
        gabriel@...sman.be
Subject: Re: [PATCH] genirq/affinity: fix node generation from cpumask

On Wed, Dec 14, 2016 at 04:01:12PM -0200, Guilherme G. Piccoli wrote:
>Commit 34c3d9819fda ("genirq/affinity: Provide smarter irq spreading
>infrastructure") introduced a better IRQ spreading mechanism, taking
>into account the available NUMA nodes in the machine.
>
>The problem is that the algorithm that retrieves the nodemask iterates
>"linearly" over the number of online nodes, while some architectures,
>such as PowerPC, present a non-linear (sparse) node distribution. In
>that case the algorithm arrives at a wrong node count and therefore at
>a bad/incomplete IRQ affinity distribution.
>
>For example, this problem was found on a machine with 128 CPUs and two
>nodes, namely nodes 0 and 8 (instead of 0 and 1, as a linear
>distribution would give). This caused a wrong affinity distribution,
>which in turn led to a bad mq allocation for the nvme driver.
>
>Finally, we take the opportunity to fix a comment regarding the affinity
>distribution when we have _more_ nodes than vectors.
>
>Fixes: 34c3d9819fda ("genirq/affinity: Provide smarter irq spreading infrastructure")
>Reported-by: Gabriel Krisman Bertazi <gabriel@...sman.be>
>Signed-off-by: Guilherme G. Piccoli <gpiccoli@...ux.vnet.ibm.com>
>Cc: stable@...r.kernel.org # v4.9+
>Cc: Christoph Hellwig <hch@....de>
>Cc: linuxppc-dev@...ts.ozlabs.org
>Cc: linux-pci@...r.kernel.org
>---

Reviewed-by: Gavin Shan <gwshan@...ux.vnet.ibm.com>
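
For anyone hitting the same symptom on another platform, here is a tiny
user-space mock of the miscount. The two-node layout and the helper are
made up, mirroring the 0-and-8 case from the changelog; this is just an
illustration, not kernel code:

#include <stdio.h>

static const int online_node_ids[] = { 0, 8 };	/* made-up sparse layout */
#define NUM_ONLINE_NODES 2

/* stand-in for cpumask_intersects(mask, cpumask_of_node(n)) */
static int node_has_cpus(int n)
{
	return n == 0 || n == 8;
}

int main(void)
{
	int n, i, nodes;

	/* old pattern: assumes node IDs run 0..num_online_nodes()-1 */
	for (n = 0, nodes = 0; n < NUM_ONLINE_NODES; n++)
		if (node_has_cpus(n))
			nodes++;
	printf("linear loop sees %d node(s)\n", nodes);	/* prints 1: node 8 is missed */

	/* fixed pattern: walk the actual online node IDs */
	for (i = 0, nodes = 0; i < NUM_ONLINE_NODES; i++)
		if (node_has_cpus(online_node_ids[i]))
			nodes++;
	printf("node-id walk sees %d node(s)\n", nodes);	/* prints 2 */

	return 0;
}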

There is one picky comment below, but you don't have to fix it :)

> kernel/irq/affinity.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
>diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
>index 9be9bda..464eaf0 100644
>--- a/kernel/irq/affinity.c
>+++ b/kernel/irq/affinity.c
>@@ -37,15 +37,15 @@ static void irq_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
>
> static int get_nodes_in_cpumask(const struct cpumask *mask, nodemask_t *nodemsk)
> {
>-	int n, nodes;
>+	int n, nodes = 0;
>
> 	/* Calculate the number of nodes in the supplied affinity mask */
>-	for (n = 0, nodes = 0; n < num_online_nodes(); n++) {
>+	for_each_online_node(n)
> 		if (cpumask_intersects(mask, cpumask_of_node(n))) {
> 			node_set(n, *nodemsk);
> 			nodes++;
> 		}
>-	}
>+

It'd be better to keep the braces, so that we needn't add them back
when more code is added to the block later.
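
That is, keep the loop in the shape it already has in this diff, just
with the braces retained (all identifiers below are from the patch
itself):

	for_each_online_node(n) {
		if (cpumask_intersects(mask, cpumask_of_node(n))) {
			node_set(n, *nodemsk);
			nodes++;
		}
	}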

> 	return nodes;
> }
>
>@@ -82,7 +82,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
> 	nodes = get_nodes_in_cpumask(cpu_online_mask, &nodemsk);
>
> 	/*
>-	 * If the number of nodes in the mask is less than or equal the
>+	 * If the number of nodes in the mask is greater than or equal the
> 	 * number of vectors we just spread the vectors across the nodes.
> 	 */
> 	if (affv <= nodes) {
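
The comment fix also matches the code now: the check is affv <= nodes,
so e.g. with affv = 2 vectors and 4 nodes in the mask we take this
branch and, if I read the surrounding loop right, each vector simply
gets the cpumask of one node.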

Thanks,
Gavin
