Message-ID: <87oa0erogg.fsf@collabora.co.uk>
Date: Wed, 14 Dec 2016 23:05:51 -0200
From: Gabriel Krisman Bertazi <gabriel@...sman.be>
To: "Guilherme G. Piccoli" <gpiccoli@...ux.vnet.ibm.com>
Cc: tglx@...utronix.de, linux-kernel@...r.kernel.org,
gabriel@...sman.be, hch@....de, linuxppc-dev@...ts.ozlabs.org,
linux-pci@...r.kernel.org
Subject: Re: [PATCH] genirq/affinity: fix node generation from cpumask
"Guilherme G. Piccoli" <gpiccoli@...ux.vnet.ibm.com> writes:
> Commit 34c3d9819fda ("genirq/affinity: Provide smarter irq spreading
> infrastructure") introduced a better IRQ spreading mechanism, taking
> into account the available NUMA nodes in the machine.
>
> The problem is that the algorithm that retrieves the nodemask iterates
> "linearly", assuming node IDs run from 0 to the number of online nodes.
> Some architectures, like PowerPC, present a sparse (non-linear) node
> distribution in the nodemask. In that case, the algorithm produces a
> wrong node count and therefore a bad/incomplete IRQ affinity
> distribution.
>
> For example, this problem was found on a machine with 128 CPUs and two
> nodes, numbered 0 and 8 (instead of 0 and 1, as a linear distribution
> would give). This led to a wrong affinity distribution, which in turn
> led to a bad mq allocation for the nvme driver.
>
> Finally, we take the opportunity to fix a comment regarding the affinity
> distribution when we have _more_ nodes than vectors.
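
To make the failure mode concrete, here is a minimal, self-contained
userspace sketch of the miscount. It is not the kernel code: MAX_NODES,
node_online[] and the node layout are made up for illustration, modeling
the two sparse node IDs {0, 8} described above.

/*
 * Hypothetical model of a machine with two online NUMA nodes
 * whose IDs are sparse (0 and 8), as on the PowerPC box above.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_NODES 16

static const bool node_online[MAX_NODES] = { [0] = true, [8] = true };
static const int num_online_nodes = 2;

int main(void)
{
        int n, broken = 0, fixed = 0;

        /* Broken pattern: assumes node IDs are 0 .. num_online_nodes-1. */
        for (n = 0; n < num_online_nodes; n++)
                if (node_online[n])
                        broken++;

        /* Fixed pattern: test every possible ID against the online mask. */
        for (n = 0; n < MAX_NODES; n++)
                if (node_online[n])
                        fixed++;

        printf("broken: %d node(s) found\n", broken);   /* prints 1 */
        printf("fixed:  %d node(s) found\n", fixed);    /* prints 2 */
        return 0;
}

The kernel-side fix is analogous: iterate over the online node mask
(e.g. with the for_each_online_node() helper from <linux/nodemask.h>)
rather than assuming node IDs are consecutive.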

Thanks for taking care of this so quickly, Guilherme.

Reviewed-by: Gabriel Krisman Bertazi <gabriel@...sman.be>

--
Gabriel Krisman Bertazi