Message-ID: <d637847f-52c8-1b55-80e2-1b4d4523fa8a@gmail.com>
Date: Thu, 15 Dec 2016 23:34:21 +1100
From: Balbir Singh <bsingharora@...il.com>
To: "Guilherme G. Piccoli" <gpiccoli@...ux.vnet.ibm.com>,
tglx@...utronix.de, linux-kernel@...r.kernel.org
Cc: linux-pci@...r.kernel.org, hch@....de,
linuxppc-dev@...ts.ozlabs.org, gabriel@...sman.be
Subject: Re: [PATCH] genirq/affinity: fix node generation from cpumask
On 15/12/16 05:01, Guilherme G. Piccoli wrote:
> Commit 34c3d9819fda ("genirq/affinity: Provide smarter irq spreading
> infrastructure") introduced a better IRQ spreading mechanism, taking
> into account the available NUMA nodes in the machine.
>
> The problem is that the algorithm retrieving the nodemask iterates
> "linearly" based on the number of online nodes - some architectures
> present a non-linear node distribution in the nodemask, like PowerPC.
> In that case, the algorithm leads to a wrong node count and therefore
> to a bad/incomplete IRQ affinity distribution.
>
> For example, this problem was found on a machine with 128 CPUs and two
> nodes, namely nodes 0 and 8 (instead of 0 and 1, as they would be if
> linearly distributed). This led to a wrong affinity distribution, which
> in turn led to a bad mq allocation for the nvme driver.
>
> Finally, we take the opportunity to fix a comment regarding the affinity
> distribution when we have _more_ nodes than vectors.
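
For context, the pattern being fixed looks roughly like the sketch below. It is
illustrative only, following the shape of get_nodes_in_cpumask() in
kernel/irq/affinity.c, not the exact upstream diff:

static int get_nodes_in_cpumask(const struct cpumask *mask, nodemask_t *nodemsk)
{
	int n, nodes = 0;

	/*
	 * Broken pattern: assumes online node ids are contiguous, 0..N-1.
	 * On a machine whose online nodes are 0 and 8, num_online_nodes()
	 * is 2, so node 8 is never visited and the count is wrong:
	 *
	 *	for (n = 0; n < num_online_nodes(); n++) { ... }
	 */

	/* Correct pattern: walk the actual online node ids. */
	for_each_online_node(n) {
		if (cpumask_intersects(mask, cpumask_of_node(n))) {
			node_set(n, *nodemsk);
			nodes++;
		}
	}
	return nodes;
}

Walking the online nodemask directly makes no assumption about how node ids
are numbered, so sparse layouts such as {0, 8} are handled the same as {0, 1}.
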
Very good catch!
Acked-by: Balbir Singh <bsingharora@...il.com>