Message-Id: <20190128154341.47195-11-sashal@kernel.org>
Date: Mon, 28 Jan 2019 10:38:48 -0500
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Long Li <longli@...rosoft.com>,
Thomas Gleixner <tglx@...utronix.de>,
Michael Kelley <mikelley@...rosoft.com>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH AUTOSEL 4.20 011/304] genirq/affinity: Spread IRQs to all available NUMA nodes

From: Long Li <longli@...rosoft.com>

[ Upstream commit b82592199032bf7c778f861b936287e37ebc9f62 ]

If the number of NUMA nodes exceeds the number of MSI/MSI-X interrupts
which are allocated for a device, the interrupt affinity spreading code
fails to spread them across all nodes.

The reason is that the spreading code starts from node 0 and continues up
to the number of interrupts requested for allocation. This leaves the nodes
past the last interrupt unused.

This results in interrupt concentration on the first nodes, which violates
the assumption of the block layer that all nodes are covered evenly. As a
consequence the NUMA nodes above the number of interrupts are all assigned
to hardware queue 0 and therefore to NUMA node 0, which results in bad
performance and has CPU hotplug implications, because queue 0 gets shut
down when the last CPU of node 0 is offlined.

Go over all NUMA nodes and assign them round-robin to all requested
interrupts to solve this.

[ tglx: Massaged changelog ]

Signed-off-by: Long Li <longli@...rosoft.com>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Reviewed-by: Ming Lei <ming.lei@...hat.com>
Cc: Michael Kelley <mikelley@...rosoft.com>
Link: https://lkml.kernel.org/r/20181102180248.13583-1-longli@linuxonhyperv.com
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
 kernel/irq/affinity.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index f4f29b9d90ee..e12cdf637c71 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -117,12 +117,11 @@ static int irq_build_affinity_masks(const struct irq_affinity *affd,
 	 */
 	if (numvecs <= nodes) {
 		for_each_node_mask(n, nodemsk) {
-			cpumask_copy(masks + curvec, node_to_cpumask[n]);
-			if (++done == numvecs)
-				break;
+			cpumask_or(masks + curvec, masks + curvec, node_to_cpumask[n]);
 			if (++curvec == last_affv)
 				curvec = affd->pre_vectors;
 		}
+		done = numvecs;
 		goto out;
 	}
--
2.19.1
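
For illustration, a minimal user-space sketch (not kernel code) of the
numvecs <= nodes path before and after this patch; NUMVECS and NODES are
hypothetical values, and a node's cpumask is reduced to the index of the
vector it lands in, so only the loop structure mirrors the kernel code:

#include <stdio.h>

#define NUMVECS 4	/* hypothetical: MSI/MSI-X vectors granted */
#define NODES   7	/* hypothetical: NUMA nodes in the machine */

int main(void)
{
	int vec_old[NODES], vec_new[NODES];
	int n, curvec, done;

	for (n = 0; n < NODES; n++)
		vec_old[n] = vec_new[n] = -1;	/* -1: node got no vector */

	/* Old logic: stop once numvecs nodes have been handed a vector. */
	for (n = 0, curvec = 0, done = 0; n < NODES; n++) {
		vec_old[n] = curvec;
		if (++done == NUMVECS)
			break;		/* remaining nodes are never visited */
		if (++curvec == NUMVECS)
			curvec = 0;
	}

	/* New logic: visit every node, wrapping the vector round-robin. */
	for (n = 0, curvec = 0; n < NODES; n++) {
		vec_new[n] = curvec;	/* kernel: cpumask_or() into the mask */
		if (++curvec == NUMVECS)
			curvec = 0;
	}

	printf("node  old-vector  new-vector\n");
	for (n = 0; n < NODES; n++)
		printf("%4d  %10d  %10d\n", n, vec_old[n], vec_new[n]);
	return 0;
}

With these counts the old loop leaves nodes 4-6 at -1 (their CPUs fall
back to hardware queue 0), while the new loop wraps them onto vectors
0-2, matching the round-robin spread described above.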