Message-Id: <1285629463-27699-2-git-send-email-nacc@us.ibm.com>
Date: Mon, 27 Sep 2010 16:17:42 -0700
From: Nishanth Aravamudan <nacc@...ibm.com>
To: nacc@...ibm.com
Cc: miltonm@....com, Thomas Gleixner <tglx@...utronix.de>,
Ian Campbell <ian.campbell@...rix.com>,
Peter Zijlstra <peterz@...radead.org>,
Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@...el.com>,
linux-kernel@...r.kernel.org
Subject: [RFC PATCH 1/2] IRQ: use cpu_possible_mask rather than online_mask in setup_affinity
The use of cpu_online_mask in setup_affinity() requires architecture code
to be hotplug-aware in order to round-robin IRQs correctly. With
user-driven dynamic SMT, offline CPUs are common even without physical
hotplug. Without this change and "pseries/xics: use cpu_possible_mask
rather than cpu_all_mask", all IRQs are routed to CPU0 on POWER machines
not running irqbalance.
Signed-off-by: Nishanth Aravamudan <nacc@...ibm.com>
---
I have boot-tested this on ppc64, but not yet on x86/x86_64. This is
generic code, so perhaps an audit of all .set_affinity implementations
should occur before upstream acceptance?
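
For illustration of what such an audit would be looking for, here is a
minimal sketch (not part of this patch, and not from the tree) of how a
.set_affinity handler might have to cope once desc->affinity can contain
possible-but-offline CPUs. my_chip_set_affinity() and
my_chip_route_irq_to_cpu() are hypothetical; only the cpumask API is real.

#include <linux/cpumask.h>
#include <linux/irq.h>

/* Hypothetical hardware hook: program the chip to deliver irq to cpu. */
static void my_chip_route_irq_to_cpu(unsigned int irq, unsigned int cpu);

static int my_chip_set_affinity(unsigned int irq, const struct cpumask *dest)
{
	unsigned int cpu;
	bool routed = false;

	/*
	 * With this patch, dest (desc->affinity) may include CPUs that
	 * are possible but currently offline, so filter against
	 * cpu_online_mask before touching the hardware.
	 */
	for_each_cpu(cpu, dest) {
		if (!cpu_online(cpu))
			continue;
		my_chip_route_irq_to_cpu(irq, cpu);
		routed = true;
	}

	/* Fall back to the first online CPU if nothing in dest is online. */
	if (!routed)
		my_chip_route_irq_to_cpu(irq, cpumask_first(cpu_online_mask));

	return 0;
}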
kernel/irq/manage.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index c3003e9..ef85b95 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -175,7 +175,7 @@ static int setup_affinity(unsigned int irq, struct irq_desc *desc)
 			desc->status &= ~IRQ_AFFINITY_SET;
 	}
 
-	cpumask_and(desc->affinity, cpu_online_mask, irq_default_affinity);
+	cpumask_and(desc->affinity, cpu_possible_mask, irq_default_affinity);
 set_affinity:
 	desc->chip->set_affinity(irq, desc->affinity);
 
--
1.7.0.4