Message-ID: <alpine.DEB.2.02.1402241432360.21251@ionos.tec.linutronix.de>
Date: Mon, 24 Feb 2014 15:24:55 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: James Hogan <james.hogan@...tec.com>
cc: LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
metag <linux-metag@...r.kernel.org>
Subject: Re: [patch 06/26] metag: Use irq_set_affinity instead of homebrewn
code
On Mon, 24 Feb 2014, James Hogan wrote:
> Hi Thomas,
>
> On 23/02/14 21:40, Thomas Gleixner wrote:
> > There is no point in having an incomplete copy of irq_set_affinity()
> > for the hotplug irq migration code.
>
> That sounds reasonable, but when I gave it a try I started getting
> warnings in the log after offlining one CPU and then the other:
>
> META213-Thread0 DSP [LogF] IRQ13 no longer affine to CPU1
> META213-Thread0 DSP [LogF] IRQ14 no longer affine to CPU1
> META213-Thread0 DSP [LogF] IRQ15 no longer affine to CPU1
> META213-Thread0 DSP [LogF] IRQ18 no longer affine to CPU1
> META213-Thread0 DSP [LogF] IRQ29 no longer affine to CPU1
> META213-Thread0 DSP [LogF] IRQ30 no longer affine to CPU1
> META213-Thread0 DSP [LogF] IRQ31 no longer affine to CPU1
>
> It appears that the irq affinities weren't getting modified previously,
> whereas now irq_do_set_affinity() does do cpumask_copy(data->affinity,
> mask). Once all CPUs have been offlined at least once you get those
> spurious messages even though the IRQ affinities haven't been explicitly
> limited by anything.
>
> I wonder if the stored affinity should really be altered in this case?
Delta patch below.
You need that irq-metag part in any case as the user space interface
does not filter out offline CPUs. Assume you offlined core 1 and the
user changes the affinity from 0xf to 0xe. Your selector will then
pick core 1, which is offline....
Btw, our handling of this is a bit awkward. Right now we let the user
do

   echo 0xf > /proc/irq/$N/smp_affinity

and reading that back will give you 0xf even though the kernel just
selects a subset or even a single target cpu, and one has to analyze
/proc/interrupts to find out which one.
We should add some mechanism to tell the user what's really going on.
But that's a separate issue.
Thanks,
tglx
Index: tip/arch/metag/kernel/irq.c
===================================================================
--- tip.orig/arch/metag/kernel/irq.c
+++ tip/arch/metag/kernel/irq.c
@@ -287,10 +287,8 @@ void migrate_irqs(void)
 					    i, cpu);
 
 				cpumask_setall(data->affinity);
-				newcpu = cpumask_any_and(data->affinity,
-							 cpu_online_mask);
 			}
-			irq_set_affinity(i, cpumask_of(newcpu));
+			irq_set_affinity(i, data->affinity);
 		}
 	}
 #endif /* CONFIG_HOTPLUG_CPU */
Index: tip/drivers/irqchip/irq-metag.c
===================================================================
--- tip.orig/drivers/irqchip/irq-metag.c
+++ tip/drivers/irqchip/irq-metag.c
@@ -201,7 +201,7 @@ static int metag_internal_irq_set_affini
 	 * one cpu (the interrupt code doesn't support it), so we just
 	 * pick the first cpu we find in 'cpumask'.
 	 */
-	cpu = cpumask_any(cpumask);
+	cpu = cpumask_any_and(cpumask, cpu_online_mask);
 	thread = cpu_2_hwthread_id[cpu];
 
 	metag_out32(TBI_TRIG_VEC(TBID_SIGNUM_TR1(thread)),
--