Message-ID: <530B47FD.80608@imgtec.com>
Date: Mon, 24 Feb 2014 13:24:13 +0000
From: James Hogan <james.hogan@...tec.com>
To: Thomas Gleixner <tglx@...utronix.de>
CC: LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
"Peter Zijlstra" <peterz@...radead.org>,
metag <linux-metag@...r.kernel.org>
Subject: Re: [patch 06/26] metag: Use irq_set_affinity instead of homebrewn
code
Hi Thomas,
On 23/02/14 21:40, Thomas Gleixner wrote:
> There is no point in having an incomplete copy of irq_set_affinity()
> for the hotplug irq migration code.
That sounds reasonable, but when I gave it a try I started getting
warnings in the log after offlining one CPU and then the other:
META213-Thread0 DSP [LogF] IRQ13 no longer affine to CPU1
META213-Thread0 DSP [LogF] IRQ14 no longer affine to CPU1
META213-Thread0 DSP [LogF] IRQ15 no longer affine to CPU1
META213-Thread0 DSP [LogF] IRQ18 no longer affine to CPU1
META213-Thread0 DSP [LogF] IRQ29 no longer affine to CPU1
META213-Thread0 DSP [LogF] IRQ30 no longer affine to CPU1
META213-Thread0 DSP [LogF] IRQ31 no longer affine to CPU1
It appears that the IRQ affinities weren't being modified previously,
whereas now irq_do_set_affinity() does cpumask_copy(data->affinity,
mask). Once every CPU has been offlined at least once, you get those
spurious messages even though the IRQ affinities were never explicitly
limited by anything.
I wonder if the stored affinity should really be altered in this case?
Cheers
James
>
> Use the core function instead and while at it switch to
> for_each_active_irq()
>
> Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
> Cc: James Hogan <james.hogan@...tec.com>
> Cc: metag <linux-metag@...r.kernel.org>
> ---
> arch/metag/kernel/irq.c | 20 +++-----------------
> 1 file changed, 3 insertions(+), 17 deletions(-)
>
> Index: tip/arch/metag/kernel/irq.c
> ===================================================================
> --- tip.orig/arch/metag/kernel/irq.c
> +++ tip/arch/metag/kernel/irq.c
> @@ -261,18 +261,6 @@ int __init arch_probe_nr_irqs(void)
> }
>
> #ifdef CONFIG_HOTPLUG_CPU
> -static void route_irq(struct irq_data *data, unsigned int irq, unsigned int cpu)
> -{
> - struct irq_desc *desc = irq_to_desc(irq);
> - struct irq_chip *chip = irq_data_get_irq_chip(data);
> - unsigned long flags;
> -
> - raw_spin_lock_irqsave(&desc->lock, flags);
> - if (chip->irq_set_affinity)
> - chip->irq_set_affinity(data, cpumask_of(cpu), false);
> - raw_spin_unlock_irqrestore(&desc->lock, flags);
> -}
> -
> /*
> * The CPU has been marked offline. Migrate IRQs off this CPU. If
> * the affinity settings do not allow other CPUs, force them onto any
> @@ -281,10 +269,9 @@ static void route_irq(struct irq_data *d
> void migrate_irqs(void)
> {
> unsigned int i, cpu = smp_processor_id();
> - struct irq_desc *desc;
>
> - for_each_irq_desc(i, desc) {
> - struct irq_data *data = irq_desc_get_irq_data(desc);
> + for_each_active_irq(i) {
> + struct irq_data *data = irq_get_irq_data(i);
> unsigned int newcpu;
>
> if (irqd_is_per_cpu(data))
> @@ -303,8 +290,7 @@ void migrate_irqs(void)
> newcpu = cpumask_any_and(data->affinity,
> cpu_online_mask);
> }
> -
> - route_irq(data, i, newcpu);
> + irq_set_affinity(i, cpumask_of(newcpu));
> }
> }
> #endif /* CONFIG_HOTPLUG_CPU */
>
>