Message-ID: <20111214183446.GB12703@mudshark.cambridge.arm.com>
Date: Wed, 14 Dec 2011 18:34:47 +0000
From: Will Deacon <will.deacon@....com>
To: tglx@...utronix.de
Cc: linux-kernel@...r.kernel.org
Subject: IRQ migration on CPU offline path
Hi Thomas,
I've been looking at the IRQ migration code on x86 (fixup_irqs) for the CPU
hotplug path, in order to try to fix a bug we have on ARM where the
desc->affinity mask doesn't get updated. Compared to irq_set_affinity, the
code is pretty wacky (I guess because it's called via stop_machine), so I
wondered if you could help me understand a few points:
(1) Affinity notifiers - we seem to ignore these, and I guess they don't
expect to be invoked from this context. That could leave the cpu_rmap
stuff stale, but it isn't used for much. Do we just depend on people
having hotplug notifiers to deal with this? (sketch 1 below)
(2) Threaded handlers - I can't see where we set IRQTF_AFFINITY in
irqaction->thread_flags for handlers whose threads are migrated by the
scheduler when a CPU goes down. Is that required? (sketch 2 below)
(3) On x86, we rely on the irq_chip updating the desc->affinity mask in
->irq_set_affinity. It seems like we could use the IRQ_SET_MASK_OK{,_NOCOPY}
return values for this and, in the case of the ioapic, return
IRQ_SET_MASK_OK_NOCOPY (removing a redundant copy from the usual affinity
path). (sketch 3 below)
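To make these concrete, here are rough sketches of the hunks I have in
mind. They're paraphrased from memory rather than copied from the tree,
so treat the details as approximate.

Sketch 1: the notifier kick at the end of the normal affinity-setting
path, which I can't see an equivalent of in fixup_irqs():

    /*
     * Paraphrased from the tail of the irq_set_affinity path in
     * kernel/irq/manage.c: once the chip callback has succeeded,
     * any registered notifier gets kicked via its work item; this
     * is what keeps cpu_rmap up to date.
     */
    if (desc->affinity_notify) {
            kref_get(&desc->affinity_notify->kref);
            schedule_work(&desc->affinity_notify->work);
    }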
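Sketch 2: the threaded-handler handshake as I understand it. The setter
side marks each action, and the irq thread later notices the flag and
moves itself; what I can't find is anywhere the flag gets set on the
CPU offline path:

    /*
     * Paraphrased from kernel/irq/manage.c: mark every threaded
     * action so that its handler thread re-evaluates its affinity
     * the next time it runs.
     */
    static void irq_set_thread_affinity(struct irq_desc *desc)
    {
            struct irqaction *action = desc->action;

            while (action) {
                    if (action->thread)
                            set_bit(IRQTF_AFFINITY,
                                    &action->thread_flags);
                    action = action->next;
            }
    }

    /*
     * The thread side then tests and clears IRQTF_AFFINITY, copies
     * desc->irq_data.affinity under the desc lock and calls
     * set_cpus_allowed_ptr() on itself.
     */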
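Sketch 3: the ->irq_set_affinity return-value contract I'd like to lean
on for (3). Again paraphrased, so the exact shape may be off:

    /*
     * From the core affinity-setting code: IRQ_SET_MASK_OK means
     * the core copies the new mask into the irq_data for us,
     * whereas IRQ_SET_MASK_OK_NOCOPY means the chip has already
     * written it itself.
     */
    ret = chip->irq_set_affinity(data, mask, false);
    switch (ret) {
    case IRQ_SET_MASK_OK:
            cpumask_copy(data->affinity, mask);
            /* fall through */
    case IRQ_SET_MASK_OK_NOCOPY:
            irq_set_thread_affinity(desc);
            ret = 0;
    }

If the ioapic's ->irq_set_affinity writes the mask itself anyway, then
returning IRQ_SET_MASK_OK_NOCOPY from it would let the cpumask_copy()
above be skipped: that's the redundant copy I mentioned.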
Of course, I could just be completely confused, which is why I haven't started
hacking code just yet :)
Cheers, and sorry for the barrage of questions!
Will