Message-ID: <f6f6e031-8b79-439d-c2af-8d3e76f30710@huawei.com>
Date: Tue, 29 Jan 2019 11:25:48 +0000
From: John Garry <john.garry@...wei.com>
To: <tglx@...utronix.de>, Christoph Hellwig <hch@....de>
CC: Marc Zyngier <marc.zyngier@....com>,
"axboe@...nel.dk" <axboe@...nel.dk>,
Keith Busch <keith.busch@...el.com>,
Peter Zijlstra <peterz@...radead.org>,
Michael Ellerman <mpe@...erman.id.au>,
Linuxarm <linuxarm@...wei.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Hannes Reinecke" <hare@...e.com>
Subject: Question on handling managed IRQs when hotplugging CPUs
Hi,
I have a question on $subject which I hope you can shed some light on.
According to commit c5cb83bb337c25 ("genirq/cpuhotplug: Handle managed
IRQs on CPU hotplug"), if we offline the last CPU in a managed IRQ
affinity mask, the IRQ is shut down.
The reasoning is that this IRQ is thought to be associated with a
specific queue on an MQ (multi-queue) device, and the CPUs in the IRQ
affinity mask are the same CPUs associated with the queue. So, if no
CPU is using the queue, then there is no need for the IRQ.
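For context, drivers typically end up with this queue/IRQ mapping by
allocating their queue vectors as managed, along these lines (a
minimal sketch; the function name and vector counts here are
hypothetical):

#include <linux/pci.h>
#include <linux/interrupt.h>

static int example_setup_queue_irqs(struct pci_dev *pdev, int nr_queues)
{
	struct irq_affinity affd = {
		.pre_vectors = 0,	/* no non-queue (e.g. admin) vectors */
	};

	/*
	 * PCI_IRQ_AFFINITY asks the core to spread the vectors over
	 * the CPUs and mark them as managed, so each queue's IRQ
	 * affinity mask matches the CPUs mapped to that queue.
	 */
	return pci_alloc_irq_vectors_affinity(pdev, 1, nr_queues,
					      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					      &affd);
}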
However, how does this handle the scenario of the last CPU in the IRQ
affinity mask being offlined while IO associated with the queue is
still in flight?
Or what if we decide to use the queue associated with the current CPU,
and then that CPU (being the last online CPU in the queue's IRQ
affinity mask) goes offline while we finish the delivery from another
CPU?
In these cases, when the IO completes, the completion interrupt would
never be serviced and the IO would time out.
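Concretely, the sequence I am worried about looks like this (the CPU
and queue numbers are just examples):

  1. IO is submitted on hctx 2 from CPU3, the last online CPU in that
     queue's managed IRQ affinity mask; the request is now in flight.
  2. CPU3 is offlined; migrate_one_irq() finds no online CPU left in
     the mask and shuts the IRQ down.
  3. The device raises the completion interrupt for the in-flight IO,
     but the vector is shut down, so the handler never runs.
  4. blk-mq's request timeout eventually fires and the IO fails.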
I have actually tried this on my arm64 system and I see IO timeouts.
Thanks in advance,
John