Message-ID: <3fe63dab-0791-f476-69c4-9866b70e8520@huawei.com>
Date: Wed, 30 Jan 2019 10:38:18 +0000
From: John Garry <john.garry@...wei.com>
To: Keith Busch <keith.busch@...el.com>
CC: "tglx@...utronix.de" <tglx@...utronix.de>,
Christoph Hellwig <hch@....de>,
Marc Zyngier <marc.zyngier@....com>,
"axboe@...nel.dk" <axboe@...nel.dk>,
Peter Zijlstra <peterz@...radead.org>,
Michael Ellerman <mpe@...erman.id.au>,
Linuxarm <linuxarm@...wei.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Hannes Reinecke <hare@...e.com>
Subject: Re: Question on handling managed IRQs when hotplugging CPUs
On 29/01/2019 17:20, Keith Busch wrote:
> On Tue, Jan 29, 2019 at 05:12:40PM +0000, John Garry wrote:
>> On 29/01/2019 15:44, Keith Busch wrote:
>>>
>>> Hm, we used to freeze the queues with CPUHP_BLK_MQ_PREPARE callback,
>>> which would reap all outstanding commands before the CPU and IRQ are
>>> taken offline. That was removed with commit 4b855ad37194f ("blk-mq:
>>> Create hctx for each present CPU"). It sounds like we should bring
>>> something like that back, but make it more fine-grained to the per-cpu context.
>>>
>>
>> Seems reasonable. But we would need it to deal with drivers which only
>> expose a single queue to blk-mq but use many queues internally. I think
>> megaraid sas does this, for example.
>>
>> I would also be slightly concerned with commands being issued by the
>> driver without blk-mq's knowledge, like SCSI TMFs.
>
> I don't think either of those descriptions sound like good candidates
> for using managed IRQ affinities.
I wouldn't say that this behaviour is obvious to the driver developer. I
can't see it mentioned anywhere in Documentation/PCI/MSI-HOWTO.txt.
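To be clear, by managed IRQs I mean the spread which a driver asks for
with something like the below (this is the allocation side which
MSI-HOWTO does cover; pdev and nr_queues are just placeholders):

	int nvec;

	nvec = pci_alloc_irq_vectors(pdev, 1, nr_queues,
				     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);

Nothing in that document warns that vectors spread this way are shut
down once the last CPU in their affinity mask goes offline, possibly
with commands still in flight.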
It also seems that this policy of relying on the upper layer to flush and
freeze the queues would cause issues if managed IRQs are used by drivers
in other subsystems. Network controllers may have multiple queues and
unsolicited interrupts.
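If we did bring back something like the old CPUHP_BLK_MQ_PREPARE
behaviour, but per-cpu as you suggest, I imagine it would look roughly
like the following (completely untested; the foo_* structs and helpers
are made up just to show the shape):

	#include <linux/cpuhotplug.h>

	static enum cpuhp_state foo_cpuhp_state;

	/* Teardown runs on the outgoing CPU while it is still online,
	 * i.e. before the managed IRQ for this context is shut down. */
	static int foo_queue_cpu_offline(unsigned int cpu, struct hlist_node *node)
	{
		struct foo_queue *fq = hlist_entry(node, struct foo_queue,
						   cpuhp_node);

		/* Stop dispatching new commands to this per-cpu context ... */
		foo_queue_quiesce(fq, cpu);
		/* ... and reap whatever is still outstanding. */
		foo_queue_drain(fq, cpu);
		return 0;
	}

	/* at probe time */
	int ret;

	ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "foo/queue:offline",
				      NULL, foo_queue_cpu_offline);
	if (ret < 0)
		return ret;
	foo_cpuhp_state = ret;

	/* and once per queue */
	cpuhp_state_add_instance_nocalls(foo_cpuhp_state, &fq->cpuhp_node);

Even then, it wouldn't cover the single-queue drivers above, or commands
like SCSI TMFs which blk-mq never sees.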
Thanks,
John