[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <d4a04c13-b7db-84df-b1d3-90022905c084@huawei.com>
Date: Tue, 5 Feb 2019 15:09:28 +0000
From: John Garry <john.garry@...wei.com>
To: Keith Busch <keith.busch@...el.com>
CC: Hannes Reinecke <hare@...e.de>,
Thomas Gleixner <tglx@...utronix.de>,
Christoph Hellwig <hch@....de>,
Marc Zyngier <marc.zyngier@....com>,
"axboe@...nel.dk" <axboe@...nel.dk>,
Peter Zijlstra <peterz@...radead.org>,
Michael Ellerman <mpe@...erman.id.au>,
Linuxarm <linuxarm@...wei.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Hannes Reinecke" <hare@...e.com>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>
Subject: Re: Question on handling managed IRQs when hotplugging CPUs
On 05/02/2019 14:52, Keith Busch wrote:
> On Tue, Feb 05, 2019 at 05:24:11AM -0800, John Garry wrote:
>> On 04/02/2019 07:12, Hannes Reinecke wrote:
>>
>> Hi Hannes,
>>
>>>
>>> So, as the user then has to wait for the system to declars 'ready for
>>> CPU remove', why can't we just disable the SQ and wait for all I/O to
>>> complete?
>>> We can make it more fine-grained by just waiting on all outstanding I/O
>>> on that SQ to complete, but waiting for all I/O should be good as an
>>> initial try.
>>> With that we wouldn't need to fiddle with driver internals, and could
>>> make it pretty generic.
>>
>> I don't fully understand this idea - specifically, at which layer would
>> we be waiting for all the IO to complete?
>
> Whichever layer dispatched the IO to a CPU specific context should
> be the one to wait for its completion. That should be blk-mq for most
> block drivers.
For SCSI devices, unfortunately not all IO sent to the HW originates
from blk-mq or any other single entity.
Thanks,
John
>
> .
>
Powered by blists - more mailing lists