Message-ID: <0de6dae8-1234-3e3f-d8f3-2d8de47b7f9e@suse.com>
Date: Tue, 5 Feb 2019 16:10:47 +0100
From: Hannes Reinecke <hare@...e.com>
To: Keith Busch <keith.busch@...el.com>,
John Garry <john.garry@...wei.com>
Cc: Hannes Reinecke <hare@...e.de>,
Thomas Gleixner <tglx@...utronix.de>,
Christoph Hellwig <hch@....de>,
Marc Zyngier <marc.zyngier@....com>,
"axboe@...nel.dk" <axboe@...nel.dk>,
Peter Zijlstra <peterz@...radead.org>,
Michael Ellerman <mpe@...erman.id.au>,
Linuxarm <linuxarm@...wei.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>
Subject: Re: Question on handling managed IRQs when hotplugging CPUs
On 2/5/19 3:52 PM, Keith Busch wrote:
> On Tue, Feb 05, 2019 at 05:24:11AM -0800, John Garry wrote:
>> On 04/02/2019 07:12, Hannes Reinecke wrote:
>>
>> Hi Hannes,
>>
>>>
>>> So, as the user then has to wait for the system to declare 'ready for
>>> CPU remove', why can't we just disable the SQ and wait for all I/O to
>>> complete?
>>> We can make it more fine-grained by just waiting on all outstanding I/O
>>> on that SQ to complete, but waiting for all I/O should be good as an
>>> initial try.
>>> With that we wouldn't need to fiddle with driver internals, and could
>>> make it pretty generic.
>>
>> I don't fully understand this idea - specifically, at which layer would
>> we be waiting for all the IO to complete?
>
> Whichever layer dispatched the IO to a CPU specific context should
> be the one to wait for its completion. That should be blk-mq for most
> block drivers.
>
Indeed.
But we don't provide any mechanisms for that ATM, right?
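Something like the sketch below is what I would imagine blk-mq growing.
To be clear, this is purely hypothetical: BLK_MQ_S_INACTIVE, the
blk_mq_hctx_has_requests() helper, and the hotplug callback itself do
not exist today. The idea is a CPU hotplug teardown state that runs
before the outgoing CPU's managed IRQ is shut down, fences off the
hctx, and drains it:

/*
 * Purely a sketch, not an existing interface: BLK_MQ_S_INACTIVE,
 * blk_mq_hctx_has_requests(), and the hotplug state this would hang
 * off are all made up for illustration.
 */
#include <linux/blk-mq.h>
#include <linux/cpumask.h>
#include <linux/delay.h>

static int blk_mq_hctx_notify_offline(unsigned int cpu,
				      struct hlist_node *node)
{
	struct blk_mq_hw_ctx *hctx =
		hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_dead);

	/* Skip unless @cpu is the last online CPU mapped to this hctx. */
	if (!cpumask_test_cpu(cpu, hctx->cpumask) ||
	    cpumask_first_and(hctx->cpumask, cpu_online_mask) != cpu ||
	    cpumask_next_and(cpu, hctx->cpumask, cpu_online_mask) <
			nr_cpu_ids)
		return 0;

	/* Fence off new submissions on this hctx (hypothetical flag). */
	set_bit(BLK_MQ_S_INACTIVE, &hctx->state);

	/*
	 * Drain: the managed IRQ is still routed to @cpu at this point,
	 * so completions for everything in flight can still arrive.
	 */
	while (blk_mq_hctx_has_requests(hctx))	/* hypothetical helper */
		msleep(5);

	return 0;
}

The ordering is the crucial bit: the drain has to finish while the
managed IRQ is still routed to the outgoing CPU, otherwise the
completions for the in-flight requests have nowhere to land.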
Maybe this would be a topic fit for LSF/MM?
Cheers,
Hannes
--
Dr. Hannes Reinecke            zSeries & Storage
hare@...e.com                  +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)