Message-ID: <20190129172059.GC17132@localhost.localdomain>
Date: Tue, 29 Jan 2019 10:20:59 -0700
From: Keith Busch <keith.busch@...el.com>
To: John Garry <john.garry@...wei.com>
Cc: "tglx@...utronix.de" <tglx@...utronix.de>,
Christoph Hellwig <hch@....de>,
Marc Zyngier <marc.zyngier@....com>,
"axboe@...nel.dk" <axboe@...nel.dk>,
Peter Zijlstra <peterz@...radead.org>,
Michael Ellerman <mpe@...erman.id.au>,
Linuxarm <linuxarm@...wei.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Hannes Reinecke <hare@...e.com>
Subject: Re: Question on handling managed IRQs when hotplugging CPUs
On Tue, Jan 29, 2019 at 05:12:40PM +0000, John Garry wrote:
> On 29/01/2019 15:44, Keith Busch wrote:
> >
> > Hm, we used to freeze the queues with the CPUHP_BLK_MQ_PREPARE callback,
> > which would reap all outstanding commands before the CPU and IRQ were
> > taken offline. That was removed with commit 4b855ad37194f ("blk-mq:
> > Create hctx for each present CPU"). It sounds like we should bring
> > something like that back, but made more fine-grained, down to the per-cpu context.
> >
>
> Seems reasonable. But we would need it to deal with drivers that expose
> only a single queue to blk-mq but use many queues internally. I think
> megaraid sas does this, for example.
>
> I would also be slightly concerned about commands issued by the driver
> that blk-mq does not know about, such as SCSI TMFs.
I don't think either of those cases sounds like a good candidate for
using managed IRQ affinities.
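
For reference, here is a minimal sketch of the kind of per-cpu drain hook
being discussed, using the generic multi-instance hotplug API rather than
the removed CPUHP_BLK_MQ_PREPARE state. The wrapper struct, the
"driver/queue:online" state name and drain_my_hw_queue() are hypothetical
stand-ins for driver or blk-mq internals, and exactly where in the hotplug
sequence such a state would need to sit relative to managed-IRQ shutdown
is the open question in this thread:

#include <linux/cpuhotplug.h>
#include <linux/cpumask.h>
#include <linux/kernel.h>
#include <linux/types.h>

struct my_hw_queue {
	struct hlist_node	cpuhp_node;	/* instance hooked into the hotplug state */
	const struct cpumask	*cpumask;	/* CPUs this queue's managed IRQ is spread over */
};

static enum cpuhp_state my_hp_state;

/* Hypothetical: quiesce @q and wait for all outstanding commands. */
static void drain_my_hw_queue(struct my_hw_queue *q)
{
}

static int my_queue_cpu_offline(unsigned int cpu, struct hlist_node *node)
{
	struct my_hw_queue *q = container_of(node, struct my_hw_queue,
					     cpuhp_node);
	unsigned int other;

	if (!cpumask_test_cpu(cpu, q->cpumask))
		return 0;

	/*
	 * If another CPU mapped to this queue is still online, it keeps
	 * servicing the queue and nothing needs to be drained.
	 */
	for_each_cpu_and(other, q->cpumask, cpu_online_mask)
		if (other != cpu)
			return 0;

	/* Last mapped CPU is going away: reap outstanding commands now. */
	drain_my_hw_queue(q);
	return 0;
}

/* Called once at driver init. */
static int my_queue_hotplug_setup(void)
{
	int ret;

	ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
				      "driver/queue:online",
				      NULL, my_queue_cpu_offline);
	if (ret < 0)
		return ret;
	my_hp_state = ret;
	return 0;
}

/* Called for each hardware queue once its IRQ affinity is known. */
static int my_queue_hotplug_add(struct my_hw_queue *q)
{
	return cpuhp_state_add_instance_nocalls(my_hp_state, &q->cpuhp_node);
}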