Message-ID: <alpine.DEB.2.21.1901301338170.5537@nanos.tec.linutronix.de>
Date:   Wed, 30 Jan 2019 13:43:32 +0100 (CET)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     John Garry <john.garry@...wei.com>
cc:     Keith Busch <keith.busch@...el.com>,
        Christoph Hellwig <hch@....de>,
        Marc Zyngier <marc.zyngier@....com>,
        "axboe@...nel.dk" <axboe@...nel.dk>,
        Peter Zijlstra <peterz@...radead.org>,
        Michael Ellerman <mpe@...erman.id.au>,
        Linuxarm <linuxarm@...wei.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Hannes Reinecke <hare@...e.com>
Subject: Re: Question on handling managed IRQs when hotplugging CPUs

On Wed, 30 Jan 2019, John Garry wrote:
> On 29/01/2019 17:20, Keith Busch wrote:
> > On Tue, Jan 29, 2019 at 05:12:40PM +0000, John Garry wrote:
> > > On 29/01/2019 15:44, Keith Busch wrote:
> > > > 
> > > > Hm, we used to freeze the queues with the CPUHP_BLK_MQ_PREPARE callback,
> > > > which would reap all outstanding commands before the CPU and IRQ were
> > > > taken offline. That was removed with commit 4b855ad37194f ("blk-mq:
> > > > Create hctx for each present CPU"). It sounds like we should bring
> > > > something like that back, but make it more fine grained, down to the
> > > > per-cpu context.
> > > > 
> > > 
> > > Seems reasonable. But we would need it to deal with drivers which only
> > > expose a single queue to BLK MQ but use many queues internally. I think
> > > megaraid sas does this, for example.
> > > 
> > > I would also be slightly concerned about commands issued by the driver
> > > that are unknown to blk mq, like SCSI TMFs.
> > 
> > I don't think either of those descriptions sounds like a good candidate
> > for using managed IRQ affinities.
> 
> I wouldn't say that this behaviour is obvious to the developer. I can't see
> anything about it in Documentation/PCI/MSI-HOWTO.txt.
> 
> It also seems that this policy of relying on the upper layer to flush and
> freeze queues would cause issues if managed IRQs are used by drivers in
> other subsystems. Network controllers may have multiple queues and
> unsolicited interrupts.
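For reference, the CPUHP_BLK_MQ_PREPARE mechanism Keith mentions above sat on
top of the CPU hotplug state machine. Below is only a minimal sketch of how a
driver can register such an online/offline callback pair today with
cpuhp_setup_state(); the my_drv_* functions are hypothetical placeholders for
the driver's own per-CPU setup and teardown, not blk-mq or any existing driver
code:

#include <linux/cpuhotplug.h>

static int my_drv_cpu_online(unsigned int cpu)
{
	/* (Re)enable submissions for the queues mapped to this CPU. */
	return 0;
}

static int my_drv_cpu_offline(unsigned int cpu)
{
	/* Quiesce and drain the queues mapped to this CPU before its
	 * managed IRQ is shut down. */
	return 0;
}

static int my_drv_register_hotplug(void)
{
	int ret;

	/* CPUHP_AP_ONLINE_DYN allocates a dynamic hotplug state; the old
	 * blk-mq code used its own fixed state instead. */
	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mydrv:online",
				my_drv_cpu_online, my_drv_cpu_offline);
	return ret < 0 ? ret : 0;
}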

It doesn't matter which part manages the flush/freeze of the queues as long
as something (either common subsystem code, the upper layers or the driver
itself) does it.

So for the megaraid SAS example the BLK MQ layer obviously can't do
anything because it only sees a single request queue. But the driver could,
if the hardware supports it, tell the device to stop queueing completions
on the completion queue which is associated with a particular CPU (or set
of CPUs) during offline and then wait for the in-flight stuff to be
finished. If the hardware does not allow that, then managed interrupts
can't work for it.
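As a rough illustration of that idea, a driver-side offline hook could look
like the sketch below. struct my_cq, my_drv_cq_for_cpu() and
my_drv_quiesce_sq_for() are hypothetical placeholders for whatever the
hardware actually offers; nothing here is an existing API:

#include <linux/atomic.h>
#include <linux/delay.h>

/* Hypothetical per-completion-queue state. */
struct my_cq {
	atomic_t inflight;	/* commands submitted but not yet completed */
};

static struct my_cq *my_drv_cq_for_cpu(unsigned int cpu);
static void my_drv_quiesce_sq_for(struct my_cq *cq);	/* stop new submissions */

static int my_drv_cpu_offline(unsigned int cpu)
{
	struct my_cq *cq = my_drv_cq_for_cpu(cpu);

	/* Stop feeding the device with commands that would complete on
	 * the completion queue associated with this CPU. */
	my_drv_quiesce_sq_for(cq);

	/* Wait for everything already in flight to come back while the
	 * managed IRQ is still alive, so no completion is lost. */
	while (atomic_read(&cq->inflight))
		msleep(1);

	/* The completion queue is now idle; the CPU and its managed IRQ
	 * can be taken down safely. */
	return 0;
}

If the hardware has no way to quiesce and drain a particular completion
queue, the wait above can never be made reliable, which is the point made
above about managed interrupts not being an option for such a device.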

Thanks,

	tglx
