Date:   Tue, 29 Jan 2019 13:01:45 +0100 (CET)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Hannes Reinecke <hare@...e.com>
cc:     John Garry <john.garry@...wei.com>, Christoph Hellwig <hch@....de>,
        Marc Zyngier <marc.zyngier@....com>,
        "axboe@...nel.dk" <axboe@...nel.dk>,
        Keith Busch <keith.busch@...el.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Michael Ellerman <mpe@...erman.id.au>,
        Linuxarm <linuxarm@...wei.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        SCSI Mailing List <linux-scsi@...r.kernel.org>
Subject: Re: Question on handling managed IRQs when hotplugging CPUs

On Tue, 29 Jan 2019, Hannes Reinecke wrote:
> That actually is a very good question, and I have been wondering about this
> for quite some time.
> 
> I find it a bit hard to envision a scenario where the IRQ affinity is
> automatically (and, more importantly, atomically!) re-routed to one of the
> other CPUs.
> And even if it were, chances are that there are checks in the driver
> _preventing_ them from handling those requests, seeing that they should have
> been handled by another CPU ...
> 
> I guess the safest bet is to implement a 'cleanup' worker queue which is
> responsible for looking through all the outstanding commands (on all hardware
> queues), and then complete those for which no corresponding CPU / irqhandler
> can be found.
> 
> But I defer to the higher authorities here; maybe I'm totally wrong and it's
> already been taken care of.

TBH, I don't know. I was merely involved in the genirq side of this. But
yes, in order to make this work correctly, the basic contract for the CPU
hotplug case must be:

If the last CPU which is associated to a queue (and the corresponding
interrupt) goes offline, then the subsystem/driver code has to make sure
that:

   1) No more requests can be queued on that queue

   2) All outstanding requests of that queue have been completed or
      redirected (don't know if that's possible at all) to some other
      queue.

That has to be done in that order obviously. Whether any of the
subsystems/drivers actually implements this, I can't tell.

Thanks,

	tglx
