Message-ID: <alpine.DEB.2.21.1903250948490.1798@nanos.tec.linutronix.de>
Date:   Mon, 25 Mar 2019 09:53:28 +0100 (CET)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Ming Lei <ming.lei@...hat.com>
cc:     Peter Xu <peterx@...hat.com>, Christoph Hellwig <hch@....de>,
        Jason Wang <jasowang@...hat.com>,
        Luiz Capitulino <lcapitulino@...hat.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "Michael S. Tsirkin" <mst@...hat.com>, minlei@...hat.com
Subject: Re: Virtio-scsi multiqueue irq affinity

Ming,

On Mon, 25 Mar 2019, Ming Lei wrote:
> On Mon, Mar 25, 2019 at 01:02:13PM +0800, Peter Xu wrote:
> > One thing I can think of is the real-time scenario where "isolcpus="
> > is provided, then logically we should not allow any isolated CPUs to
> > be bound to any of the multi-queue IRQs.  Though Ming Lei and I had a
> 
> So far, this behaviour is made by user-space.
> 
> From my understanding, the IRQ subsystem doesn't handle "isolcpus=",
> and the Kconfig help doesn't mention any effect on irq affinity either:
> 
>           Make sure that CPUs running critical tasks are not disturbed by
>           any source of "noise" such as unbound workqueues, timers, kthreads...
>           Unbound jobs get offloaded to housekeeping CPUs. This is driven by
>           the "isolcpus=" boot parameter.

isolcpus has no effect on interrupts. That's what 'irqaffinity=' is for.
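
To isolate CPUs from device interrupts as well, both parameters can be
combined on the kernel command line. A sketch (CPU numbers purely
illustrative):

    # keep CPUs 2-3 free of unbound work, route IRQs to CPUs 0-1 by default
    isolcpus=2,3 irqaffinity=0,1

irqaffinity= sets the default affinity mask, which (non-managed)
interrupts use unless user space overrides them.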

> Yeah, some RT applications exclude the 'isolcpus=' CPUs from an IRQ's
> affinity via the /proc/irq interface, and that is no longer possible
> for managed IRQs.
> 
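
For illustration, this is what that looks like from user space now (IRQ
number hypothetical; managed interrupts reject affinity writes from
/proc):

    # writing to a managed interrupt's affinity fails
    $ echo 4 > /proc/irq/44/smp_affinity
    echo: write error: Input/output error
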
> > discussion offlist before and Ming explained to me that as long as the
> > isolated CPUs do not generate any IO then there will be no IRQ on
> > those isolated (real-time) CPUs at all.  Can we guarantee that?  Now
> 
> It is only guaranteed for 1:1 mapping.
> 
> blk-mq uses managed IRQ's affinity to setup queue mapping, for example:
> 
> 1) single hardware queue
> - this queue's IRQ affinity includes all CPUs, so the hardware queue's
> IRQ fires on only one specific CPU, no matter which CPU submitted the IO

Right. We can special case that for the single HW queue to honor the
default affinity setting. That's not hard to achieve.
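
The default mask is visible in /proc/irq/default_smp_affinity and is
what 'irqaffinity=' controls. A sketch of the intended outcome (4 CPUs,
CPUs 2-3 isolated, values illustrative):

    $ cat /proc/irq/default_smp_affinity
    3

i.e. the single queue's interrupt would then be spread over CPUs 0-1
only, instead of over all possible CPUs.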
 
> 2) multi hardware queue
> - there are N hardware queues
> - for each hardware queue i (i < N), its IRQ's affinity may include N(i)
> CPUs; the IRQ for hardware queue i then fires on one specific CPU among
> those N(i).

Correct, and that's the sane case where it does not matter much: if a
task on an isolated CPU does I/O, then redirecting it through some other
CPU does not make sense. If it doesn't do I/O, it won't be affected by
the dormant queue.
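
IOW, with e.g. 8 CPUs and 4 queues the spreading ends up as something
like (illustrative):

    queue 0 -> CPUs 0-1
    queue 1 -> CPUs 2-3
    queue 2 -> CPUs 4-5
    queue 3 -> CPUs 6-7

If CPU 7 is isolated, the queue 3 interrupt fires only when a task on
CPU 6 or 7 actually submits I/O.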

Thanks,

	tglx
