Date:   Mon, 25 Mar 2019 14:27:23 +0100 (CET)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Peter Xu <peterx@...hat.com>
cc:     Ming Lei <ming.lei@...hat.com>, Christoph Hellwig <hch@....de>,
        Jason Wang <jasowang@...hat.com>,
        Luiz Capitulino <lcapitulino@...hat.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "Michael S. Tsirkin" <mst@...hat.com>, minlei@...hat.com
Subject: Re: Virtio-scsi multiqueue irq affinity

Peter,

On Mon, 25 Mar 2019, Peter Xu wrote:
> Now I understand it can be guaranteed, so it should not break the
> determinism of the real-time applications.  But again, I'm curious
> whether we can specify how to spread the hardware queues of a block
> controller (as I asked in my previous post) instead of using the
> default policy (which spreads the queues across all the cores).
> This time I'll try to give a detailed example: let's assume we have
> a host with 2 nodes and 8 cores (Node 0 with CPUs 0-3, Node 1 with
> CPUs 4-7), and a SCSI controller with 4 queues.  We want to dedicate
> the 2nd node to the real-time applications, so we boot with
> isolcpus=4-7.  By default, IIUC the hardware queues will be
> allocated like this:
> 
>   - queue 1: CPU 0,1
>   - queue 2: CPU 2,3
>   - queue 3: CPU 4,5
>   - queue 4: CPU 6,7
> 
> And each queue's IRQ will be bound to the same cpuset that the
> queue itself is bound to.
> 
> So my previous question is: since we know that CPUs 4-7 won't
> generate any IO after all (and they shouldn't), could we configure
> the system somehow to reflect a mapping like the one below?
> 
>   - queue 1: CPU 0
>   - queue 2: CPU 1
>   - queue 3: CPU 2
>   - queue 4: CPU 3
> 
> Then we would disallow CPUs 4-7 from generating IO and return a
> failure if they try to.
> 
> Again, I'm pretty uncertain whether this case is anything close to
> useful...  It just came out of pure curiosity.  I think it at least
> has some benefits: we guarantee that the real-time CPUs won't send
> block IO requests (which is good, because such requests could break
> real-time determinism), and we save two queues from being totally
> idle (so if we run non-real-time block applications on cores 0-3 we
> still get the throughput of 4 hardware queues rather than 2).

If that _IS_ useful, then the affinity spreading logic can be changed to
accommodate it. It's not really hard to do, but we'd need a proper
use case as justification.
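
For illustration, here is a minimal user-space C sketch of the two
spreading policies discussed above: the default spread over all CPUs
versus a spread restricted to the housekeeping CPUs. The 8-CPU /
4-queue topology and the isolcpus=4-7 housekeeping mask are assumptions
taken from the quoted example; this is a simplified model of chunked
spreading, not the kernel's actual NUMA-aware
irq_create_affinity_masks() logic.

#include <stdio.h>

#define NR_CPUS   8
#define NR_QUEUES 4

/*
 * Assign the CPUs in cpus[0..n-1] to NR_QUEUES queues in consecutive
 * chunks, the way the example in the quote lays them out.
 */
static void spread(const int *cpus, int n)
{
	int per_queue = (n + NR_QUEUES - 1) / NR_QUEUES;
	int q, i;

	for (q = 0; q < NR_QUEUES; q++) {
		printf("queue %d: CPU", q + 1);
		for (i = q * per_queue; i < (q + 1) * per_queue && i < n; i++)
			printf(" %d", cpus[i]);
		printf("\n");
	}
}

int main(void)
{
	int all[NR_CPUS] = { 0, 1, 2, 3, 4, 5, 6, 7 };
	int housekeeping[] = { 0, 1, 2, 3 };	/* isolcpus=4-7 */

	printf("default spread (all CPUs):\n");
	spread(all, NR_CPUS);

	printf("\nrestricted spread (housekeeping CPUs only):\n");
	spread(housekeeping, 4);
	return 0;
}

With the restricted mask, each of the 4 queues still gets a
(single-CPU) mapping, matching the queue-per-CPU table proposed in the
quote, while the default run reproduces the two-CPUs-per-queue layout.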

Thanks,

	tglx
