Date: Sun, 12 May 2024 08:35:55 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Keith Busch <kbusch@...nel.org>, Ming Lei <ming.lei@...hat.com>
Cc: Christoph Hellwig <hch@....de>, Keith Busch <kbusch@...a.com>,
 linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] nvme-pci: allow unmanaged interrupts

On Fri, May 10 2024 at 18:41, Keith Busch wrote:
> On Sat, May 11, 2024 at 07:50:21AM +0800, Ming Lei wrote:
>> Can you explain a bit why it is a no-op? If only isolated CPUs are
>> spread on one queue, there will be no IO originated from these isolated
>> CPUs, that is exactly what the isolation needs.
>
> The "isolcpus=managed_irq," option doesn't limit the dispatching CPUs.
> It only limits where the managed irq will assign its effective_cpus, as a
> best effort.
>
> Example: I boot a system with 4 threads, one nvme device, and the
> kernel parameter:
>
>   isolcpus=managed_irq,2-3
>
> Run this:
>
>   for i in $(seq 0 3); do taskset -c $i dd if=/dev/nvme0n1 of=/dev/null bs=4k count=1000 iflag=direct; done
>
> Check /proc/interrupts | grep nvme0:
>
>            CPU0       CPU1       CPU2       CPU3
> ..
>  26:       1000          0          0          0  PCI-MSIX-0000:00:05.0   1-edge      nvme0q1
>  27:          0       1004          0          0  PCI-MSIX-0000:00:05.0   2-edge      nvme0q2
>  28:          0          0       1000          0  PCI-MSIX-0000:00:05.0   3-edge      nvme0q3
>  29:          0          0          0       1043  PCI-MSIX-0000:00:05.0   4-edge      nvme0q4
>
> The isolcpus did nothing because each vector's mask had just one
> CPU; there was nowhere else the managed irq could be sent. The
> documentation seems to indicate that was by design as a "best effort".
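
To double-check the observation above, each vector's spread mask can be
compared with its effective affinity. This is only a sketch: the IRQ
numbers (26-29) are taken from the /proc/interrupts excerpt above and will
differ on other systems, and effective_affinity_list is only available on
kernels built with CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK:

  # IRQ numbers assumed from the example output above
  for irq in 26 27 28 29; do
      printf 'IRQ %s: mask=%s effective=%s\n' "$irq" \
          "$(cat /proc/irq/$irq/smp_affinity_list)" \
          "$(cat /proc/irq/$irq/effective_affinity_list)"
  done

With one queue per CPU, every mask contains a single CPU, so the
managed_irq isolation has no housekeeping CPU left to steer the vector to.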

That's expected, as you pin the I/O operations to the isolated CPUs, which
in turn makes them use their per-CPU queues.

The isolated CPUs are only excluded from device management interrupts,
not from the affinity spread of the queues.
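
As a sketch of that distinction, assume a setup where a queue's spread mask
spans both housekeeping and isolated CPUs (e.g. fewer queues than CPUs),
with <N> standing for that queue's IRQ number:

  cat /sys/devices/system/cpu/isolated       # CPUs isolated via isolcpus=
  cat /proc/irq/<N>/smp_affinity_list        # spread mask: may include isolated CPUs
  cat /proc/irq/<N>/effective_affinity_list  # effective CPU: a housekeeping CPU if one is in the mask

Only when the mask consists solely of isolated CPUs, as in the 4-thread
example above, does the effective affinity end up on an isolated CPU.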

Thanks,

        tglx
