Date:   Thu, 12 Nov 2020 09:23:00 +0100
From:   Niklas Schnelle <schnelle@...ux.ibm.com>
To:     linux-nvme@...ts.infradead.org
Cc:     linux-kernel@...r.kernel.org, Keith Busch <kbusch@...nel.org>,
        Jens Axboe <axboe@...com>, Christoph Hellwig <hch@....de>,
        Sagi Grimberg <sagi@...mberg.me>
Subject: [PATCH 0/2] nvme-pci: improve max I/O queue handling

Hi,

while searching for a bug around zPCI + NVMe IRQ handling on a distro
kernel, I got confused about how the maximum number of I/O queues is
handled in the NVMe driver.
I think I grokked it in the end, but I would like to propose the
following improvements; that said, I'm quite new to this code.
I tested both patches on s390x (with a debug config) and on x86_64, so
with both data center and consumer NVMes.
For the second patch, since I don't own a device with the quirk, I tried
always returning 1 from nvme_max_io_queues() and confirmed that on my
Evo 970 Pro this resulted in about half the performance in a fio test,
but did not otherwise break things. I also couldn't find a reason in the
code why allocating only the I/O queues we actually use would be
problematic, but of course I might have missed something.
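To illustrate what I mean by "allocating only the I/O queues we
actually use", here is a minimal user-space sketch. All names in it are
made up for illustration and are not taken from the driver; it only
models the idea of computing an upper bound, letting the controller
grant some number of queues, and sizing the allocation to the grant
rather than to the theoretical maximum.

/*
 * Hypothetical sketch, not nvme-pci code: compute an upper bound on
 * I/O queues, ask the (simulated) controller how many it will grant,
 * and allocate structures only for the granted number.
 */
#include <stdio.h>
#include <stdlib.h>

struct io_queue {
	unsigned int qid;	/* queue identifier */
};

/* Upper bound the host would like: one queue per CPU in this model. */
static unsigned int max_io_queues(unsigned int nr_cpus)
{
	return nr_cpus;
}

/* Stand-in for the controller's "Number of Queues" feature reply. */
static unsigned int controller_grant(unsigned int requested,
				     unsigned int hw_limit)
{
	return requested < hw_limit ? requested : hw_limit;
}

int main(void)
{
	unsigned int nr_cpus = 8;	/* pretend topology */
	unsigned int hw_limit = 4;	/* pretend controller limit */
	unsigned int want = max_io_queues(nr_cpus);
	unsigned int got = controller_grant(want, hw_limit);

	/* Allocate only the queues that will actually be used. */
	struct io_queue *queues = calloc(got, sizeof(*queues));
	if (!queues)
		return 1;
	for (unsigned int i = 0; i < got; i++)
		queues[i].qid = i + 1;

	printf("wanted %u I/O queues, granted %u, allocated %u\n",
	       want, got, got);
	free(queues);
	return 0;
}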

Best regards,
Niklas Schnelle

Niklas Schnelle (2):
  nvme-pci: drop min() from nr_io_queues assignment
  nvme-pci: don't allocate unused I/O queues

 drivers/nvme/host/pci.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

-- 
2.17.1
