Message-ID: <10908f20-7e18-e967-76dd-1a38e216b378@linux.ibm.com>
Date: Thu, 12 Nov 2020 16:45:35 +0100
From: Niklas Schnelle <schnelle@...ux.ibm.com>
To: Keith Busch <kbusch@...nel.org>
Cc: linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
Jens Axboe <axboe@...com>, Christoph Hellwig <hch@....de>,
Sagi Grimberg <sagi@...mberg.me>
Subject: Re: [PATCH 0/2] nvme-pic: improve max I/O queue handling
On 11/12/20 3:53 PM, Keith Busch wrote:
> On Thu, Nov 12, 2020 at 09:23:00AM +0100, Niklas Schnelle wrote:
>> While searching for a bug around zPCI + NVMe IRQ handling on a distro
>> kernel, I got confused by the handling of the maximum number of I/O
>> queues in the NVMe driver. I think I grokked it in the end and would
>> like to propose the following improvements; that said, I'm quite new
>> to this code.
>> I tested both patches on s390x (with a debug config) and on x86_64,
>> so with both data center and consumer NVMe drives.
>> For the second patch, since I don't own a device with the quirk, I
>> tested by always returning 1 from nvme_max_io_queues() and confirmed
>> that on my Evo 970 Pro this resulted in about half the performance in
>> a fio run but did not otherwise break things. I also couldn't find
>> anything in the code suggesting that allocating only the I/O queues
>> we actually use would be problematic, though of course I might have
>> missed something.
>
> I don't think you missed anything, and the series looks like a
> reasonable cleanup. I suspect the code was left over from a time when we
> didn't allocate the possible queues up-front.
>
> Reviewed-by: Keith Busch <kbusch@...nel.org>
>
You've always got to get something wrong somewhere; I hope in this case
it's just the typo in the cover letter's subject ("nvme-pic" instead of
"nvme-pci") :D
Thanks for the review, I appreciate it. I might be getting ahead of
myself, but I'm curious: whose tree would this series go through if it
is accepted?
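
For reference, the "always return 1" test hack was along these lines (a
rough sketch against drivers/nvme/host/pci.c, not the exact diff I ran;
in the real function only controllers with the shared-tags quirk are
capped at a single I/O queue):

    static unsigned int nvme_max_io_queues(struct nvme_dev *dev)
    {
            /*
             * Test hack: pretend every controller has the quirk and
             * cap it at a single I/O queue, instead of the usual
             * num_possible_cpus() + nr_write_queues + nr_poll_queues.
             */
            return 1;
    }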