Message-ID: <a84777c9-9247-794e-b593-109a858d3b76@huawei.com>
Date: Mon, 11 Jan 2021 12:05:59 +0000
From: John Garry <john.garry@...wei.com>
To: Jinpu Wang <jinpu.wang@...ud.ionos.com>,
Viswas G <Viswas.G@...rochip.com>
CC: "James E.J. Bottomley" <jejb@...ux.ibm.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
<akshatzen@...gle.com>, <Ruksar.devadi@...rochip.com>,
Radha Ramachandran <radha@...gle.com>, <bjashnani@...gle.com>,
<vishakhavc@...gle.com>, <Ashokkumar.N@...rochip.com>,
Linux SCSI Mailinglist <linux-scsi@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>,
Hannes Reinecke <hare@...e.de>,
Kashyap Desai <kashyap.desai@...adcom.com>,
<ming.lei@...hat.com>
Subject: Re: [RFC/RFT PATCH] scsi: pm8001: Expose HW queues for pm80xx hw
On 11/01/2021 11:57, Jinpu Wang wrote:
> Hi John,
>
>
> On Tue, Jan 5, 2021 at 12:21 PM John Garry <john.garry@...wei.com> wrote:
>>
>> In commit 05c6c029a44d ("scsi: pm80xx: Increase number of supported
>> queues"), support for 80xx chip was improved by enabling multiple HW
>> queues.
>>
>> With that change, as in other SCSI MQ HBA drivers, the HW queues were not
>> exposed to the upper layer; instead, the driver managed the queues
>> internally.
>>
>> However, this management duplicates blk-mq code. In addition, the HW queue
>> management is sub-optimal for a system where the number of CPUs exceeds the
>> number of HW queues: queues are selected in a round-robin fashion, when it
>> would be better to make adjacent CPUs submit on the same queue. Finally,
>> the affinity of the completion queue interrupts is not set to mirror the
>> cpu<->HW queue mapping, which is also sub-optimal.
>>
>> As such, when MSIX is supported, expose the HW queues to the upper layer.
>> The PCI_IRQ_AFFINITY flag is set when allocating the MSIX vectors, so that
>> affinity for the completion queue interrupts is assigned automatically.
>>
>> Signed-off-by: John Garry <john.garry@...wei.com>
>>
>> ---
>> I am sending this as an RFC/RFT as I have no HW to test with. In addition,
>> since HW queue #0 is always used for internal commands (like in
>> send_task_abort()), if all CPUs associated with HW queue #0 are offlined,
>> the interrupt for that queue will be shut down, and no CPUs would be
>> available to service the completion of any internal commands. To solve
>> that, we need [0] merged first and to switch over to the new API. But we
>> can still test performance in the meantime.
>>
>> I assume someone else is making the change to use the request tag for IO
>> tag management.
>>
>> [0] https://lore.kernel.org/linux-scsi/47ba045e-a490-198b-1744-529f97192d3b@suse.de/
> Thanks for the patch, maybe Viswas can help to test?
>
That's what I am hoping for :)
Thanks!
>
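One more note for whoever does get to test this: the queue exposure
described in the commit message boils down to roughly the below. This is a
sketch for illustration rather than the literal diff, so treat the function
names and the pm8001 field/macro names here (pdev, number_of_intr,
PM8001_MAX_MSIX_VEC) as approximate - check against the actual patch.

static int pm8001_setup_msix(struct pm8001_hba_info *pm8001_ha)
{
	int nvec;

	/*
	 * PCI_IRQ_AFFINITY asks the core to spread the MSI-X vectors
	 * across the CPUs and keep each completion interrupt pinned to
	 * its CPU set, so the irq affinity mirrors the cpu<->HW queue
	 * mapping without the driver doing anything extra.
	 */
	nvec = pci_alloc_irq_vectors(pm8001_ha->pdev, 1,
				     PM8001_MAX_MSIX_VEC,
				     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
	if (nvec < 0)
		return nvec;

	pm8001_ha->number_of_intr = nvec;
	return 0;
}

/* scsi_host_template .map_queues callback, needs <linux/blk-mq-pci.h> */
static int pm8001_map_queues(struct Scsi_Host *shost)
{
	struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost);
	struct pm8001_hba_info *pm8001_ha = sha->lldd_ha;
	struct blk_mq_queue_map *qmap = &shost->tag_set.map[HCTX_TYPE_DEFAULT];

	/* Derive the cpu<->HW queue map from the managed irq affinity */
	return blk_mq_pci_map_queues(qmap, pm8001_ha->pdev, 0);
}

With shost->nr_hw_queues set to the number of completion queues, blk-mq
then picks the hctx per CPU, instead of the driver selecting a queue
round-robin per command.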