Message-ID: <71598d89-b29c-4b21-83ee-49fe9b890043@flourine.local>
Date: Fri, 5 Sep 2025 09:41:42 +0200
From: Daniel Wagner <dwagner@...e.de>
To: Hannes Reinecke <hare@...e.de>
Cc: Daniel Wagner <wagi@...nel.org>, Jens Axboe <axboe@...nel.dk>,
Keith Busch <kbusch@...nel.org>, Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
"Michael S. Tsirkin" <mst@...hat.com>, Aaron Tomlin <atomlin@...mlin.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>, Thomas Gleixner <tglx@...utronix.de>,
Costa Shulyupin <costa.shul@...hat.com>, Juri Lelli <juri.lelli@...hat.com>,
Valentin Schneider <vschneid@...hat.com>, Waiman Long <llong@...hat.com>, Ming Lei <ming.lei@...hat.com>,
Frederic Weisbecker <frederic@...nel.org>, Mel Gorman <mgorman@...e.de>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
linux-nvme@...ts.infradead.org, megaraidlinux.pdl@...adcom.com, linux-scsi@...r.kernel.org,
storagedev@...rochip.com, virtualization@...ts.linux.dev,
GR-QLogic-Storage-Upstream@...vell.com
Subject: Re: [PATCH v7 05/10] scsi: Use block layer helpers to constrain
queue affinity
On Thu, Jul 03, 2025 at 08:43:01AM +0200, Hannes Reinecke wrote:
> > drivers/scsi/fnic/fnic_isr.c | 7 +++++--
> > drivers/scsi/hisi_sas/hisi_sas_v3_hw.c | 1 +
> > drivers/scsi/megaraid/megaraid_sas_base.c | 5 ++++-
> > drivers/scsi/mpi3mr/mpi3mr_fw.c | 6 +++++-
> > drivers/scsi/mpt3sas/mpt3sas_base.c | 5 ++++-
> > drivers/scsi/pm8001/pm8001_init.c | 1 +
> > drivers/scsi/qla2xxx/qla_isr.c | 1 +
> > drivers/scsi/smartpqi/smartpqi_init.c | 7 +++++--
> > 8 files changed, 26 insertions(+), 7 deletions(-)
> >
>
> All of these drivers are not aware of CPU hotplug, and as such
> will not be notified when the number of CPUs changes.
> But you use 'blk_mq_online_queue_affinity()' for all of these
> drivers.
> Wouldn't 'blk_mq_possible_queue_affinity()' be a better choice here
> to insulate against CPU hotplug effects?
>
> Also, some drivers which are using irq affinity (e.g. aacraid, lpfc) are
> missing from these conversions. Why?
I've updated both drivers to use pci_alloc_irq_vectors_affinity with the
PCI_IRQ_AFFINITY flag. But then I saw this:
dafeaf2c03e7 ("scsi: aacraid: Stop using PCI_IRQ_AFFINITY")
So we need to be careful here.
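
For reference, the conversion I tried looks roughly like this (a minimal
sketch; pdev and max_queues are placeholders, and whether any pre_vectors
are needed is driver specific):

	struct irq_affinity desc = {
		.pre_vectors = 1,	/* hypothetical: one non-queue vector */
	};
	int nvecs;

	/* Let the PCI core spread the queue vectors as managed IRQs. */
	nvecs = pci_alloc_irq_vectors_affinity(pdev, 2, max_queues,
					       PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					       &desc);
	if (nvecs < 0)
		return nvecs;

The catch, as the aacraid revert shows, is that a driver which still
fiddles with the affinity itself doesn't mix well with managed IRQs.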
In the case of lpfc (and qla2xxx), the nvme-fabrics core needs to be
updated too (it gets out of sync with the number of queues allocated). I
already have patches for this, but I'd say we finish this series first
before moving on to the next set of patches.
Thus I decided to drop the updates for all drivers which are currently
not using pci_alloc_irq_vectors_affinity with PCI_IRQ_AFFINITY. The
drivers that do use it support managed IRQs and should therefore already
be ready for this feature. The rest of the drivers I'd rather update one
by one so we don't introduce regressions (e.g. aacraid).
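
To illustrate the online vs. possible distinction you raised above:
assuming the helpers return a const struct cpumask * as in this series,
a driver spreading its vectors by hand would look roughly like this
(sketch only; set_vector_affinity, pdev and nvecs are made up for
illustration):

	static void set_vector_affinity(struct pci_dev *pdev, int nvecs)
	{
		/*
		 * blk_mq_possible_queue_affinity() covers all possible
		 * CPUs, so the spreading stays valid across CPU hotplug
		 * even for drivers without a hotplug notifier. The
		 * online variant only covers the CPUs online at setup
		 * time.
		 */
		const struct cpumask *mask = blk_mq_possible_queue_affinity();
		int vec, cpu = cpumask_first(mask);

		for (vec = 0; vec < nvecs; vec++) {
			irq_set_affinity_hint(pci_irq_vector(pdev, vec),
					      cpumask_of(cpu));
			cpu = cpumask_next(cpu, mask);
			if (cpu >= nr_cpu_ids)
				cpu = cpumask_first(mask);
		}
	}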