Message-ID: <682ff953-9130-4920-a9f2-88dfd6718be1@oracle.com>
Date: Thu, 19 Jun 2025 11:46:44 +0100
From: John Garry <john.g.garry@...cle.com>
To: John Meneghini <jmeneghi@...hat.com>,
James.Bottomley@...senPartnership.com, martin.petersen@...cle.com,
linux-scsi@...r.kernel.org, aacraid@...rosemi.com, corbet@....net
Cc: linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org, thenzl@...hat.com,
Scott.Benesh@...rochip.com, Don.Brace@...rochip.com,
Tom.White@...rochip.com, Abhinav.Kuchibhotla@...rochip.com,
sagar.biradar@...rochip.com, mpatalan@...hat.com
Subject: Re: [PATCH v3] scsi: aacraid: Fix reply queue mapping to CPUs based
on IRQ affinity
On 18/06/2025 20:24, John Meneghini wrote:
> From: Sagar Biradar <sagar.biradar@...rochip.com>
>
> This patch fixes a bug in the original path that caused I/O hangs.
> The I/O hangs occurred because an MSI-X vector had no mapped online
> CPU when a completion arrived on it.
>
> This patch enables Multi-Q support in the aacraid driver. Multi-Q
> support in the driver is needed to support CPU offlining.
I assume that you mean "safe" CPU offlining.
It seems to me that in all cases we use queue interrupt affinity
spreading and managed interrupts for MSIX, right?
See aac_define_int_mode() -> pci_alloc_irq_vectors(..., PCI_IRQ_MSIX |
PCI_IRQ_AFFINITY);
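For reference, a minimal sketch of that allocation pattern (the
probe_irqs() wrapper and max_msix parameter are made up here for
illustration; the real aac_define_int_mode() has more fallback logic):

	static int probe_irqs(struct pci_dev *pdev, unsigned int max_msix)
	{
		/*
		 * With PCI_IRQ_AFFINITY the PCI core spreads the MSI-X
		 * vectors across the online CPUs and marks them managed,
		 * so a vector is shut down once every CPU in its
		 * affinity mask goes offline.
		 */
		return pci_alloc_irq_vectors(pdev, 1, max_msix,
					     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
	}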
But then for this non-Multi-Q support, the queue seems to be chosen
based on a round-robin approach in the driver. That round-robin comes
from how fib.vector_no is assigned in aac_fib_vector_assign(). If this
is the case, then why are managed interrupts being used for this
non-Multi-Q support at all?
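To show what I mean by round-robin, roughly this idea (illustrative
names only, not the literal aac_fib_vector_assign() code, and assuming
max_msix > 1):

	static void assign_vectors_round_robin(struct fib *fibs,
					       u32 num_fibs, u32 max_msix)
	{
		u32 i, vector = 1;

		/*
		 * Each fib is tagged with the next reply vector in
		 * sequence at init time, wrapping around, so the reply
		 * queue used for an I/O has nothing to do with the CPU
		 * which submits it.
		 */
		for (i = 0; i < num_fibs; i++) {
			fibs[i].vector_no = vector++;
			if (vector == max_msix)
				vector = 1;
		}
	}

If the reply queue really is picked like that, independent of the
submitting CPU, then the affinity spreading buys nothing in this mode.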
I may be wrong about this. That driver is hard to understand with so
many knobs.