Message-ID: <20160926150905.GA16811@lst.de>
Date: Mon, 26 Sep 2016 17:09:05 +0200
From: Christoph Hellwig <hch@....de>
To: Sagi Grimberg <sagi@...mberg.me>
Cc: Christoph Hellwig <hch@....de>, axboe@...com, tglx@...utronix.de,
agordeev@...hat.com, keith.busch@...el.com,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 11/13] nvme: switch to use pci_alloc_irq_vectors

On Fri, Sep 23, 2016 at 03:21:14PM -0700, Sagi Grimberg wrote:
> Question: is using pci_alloc_irq_vectors() obligated for
> supplying blk-mq with the device affinity mask?

No, but it's very useful. We'll need equivalents for other busses
that provide multiple vectors and vector spreading.
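
To illustrate what the PCI side looks like: untested sketch only, and the
foo_* names are made up for illustration, not the actual nvme structures:

    #include <linux/pci.h>
    #include <linux/blk-mq.h>
    #include <linux/blk-mq-pci.h>

    struct foo_dev {
            struct pci_dev *pdev;
    };

    /* allocate vectors and let the PCI core spread them over the CPUs */
    static int foo_setup_irqs(struct pci_dev *pdev, unsigned int nr_queues)
    {
            return pci_alloc_irq_vectors(pdev, 1, nr_queues,
                            PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY);
    }

    /* blk-mq ->map_queues callback that reuses the PCI affinity masks */
    static int foo_map_queues(struct blk_mq_tag_set *set)
    {
            struct foo_dev *foo = set->driver_data;

            return blk_mq_pci_map_queues(set, foo->pdev);
    }

With PCI_IRQ_AFFINITY the core assigns each vector an affinity mask, and
blk_mq_pci_map_queues then builds the blk-mq queue mapping from exactly
those masks, so the driver doesn't have to duplicate the spreading logic.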
> If I do this completely-untested [1] what will happen?

Everything will be crashing and burning because you call to_pci_dev on
something that's not a PCI dev?
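
to_pci_dev() is nothing but a container_of(), so there is no type check
at all:

    /* from include/linux/pci.h */
    #define to_pci_dev(n) container_of(n, struct pci_dev, dev)

If the struct device you pass in isn't actually embedded in a struct
pci_dev, the resulting pointer is garbage and the first dereference in
blk_mq_pci_map_queues() falls over.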
For the next merge window I plan to wire up the affinity information
for the RDMA code, and I will add a counterpart to blk_mq_pci_map_queues
that spreads the queues over the completion vectors.
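
Very roughly, that counterpart could look like the sketch below. This is
purely hypothetical: ib_get_vector_affinity() (or whatever the RDMA core
ends up exporting for per-vector affinity) doesn't exist yet, so the names
and the fallback behaviour are made up for illustration:

    #include <linux/blk-mq.h>
    #include <rdma/ib_verbs.h>

    /* hypothetical: map each hw queue to the CPUs of one completion vector */
    static int blk_mq_rdma_map_queues(struct blk_mq_tag_set *set,
                    struct ib_device *ibdev, int first_vec)
    {
            unsigned int queue, cpu;

            for (queue = 0; queue < set->nr_hw_queues; queue++) {
                    /* made-up helper returning the vector's affinity mask */
                    const struct cpumask *mask =
                            ib_get_vector_affinity(ibdev, first_vec + queue);

                    /* no affinity information: fall back to default spreading */
                    if (!mask)
                            return blk_mq_map_queues(set);

                    for_each_cpu(cpu, mask)
                            set->mq_map[cpu] = queue;
            }
            return 0;
    }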