Message-ID: <14fd128d-7155-ab13-492f-952f072808d5@opengridcomputing.com>
Date: Mon, 10 Apr 2017 13:05:50 -0500
From: Steve Wise <swise@...ngridcomputing.com>
To: Sagi Grimberg <sagi@...mberg.me>, linux-rdma@...r.kernel.org,
linux-nvme@...ts.infradead.org, linux-block@...r.kernel.org
Cc: netdev@...r.kernel.org, Saeed Mahameed <saeedm@...lanox.com>,
Or Gerlitz <ogerlitz@...lanox.com>,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH rfc 0/6] Automatic affinity settings for nvme over rdma

On 4/2/2017 8:41 AM, Sagi Grimberg wrote:
> This patch set aims to automatically find the optimal
> queue <-> irq assignments for multi-queue storage ULPs (demonstrated
> on nvme-rdma), based on the underlying rdma device's irq affinity
> settings.
>
> The first two patches modify the mlx5 core driver to use the generic
> API for allocating an array of irq vectors with automatic affinity
> settings, instead of open-coding essentially the same thing (only
> slightly worse).
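
For reference, the generic API being referred to is presumably
pci_alloc_irq_vectors_affinity(), which lets the genirq core spread the
vectors across the online CPUs. A minimal sketch of how a driver might
call it (foo_alloc_vectors, pdev and max_queues are made up for
illustration):

#include <linux/interrupt.h>    /* struct irq_affinity */
#include <linux/pci.h>

static int foo_alloc_vectors(struct pci_dev *pdev, unsigned int max_queues)
{
        /* Keep vector 0 out of the spreading for control/async events;
         * the core distributes the remaining vectors across online CPUs.
         */
        struct irq_affinity affd = { .pre_vectors = 1 };

        /* Returns the number of vectors allocated, or a negative errno. */
        return pci_alloc_irq_vectors_affinity(pdev, 2, max_queues + 1,
                                               PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
                                               &affd);
}

The .pre_vectors field is the usual way to reserve a non-queue interrupt
while still getting the automatic spreading for the completion vectors.
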
>
> Then, in order to obtain an affinity map for a given completion
> vector, we expose a new RDMA core API, and implement it in mlx5.
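
If the new verb ends up looking like ib_get_vector_affinity() (the name
and return type here are my guess at what the series exposes), a ULP
could query the CPU mask behind each completion vector roughly like
this:

#include <rdma/ib_verbs.h>

static void foo_dump_vector_affinity(struct ib_device *ibdev)
{
        const struct cpumask *mask;
        int vec;

        for (vec = 0; vec < ibdev->num_comp_vectors; vec++) {
                /* A NULL mask would mean the provider doesn't report
                 * affinity for this vector.
                 */
                mask = ib_get_vector_affinity(ibdev, vec);
                if (!mask)
                        break;
                pr_debug("comp vector %d -> CPUs %*pbl\n",
                         vec, cpumask_pr_args(mask));
        }
}
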
>
> The third part adds an rdma-based queue mapping helper to blk-mq
> that maps the tagset's hctxs according to the device's affinity
> mappings.
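
Assuming the helper keeps roughly the shape in the series, i.e.
something like blk_mq_rdma_map_queues(set, ibdev, first_vec), a ULP's
.map_queues callback would shrink to a one-liner (foo_ctrl and the
header name are guesses for illustration):

#include <linux/blk-mq.h>
#include <linux/blk-mq-rdma.h>  /* header name guessed from the series */
#include <rdma/ib_verbs.h>

struct foo_ctrl {                       /* stand-in for the ULP's ctrl */
        struct ib_device *ibdev;
};

static int foo_map_queues(struct blk_mq_tag_set *set)
{
        struct foo_ctrl *ctrl = set->driver_data;

        /* Map each hctx to the CPUs affine to the matching completion
         * vector, starting from vector 0.
         */
        return blk_mq_rdma_map_queues(set, ctrl->ibdev, 0);
}

Presumably hctxs whose vector reports no affinity would just fall back
to the default blk_mq_map_queues() spreading.
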
>
> I'd happily convert some more drivers, but I'll need volunteers
> to test as I don't have access to any other devices.

I'll test cxgb4 if you convert it. :)