Message-ID: <1cb63259-9fb6-59b0-3a34-0659973228ea@mellanox.com>
Date: Mon, 16 Jul 2018 17:54:13 +0300
From: Max Gurtovoy <maxg@...lanox.com>
To: Leon Romanovsky <leon@...nel.org>, Sagi Grimberg <sagi@...mberg.me>
CC: Doug Ledford <dledford@...hat.com>,
Jason Gunthorpe <jgg@...lanox.com>,
RDMA mailing list <linux-rdma@...r.kernel.org>,
Saeed Mahameed <saeedm@...lanox.com>,
Steve Wise <swise@...ngridcomputing.com>,
linux-netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask
Hi,
I've tested this patch and it seems problematic at the moment.
Maybe this is because of the bug that Steve mentioned on the NVMe
mailing list. Sagi suggested fixing it in the NVMe/RDMA initiator,
and I'll try his suggestion as well.

BTW, when I use blk_mq_map_queues instead, it works for every IRQ affinity.
On 7/16/2018 1:30 PM, Leon Romanovsky wrote:
> On Mon, Jul 16, 2018 at 01:23:24PM +0300, Sagi Grimberg wrote:
>> Leon, I'd like to see a tested-by tag for this (at least
>> until I get some time to test it).
>
> Of course.
>
> Thanks
>
>>
>> The patch itself looks fine to me.
-Max.