Message-ID: <e32726b5-fbe5-178c-719e-8a71517977b0@opengridcomputing.com>
Date: Tue, 24 Jul 2018 10:24:03 -0500
From: Steve Wise <swise@...ngridcomputing.com>
To: Max Gurtovoy <maxg@...lanox.com>,
'Sagi Grimberg' <sagi@...mberg.me>,
'Leon Romanovsky' <leon@...nel.org>
Cc: 'Doug Ledford' <dledford@...hat.com>,
'Jason Gunthorpe' <jgg@...lanox.com>,
'RDMA mailing list' <linux-rdma@...r.kernel.org>,
'Saeed Mahameed' <saeedm@...lanox.com>,
'linux-netdev' <netdev@...r.kernel.org>
Subject: Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask
On 7/19/2018 8:25 PM, Max Gurtovoy wrote:
>
>>>> [ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18
>>>
>>> queue 9 is not mapped (overlap).
>>> please try the below:
>>>
>>
>> This seems to work. Here are three mapping cases: each vector on its
>> own CPU, each vector on one CPU within the local NUMA node, and each
>> vector having all the CPUs in its NUMA node. The 2nd mapping looks
>> kinda funny, but I think it achieved what you wanted? And all the
>> cases resulted in successful connections.
>>
>
> Thanks for testing this.
> I slightly improved the assignment of the leftover CPUs and actually
> used Sagi's initial proposal.
>
> Sagi,
> please review the attached patch and let me know if I should add your
> signature to it.
> I'll run some perf tests on it early next week (meanwhile I've
> successfully run login/logout with different num_queues and IRQ
> settings).
>
> Steve,
> It would be great if you could apply the attached patch on your system
> and send your findings.
Sorry, I got sidetracked. I'll try to test this today and report back.
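
In the meantime, for anyone skimming the thread, my rough mental model
of the spreading logic is the little userspace sketch below (toy arrays
and made-up names, not Max's actual patch): honor each vector's
affinity mask first, then sweep the leftover CPUs onto the queues so
nothing is left unmapped when the cached masks overlap, which is the
"queue 9 is not mapped (overlap)" failure above.

#include <stdio.h>

#define NR_CPUS   16
#define NR_QUEUES 8

int main(void)
{
        /* map[cpu] = queue handling that cpu; -1 = not yet mapped */
        int map[NR_CPUS];
        /*
         * Toy per-vector affinity with deliberate overlap: two vectors
         * pinned to each of cpus 0..3, so half the queues get nothing
         * in pass 1 and cpus 4..15 start out uncovered.
         */
        int vec_cpu[NR_QUEUES] = { 0, 0, 1, 1, 2, 2, 3, 3 };
        int cpu, q;

        for (cpu = 0; cpu < NR_CPUS; cpu++)
                map[cpu] = -1;

        /* pass 1: honor each vector's affinity mask */
        for (q = 0; q < NR_QUEUES; q++)
                if (map[vec_cpu[q]] == -1)
                        map[vec_cpu[q]] = q;

        /* pass 2: spread the leftover cpus round-robin across the
           queues, so every cpu is covered and every queue ends up
           with at least one cpu */
        q = 0;
        for (cpu = 0; cpu < NR_CPUS; cpu++)
                if (map[cpu] == -1)
                        map[cpu] = q++ % NR_QUEUES;

        for (cpu = 0; cpu < NR_CPUS; cpu++)
                printf("cpu %2d -> queue %d\n", cpu, map[cpu]);

        return 0;
}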
Steve.