Message-ID: <f936d176-c56b-143e-3311-f6df48f633dd@mellanox.com>
Date: Fri, 20 Jul 2018 04:25:32 +0300
From: Max Gurtovoy <maxg@...lanox.com>
To: Steve Wise <swise@...ngridcomputing.com>,
'Sagi Grimberg' <sagi@...mberg.me>,
'Leon Romanovsky' <leon@...nel.org>
CC: 'Doug Ledford' <dledford@...hat.com>,
'Jason Gunthorpe' <jgg@...lanox.com>,
'RDMA mailing list' <linux-rdma@...r.kernel.org>,
"'Saeed Mahameed'" <saeedm@...lanox.com>,
'linux-netdev' <netdev@...r.kernel.org>
Subject: Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask
>>> [ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18
>>
>> queue 9 is not mapped (overlap).
>> please try the below:
>>
>
> This seems to work. Here are three mapping cases: each vector on its
> own cpu, each vector on 1 cpu within the local numa node, and each
> vector having all cpus in its numa node. The 2nd mapping looks kinda
> funny, but I think it achieved what you wanted? And all the cases
> resulted in successful connections.
>
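Context for the error above: "not mapped (overlap)" means two vectors
reported the same cached affinity mask, so the later queue's cpus
overwrote the earlier queue's entries in the cpu-to-queue map, leaving
that queue with no cpu at all; nvme then fails to connect that queue.
A minimal userspace sketch of the effect (the masks, sizes and queue
numbers below are invented for illustration; only the mapping loop
mirrors the kernel's per-vector pass):

#include <stdio.h>

#define NR_CPUS	16
#define NR_HWQ	9

/*
 * Toy per-vector affinity masks, one bit per cpu.  Queue 8 reports the
 * same (cached) mask as queue 7, which is the overlap in question.
 */
static unsigned short vec_mask[NR_HWQ] = {
	0x0003, 0x000c, 0x0030, 0x00c0,	/* queues 0-3 */
	0x0300, 0x0c00, 0x3000,		/* queues 4-6 */
	0xc000,				/* queue 7: cpus 14-15 */
	0xc000,				/* queue 8: cached duplicate! */
};

int main(void)
{
	int map[NR_CPUS] = { 0 };
	int q, cpu, mapped;

	/* Same order as the kernel: later masks overwrite earlier ones. */
	for (q = 0; q < NR_HWQ; q++)
		for (cpu = 0; cpu < NR_CPUS; cpu++)
			if (vec_mask[q] & (1u << cpu))
				map[cpu] = q;

	/* Any queue left with no cpu cannot be connected by nvme. */
	for (q = 0; q < NR_HWQ; q++) {
		mapped = 0;
		for (cpu = 0; cpu < NR_CPUS; cpu++)
			if (map[cpu] == q)
				mapped = 1;
		if (!mapped)
			printf("hw queue %d has no cpu mapped (overlap)\n", q);
	}
	return 0;
}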
Thanks for testing this.
I slightly improved the assignment of the leftover CPUs and actually used
Sagi's initial proposal.
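Roughly, the idea is a second pass after the affinity-based mapping: any
cpu that no vector claimed is spread round-robin across the hw queues,
so every cpu ends up with a queue. A sketch of such a leftover pass,
assuming map[] was pre-filled with -1 for unclaimed cpus (illustrative
only; the real logic is in the attached patch):

static void map_leftover_cpus(int *map, int nr_cpus, int nr_hwq)
{
	int cpu, q = 0;

	for (cpu = 0; cpu < nr_cpus; cpu++) {
		if (map[cpu] != -1)	/* already claimed by a vector */
			continue;
		map[cpu] = q;		/* spread leftovers round-robin */
		q = (q + 1) % nr_hwq;
	}
}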
Sagi,
please review the attached patch and let me know if I should add your
signed-off-by to it.
I'll run some perf tests on it early next week (meanwhile I've successfully
run login/logout with different num_queues and IRQ settings).
Steve,
it would be great if you could apply the attached patch on your system and
send your findings.
Regards,
Max
View attachment "0001-blk-mq-fix-RDMA-queue-cpu-mappings-assignments-for-m.patch" of type "text/plain" (4759 bytes)