Message-ID: <20180723164910.GS31540@mellanox.com>
Date: Mon, 23 Jul 2018 10:49:10 -0600
From: Jason Gunthorpe <jgg@...lanox.com>
To: Max Gurtovoy <maxg@...lanox.com>
Cc: Steve Wise <swise@...ngridcomputing.com>,
'Sagi Grimberg' <sagi@...mberg.me>,
'Leon Romanovsky' <leon@...nel.org>,
'Doug Ledford' <dledford@...hat.com>,
'RDMA mailing list' <linux-rdma@...r.kernel.org>,
'Saeed Mahameed' <saeedm@...lanox.com>,
'linux-netdev' <netdev@...r.kernel.org>
Subject: Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask
On Fri, Jul 20, 2018 at 04:25:32AM +0300, Max Gurtovoy wrote:
>
> >>>[ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18
> >>
> >>queue 9 is not mapped (overlap).
> >>please try the below:
> >>
> >
> >This seems to work. Here are three mapping cases: each vector on its
> >own cpu, each vector on 1 cpu within the local numa node, and each
> >vector having all cpus in its numa node. The 2nd mapping looks kinda
> >funny, but I think it achieved what you wanted? And all the cases
> >resulted in successful connections.
> >
>
> Thanks for testing this.
> I slightly improved how the remaining CPUs are assigned and actually used
> Sagi's initial proposal.
>
> Sagi,
> please review the attached patch and let me know if I should add your
> signature to it.
> I'll run some perf tests on it early next week (meanwhile I've successfully
> run login/logout with different num_queues and checked the irq settings).
>
> Steve,
> It would be great if you could apply the attached patch on your system and
> send your findings.
>
> Regards,
> Max
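(Aside for readers following the thread: below is a minimal, self-contained
user-space sketch of the "spread the remaining CPUs" idea being discussed.
It is not Max's attached patch, which is not reproduced here; the constants,
the overlapping affinity hints, and the three-pass structure are illustrative
assumptions only.)

/*
 * Sketch: map CPUs to hardware queues so that every queue gets at least
 * one CPU even when the per-vector affinity hints overlap (the situation
 * that produced "failed to connect queue: 9" above).
 */
#include <stdio.h>

#define NR_CPUS    16	/* assumed CPU count */
#define NR_QUEUES  10	/* assumed number of hw queues, as in the quoted log */

int main(void)
{
	int mq_map[NR_CPUS];		/* cpu -> hw queue, like blk-mq's mq_map */
	int affinity[NR_QUEUES];	/* assumed one-CPU-per-vector affinity hint */
	int mapped[NR_CPUS] = { 0 };
	int queue_has_cpu[NR_QUEUES] = { 0 };
	int cpu, q;

	/* Assume overlapping hints (e.g. all vectors pinned to one node). */
	for (q = 0; q < NR_QUEUES; q++)
		affinity[q] = q % (NR_CPUS / 2);

	/* Pass 1: honour the affinity hint where possible. */
	for (q = 0; q < NR_QUEUES; q++) {
		cpu = affinity[q];
		if (!mapped[cpu]) {
			mq_map[cpu] = q;
			mapped[cpu] = 1;
			queue_has_cpu[q] = 1;
		}
	}

	/* Pass 2: give each still-empty queue one of the leftover CPUs. */
	cpu = 0;
	for (q = 0; q < NR_QUEUES; q++) {
		if (queue_has_cpu[q])
			continue;
		while (cpu < NR_CPUS && mapped[cpu])
			cpu++;
		if (cpu == NR_CPUS)
			break;		/* more queues than CPUs */
		mq_map[cpu] = q;
		mapped[cpu] = 1;
		queue_has_cpu[q] = 1;
	}

	/* Pass 3: spread any remaining CPUs round-robin over the queues. */
	q = 0;
	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (mapped[cpu])
			continue;
		mq_map[cpu] = q;
		q = (q + 1) % NR_QUEUES;
	}

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu %2d -> queue %d\n", cpu, mq_map[cpu]);
	return 0;
}

Under these assumptions every queue ends up with at least one CPU, which is
the overlap problem the quoted "failed to connect queue: 9" message points at.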
So the conclusion of this thread is that Leon's mlx5 patch needs to wait
until this block-mq patch is accepted?
Thanks,
Jason