Message-ID: <20190206200229.00002e2f@dev.mellanox.co.il>
Date: Wed, 6 Feb 2019 20:02:29 +0200
From: jackm <jackm@....mellanox.co.il>
To: Håkon Bugge <haakon.bugge@...cle.com>
Cc: Jason Gunthorpe <jgg@...pe.ca>, netdev@...r.kernel.org,
OFED mailing list <linux-rdma@...r.kernel.org>,
rds-devel@....oracle.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mlx4_ib: Increase the timeout for CM cache
On Wed, 6 Feb 2019 16:40:14 +0100
Håkon Bugge <haakon.bugge@...cle.com> wrote:
> Jack,
>
> A major contributor to the long processing time in the PF driver
> proxying QP1 packets is:
>
> create_pv_resources
> -> ib_create_cq(ctx->ib_dev, mlx4_ib_tunnel_comp_handler,
> NULL, ctx, cq_size, 0);
>
> That is, comp_vector is zero.
>
> Due to commit 6ba1eb776461 ("IB/mlx4: Scatter CQs to different EQs"),
> a zero comp_vector is intended to let the mlx4_core driver select the
> least-used vector.
>
> But, in mlx4_ib_create_cq(), we have:
>
> 	pr_info("eq_table: %p\n", dev->eq_table);
> 	if (dev->eq_table) {
> 		vector = dev->eq_table[mlx4_choose_vector(dev->dev,
> 				vector, ibdev->num_comp_vectors)];
> 	}
>
> 	cq->vector = vector;
>
> and dev->eq_table is NULL, so all the CQs for the proxy QPs get
> comp_vector zero.
>
> I have to make some reservations, as this analysis is based on uek4.
> I believe the code is the same upstream, but I need to double-check.
>
>
> Thxs, Håkon
>
Hi Hakon and Jason,
I was ill today (bad cold, took antihistamines all day, which knocked
me out).
I'll get to this tomorrow.
-Jack