Date:   Tue, 19 Feb 2019 12:39:02 -0500
From:   Chuck Lever <chuck.lever@...cle.com>
To:     Håkon Bugge <haakon.bugge@...cle.com>
Cc:     Yishai Hadas <yishaih@...lanox.com>,
        Doug Ledford <dledford@...hat.com>,
        Jason Gunthorpe <jgg@...pe.ca>, jackm@....mellanox.co.il,
        majd@...lanox.com, OFED mailing list <linux-rdma@...r.kernel.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] RDMA/mlx4: Spread completion vectors for proxy CQs



> On Feb 19, 2019, at 12:32 PM, Håkon Bugge <haakon.bugge@...cle.com> wrote:
> 
> 
> 
>> On 19 Feb 2019, at 15:58, Chuck Lever <chuck.lever@...cle.com> wrote:
>> 
>> Hey Håkon-
>> 
>>> On Feb 18, 2019, at 1:33 PM, Håkon Bugge <haakon.bugge@...cle.com> wrote:
>>> 
>>> MAD packet sending/receiving is not properly virtualized in
>>> CX-3. Hence, these are proxied through the PF driver. The proxying
>>> uses UD QPs. The associated CQs are created with completion vector
>>> zero.
>>> 
>>> This leads to a great imbalance in CPU processing, in particular
>>> during heavy RDMA CM traffic.
>>> 
>>> Solved by selecting the completion vector on a round-robin basis.
>> 
>> I've got a similar patch for NFS and NFSD. I'm wondering if this
>> should be turned into a core helper, simple as it is. Perhaps
>> it would be beneficial if all participating ULPs used the same
>> global counter?
> 
> 
> A global counter works for this commit, because the QPs and associated CQs are (pretty) persistent. That is, VMs don't come and go that often.
> 
> In the more general ULP case, the usage model is probably a lot more intermittent. Hence, a least-load approach is probably better. That can be implemented in ib core. In the past I have seen an enum IB_CQ_USE_LEAST_LOAD_VECTOR suggested for signalling this behaviour, defined to e.g. -1, that is, outside of 0..(num_comp_vectors-1).

Indeed, passing such a value to either ib_create_cq or ib_alloc_cq
could allow the compvec to be selected automatically. Using a
round-robin would be the first step towards something smarter, and
the ULPs need be none the wiser when more smart-i-tude eventually
comes along.
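
For the sake of discussion, a minimal sketch of what such a core-side
helper might look like (the helper name, its placement, and the
IB_CQ_USE_LEAST_LOAD_VECTOR sentinel are assumptions for illustration,
not existing API):

	/*
	 * Hypothetical: callers pass a sentinel comp_vector and the core
	 * resolves it; round-robin for now, a least-load policy could
	 * replace the body later without touching the ULPs.
	 */
	#define IB_CQ_USE_LEAST_LOAD_VECTOR	(-1)

	static atomic_t ib_cq_rr_vector = ATOMIC_INIT(-1);

	static int ib_resolve_comp_vector(struct ib_device *dev, int comp_vector)
	{
		if (comp_vector != IB_CQ_USE_LEAST_LOAD_VECTOR)
			return comp_vector;

		/* the cast keeps the modulo non-negative if the counter wraps */
		return (unsigned int)atomic_inc_return(&ib_cq_rr_vector) %
		       dev->num_comp_vectors;
	}

Both ib_create_cq() and ib_alloc_cq() could run the requested vector
through something like that before handing it to the driver.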


> But this mechanism doesn't know which CQs deliver the most interrupts. We lack an ib_modify_cq() that could change the CQ to EQ association, so that we could _really_ spread the interrupts, not just the initial CQ to EQ assignment.
> 
> Anyway, Jason mentioned in a private email that maybe we could use the new completion API or something? I am not familiar with that one (yet).
> 
> Well, I can volunteer to do the least-load approach in ib core and change all the (plain stupid) hard-coded zero comp_vectors in ULPs and core, if that seems like an interim approach.

Please update net/sunrpc/xprtrdma/{svc_rdma_,}transport.c as well.
It should be straightforward, and I'm happy to review and test as
needed.
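
To make the ULP side concrete, the change in each caller should be on
the order of the following (an illustration only, assuming the sentinel
above rather than quoting the actual xprtrdma code):

	/* today: every CQ is pinned to completion vector 0 */
	cq = ib_alloc_cq(device, ctx, nr_cqe, 0, IB_POLL_WORKQUEUE);

	/* with the sentinel: the core spreads the vectors for us */
	cq = ib_alloc_cq(device, ctx, nr_cqe, IB_CQ_USE_LEAST_LOAD_VECTOR,
			 IB_POLL_WORKQUEUE);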


> Thxs, Håkon
> 
> 
> 
> 
>> 
>> 
>>> The imbalance can be demonstrated in a bare-metal environment, where
>>> two nodes have instantiated 8 VFs each. These are dual-ported HCAs,
>>> so we have 16 vPorts per physical server.
>>> 
>>> 64 processes are associated with each vPort and create and destroy
>>> one QP for each of the 64 remote processes. That is, 1024 QPs per
>>> vPort, 16K QPs in all. The QPs are created/destroyed using the CM.
>>> 
>>> Before this commit, we have (excluding all completion IRQs with zero
>>> interrupts):
>>> 
>>> 396: mlx4-1@...0:94:00.0 199126
>>> 397: mlx4-2@...0:94:00.0 1
>>> 
>>> With this commit:
>>> 
>>> 396: mlx4-1@...0:94:00.0 12568
>>> 397: mlx4-2@...0:94:00.0 50772
>>> 398: mlx4-3@...0:94:00.0 10063
>>> 399: mlx4-4@...0:94:00.0 50753
>>> 400: mlx4-5@...0:94:00.0 6127
>>> 401: mlx4-6@...0:94:00.0 6114
>>> []
>>> 414: mlx4-19@...0:94:00.0 6122
>>> 415: mlx4-20@...0:94:00.0 6117
>>> 
>>> The added pr_info shows:
>>> 
>>> create_pv_resources: slave:0 port:1, vector:0, num_comp_vectors:62
>>> create_pv_resources: slave:0 port:1, vector:1, num_comp_vectors:62
>>> create_pv_resources: slave:0 port:2, vector:2, num_comp_vectors:62
>>> create_pv_resources: slave:0 port:2, vector:3, num_comp_vectors:62
>>> create_pv_resources: slave:1 port:1, vector:4, num_comp_vectors:62
>>> create_pv_resources: slave:1 port:2, vector:5, num_comp_vectors:62
>>> []
>>> create_pv_resources: slave:8 port:2, vector:18, num_comp_vectors:62
>>> create_pv_resources: slave:8 port:1, vector:19, num_comp_vectors:62
>>> 
>>> Signed-off-by: Håkon Bugge <haakon.bugge@...cle.com>
>>> ---
>>> drivers/infiniband/hw/mlx4/mad.c | 4 ++++
>>> 1 file changed, 4 insertions(+)
>>> 
>>> diff --git a/drivers/infiniband/hw/mlx4/mad.c b/drivers/infiniband/hw/mlx4/mad.c
>>> index 936ee1314bcd..300839e7f519 100644
>>> --- a/drivers/infiniband/hw/mlx4/mad.c
>>> +++ b/drivers/infiniband/hw/mlx4/mad.c
>>> @@ -1973,6 +1973,7 @@ static int create_pv_resources(struct ib_device *ibdev, int slave, int port,
>>> {
>>> 	int ret, cq_size;
>>> 	struct ib_cq_init_attr cq_attr = {};
>>> +	static atomic_t comp_vect = ATOMIC_INIT(-1);
>>> 
>>> 	if (ctx->state != DEMUX_PV_STATE_DOWN)
>>> 		return -EEXIST;
>>> @@ -2002,6 +2003,9 @@ static int create_pv_resources(struct ib_device *ibdev, int slave, int port,
>>> 		cq_size *= 2;
>>> 
>>> 	cq_attr.cqe = cq_size;
>>> +	cq_attr.comp_vector = atomic_inc_return(&comp_vect) % ibdev->num_comp_vectors;
>>> +	pr_info("slave:%d port:%d, vector:%d, num_comp_vectors:%d\n",
>>> +		slave, port, cq_attr.comp_vector, ibdev->num_comp_vectors);
>>> 	ctx->cq = ib_create_cq(ctx->ib_dev, mlx4_ib_tunnel_comp_handler,
>>> 			       NULL, ctx, &cq_attr);
>>> 	if (IS_ERR(ctx->cq)) {
>>> -- 
>>> 2.20.1
>>> 
>> 
>> --
>> Chuck Lever

--
Chuck Lever


