Message-Id: <5D80856A-8EF5-4320-B525-F8B28758CAFD@oracle.com>
Date:   Tue, 11 Jun 2019 16:55:44 +0200
From:   Håkon Bugge <haakon.bugge@...cle.com>
To:     Jason Gunthorpe <jgg@...pe.ca>
Cc:     Yishai Hadas <yishaih@...lanox.com>,
        Doug Ledford <dledford@...hat.com>, jackm@....mellanox.co.il,
        majd@...lanox.com, OFED mailing list <linux-rdma@...r.kernel.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] RDMA/mlx4: Spread completion vectors for proxy CQs



> On 10 Jun 2019, at 19:53, Jason Gunthorpe <jgg@...pe.ca> wrote:
> 
> On Mon, Feb 18, 2019 at 07:33:02PM +0100, Håkon Bugge wrote:
>> MAD packet sending/receiving is not properly virtualized in
>> CX-3. Hence, these are proxied through the PF driver. The proxying
>> uses UD QPs. The associated CQs are created with completion vector
>> zero.
>> 
>> This leads to great imbalance in CPU processing, in particular during
>> heavy RDMA CM traffic.
>> 
>> Solved by selecting the completion vector on a round-robin basis.
>> 
>> The imbalance can be demonstrated in a bare-metal environment, where
>> two nodes have instantiated 8 VFs each. The HCAs are dual-ported, so
>> we have 16 vPorts per physical server.
>> 
>> 64 processes are associated with each vPort, and each creates and
>> destroys one QP towards each of the 64 remote processes. That is,
>> 1024 QPs per vPort, 16K QPs in all. The QPs are created/destroyed
>> using the CM.
>> 
>> Before this commit, we have (excluding all completion IRQs with zero
>> interrupts):
>> 
>> 396: mlx4-1@...0:94:00.0 199126
>> 397: mlx4-2@...0:94:00.0 1
>> 
>> With this commit:
>> 
>> 396: mlx4-1@...0:94:00.0 12568
>> 397: mlx4-2@...0:94:00.0 50772
>> 398: mlx4-3@...0:94:00.0 10063
>> 399: mlx4-4@...0:94:00.0 50753
>> 400: mlx4-5@...0:94:00.0 6127
>> 401: mlx4-6@...0:94:00.0 6114
>> []
>> 414: mlx4-19@...0:94:00.0 6122
>> 415: mlx4-20@...0:94:00.0 6117
>> 
>> The added pr_info shows:
>> 
>> create_pv_resources: slave:0 port:1, vector:0, num_comp_vectors:62
>> create_pv_resources: slave:0 port:1, vector:1, num_comp_vectors:62
>> create_pv_resources: slave:0 port:2, vector:2, num_comp_vectors:62
>> create_pv_resources: slave:0 port:2, vector:3, num_comp_vectors:62
>> create_pv_resources: slave:1 port:1, vector:4, num_comp_vectors:62
>> create_pv_resources: slave:1 port:2, vector:5, num_comp_vectors:62
>> []
>> create_pv_resources: slave:8 port:2, vector:18, num_comp_vectors:62
>> create_pv_resources: slave:8 port:1, vector:19, num_comp_vectors:62
>> 
>> Signed-off-by: Håkon Bugge <haakon.bugge@...cle.com>
>> ---
>> drivers/infiniband/hw/mlx4/mad.c | 4 ++++
>> 1 file changed, 4 insertions(+)
> 
> This has been on patchworks for too long. Is it still relevant, or
> were you going to respin this with Chuck's 'least loaded' idea?

Let me send a commit based on the least loaded idea this week.


Thxs, Håkon

> 
> Thanks,
> Jason
