Message-Id: <1D8E6B14-3336-42B3-B572-596DD2183D89@oracle.com>
Date:   Thu, 13 Jun 2019 19:39:24 +0200
From:   Håkon Bugge <haakon.bugge@...cle.com>
To:     Jason Gunthorpe <jgg@...pe.ca>
Cc:     Doug Ledford <dledford@...hat.com>,
        Leon Romanovsky <leon@...nel.org>,
        Parav Pandit <parav@...lanox.com>,
        Steve Wise <swise@...ngridcomputing.com>,
        OFED mailing list <linux-rdma@...r.kernel.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] RDMA/cma: Make CM response timeout and # CM retries
 configurable



> On 13 Jun 2019, at 19:23, Jason Gunthorpe <jgg@...pe.ca> wrote:
> 
> On Thu, Jun 13, 2019 at 06:58:30PM +0200, Håkon Bugge wrote:
> 
>> If you refer to the backlog parameter in rdma_listen(), I cannot see
>> it being used at all for IB.
>> 
>> For CX-3, which is paravirtualized wrt. MAD packets, it is the proxy
>> UD receive queue length for the PF driver that can be construed as a
>> backlog. 
> 
> No, in IB you can drop UD packets if your RQ is full - so the proxy RQ
> is really part of the overall RQ on QP1.
> 
> The backlog starts once packets are taken off the RQ and begin the
> connection accept processing.

I think we are saying the same thing. If incoming REQ processing is severely delayed, the backlog is the number of entries in the QP1 receive queue in the PF. I can call rdma_listen() with a backlog of a zillion, but it will not help.

>> The customer configures the number of VMs, and different workloads
>> may lead to widely different numbers of CM connections. The proxying
>> of MAD packets through the PF driver has a finite packet rate. With
>> 64 VMs, 10,000 QPs on each, all going down due to a switch failure
>> or similar, you have 640,000 DREQs to be sent, and with the finite
>> rate of MAD packets through the PF, this takes more than the current
>> CM timeout. And then you re-transmit and increase the burden of the
>> PF proxying.
> 
> I feel like the performance of all this proxying is too low to support
> such a large work load :(

That is what I am aiming at, for example by spreading the completion_vector(s) for said QPs ;-)
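
The back-of-envelope numbers quoted above can be sketched as follows. The 64 VMs and 10,000 QPs per VM are from the mail; the proxy MAD rate of 10,000 packets/s is a hypothetical placeholder, and the timeout exponent of 20 is the kernel's CMA_CM_RESPONSE_TIMEOUT default, to my recollection:

```python
# Back-of-envelope: can 640,000 DREQs drain before the CM times out?

num_vms = 64
qps_per_vm = 10_000
dreqs = num_vms * qps_per_vm            # 640,000 DREQs to send

proxy_mad_rate = 10_000                 # packets/s through the PF (assumed)
drain_time = dreqs / proxy_mad_rate     # seconds to push all DREQs out

# IB CM response timeout is 4.096 us * 2**exponent; exponent 20 gives
# roughly 4.3 s per attempt.
cm_timeout = 4.096e-6 * 2**20

print(f"drain time {drain_time:.0f}s vs CM timeout {cm_timeout:.1f}s")
```

At the assumed rate the drain takes 64 s, many times one timeout period, so peers start retransmitting DREQs and add to the very PF proxy load that caused the delay.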

-h

> 
> Can it be improved?
> 
> Jason
