Message-ID: <6e586118ad154204ad2e2cf2c1391b916cb4ee54.camel@redhat.com>
Date: Thu, 13 Jun 2019 16:25:15 -0400
From: Doug Ledford <dledford@...hat.com>
To: Håkon Bugge <haakon.bugge@...cle.com>
Cc: Jason Gunthorpe <jgg@...pe.ca>, Leon Romanovsky <leon@...nel.org>,
Parav Pandit <parav@...lanox.com>,
Steve Wise <swise@...ngridcomputing.com>,
OFED mailing list <linux-rdma@...r.kernel.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] RDMA/cma: Make CM response timeout and # CM retries
configurable
On Thu, 2019-06-13 at 18:58 +0200, Håkon Bugge wrote:
> > On 13 Jun 2019, at 16:25, Doug Ledford <dledford@...hat.com> wrote:
> >
> > On Tue, 2019-02-26 at 08:57 +0100, Håkon Bugge wrote:
> > > During certain workloads, the default CM response timeout is too
> > > short, leading to excessive retries. Hence, make it configurable
> > > through sysctl. While at it, also make the number of CM retries
> > > configurable.
> > >
> > > The defaults are not changed.
> > >
> > > Signed-off-by: Håkon Bugge <haakon.bugge@...cle.com>
> > > ---
> > > v1 -> v2:
> > > * Added unregister_net_sysctl_table() in cma_cleanup()
> > > ---
> > > drivers/infiniband/core/cma.c | 52 ++++++++++++++++++++++++++++++-----
> > > 1 file changed, 45 insertions(+), 7 deletions(-)
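
Since the patch body isn't quoted above, for readers of the archive: the
sysctl wiring I would expect looks roughly like the sketch below. The
names and the sysctl path are my guesses, not necessarily what the patch
uses; the defaults shown match the current CMA_CM_RESPONSE_TIMEOUT and
CMA_MAX_CM_RETRIES constants in cma.c.

#include <linux/errno.h>
#include <linux/sysctl.h>
#include <net/net_namespace.h>

static int cma_cm_response_timeout = 20;        /* CMA_CM_RESPONSE_TIMEOUT */
static int cma_max_cm_retries = 15;             /* CMA_MAX_CM_RETRIES */

static struct ctl_table cma_ctl_table[] = {
        {
                .procname       = "cma_cm_response_timeout",
                .data           = &cma_cm_response_timeout,
                .maxlen         = sizeof(int),
                .mode           = 0644,
                .proc_handler   = proc_dointvec,
        },
        {
                .procname       = "cma_max_cm_retries",
                .data           = &cma_max_cm_retries,
                .maxlen         = sizeof(int),
                .mode           = 0644,
                .proc_handler   = proc_dointvec,
        },
        { }
};

static struct ctl_table_header *cma_ctl_hdr;

static int cma_sysctl_init(void)
{
        /* v2 pairs this with unregister_net_sysctl_table() in cma_cleanup() */
        cma_ctl_hdr = register_net_sysctl(&init_net, "net/rdma_cm", cma_ctl_table);
        return cma_ctl_hdr ? 0 : -ENOMEM;
}

Given that (hypothetical) path, the knobs would show up under
/proc/sys/net/rdma_cm/ and be tunable with sysctl(8).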
> >
> > This has been sitting on patchworks since forever. Presumably because
> > Jason and I neither one felt like we really wanted it, but also
> > couldn't justify flat refusing it.
>
> I thought the agreement was to use NL and iproute2. But I haven't had
> the capacity.
To be fair, the email thread was gone from my linux-rdma folder. So, I
just had to review the entry in patchworks, and there was no captured
discussion there. So, if the agreement was made, it must have been
face to face some time, and if I was involved, I had certainly forgotten
by now. But I still needed to clean up patchworks, hence my email ;-).
> > Well, I've made up my mind, so unless Jason wants to argue the other
> > side, I'm rejecting this patch. Here's why. The whole concept of a
> > timeout is to help recovery in a situation that overloads one end of
> > the connection. There is a relationship between the max queue backlog
> > on the one host and the timeout on the other host.
>
> If you refer to the backlog parameter in rdma_listen(), I cannot see
> it being used at all for IB.
No, not exactly. I was more referring to heavy load causing an
overflow in the MAD packet receive processing. We have
IB_MAD_QP_RECV_SIZE set to 512 by default, but it can be changed at
module load time of the ib_core module, and that represents the maximum
number of backlogged MAD packets we can have waiting to be processed
before we just drop them on the floor. There can be other places to
drop them too, but this is the one I was referring to.
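
Paraphrasing the relevant bit of the tree from memory (so treat the
exact names as approximate), the knob looks roughly like this; the point
is just that 512 is only a default and can be raised at module load
time, e.g. "modprobe ib_core recv_queue_size=2048":

#include <linux/module.h>
#include <linux/moduleparam.h>

#define IB_MAD_QP_RECV_SIZE 512 /* default backlog of MADs awaiting processing */

static int mad_recvq_size = IB_MAD_QP_RECV_SIZE;
module_param_named(recv_queue_size, mad_recvq_size, int, 0444);
MODULE_PARM_DESC(recv_queue_size, "Size of receive queue in number of work requests");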
> For CX-3, which is paravirtualized wrt. MAD packets, it is the proxy
> UD receive queue length for the PF driver that can be construed as a
> backlog. Remember that any MAD packet being sent from a VF or the PF
> itself, is sent to a proxy UD QP in the PF. Those packets are then
> multiplexed out on the real QP0/1. Incoming MAD packets are
> demultiplexed and sent once more to the proxy QP in the VF.
>
> > Generally, in order for a request to get dropped and us to need to
> > retransmit, the queue must already have a full backlog. So, how long
> > does it take a heavily loaded system to process a full backlog? That,
> > plus a fuzz for a margin of error, should be our timeout. We
> > shouldn't be asking users to configure it.
>
> Customers configure the number of VMs, and different workloads may lead
> to vastly different numbers of CM connections. The proxying of MAD
> packets through the PF driver has a finite packet rate. With 64 VMs,
> 10,000 QPs on each, all going down due to a switch failing or similar,
> you have 640,000 DREQs to be sent, and with the finite rate of MAD
> packets through the PF, this takes longer than the current CM timeout.
> And then you re-transmit and increase the burden of the PF proxying.
>
> So, we can change the default to cope with this. But a MAD packet is
> unreliable; we may have transient loss. In that case, we want a short
> timeout.
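
To put rough numbers on that scenario: the PF proxy rate below is an
assumption for illustration, not a measurement, but the
4.096 us * 2^timeout encoding and the default exponent of 20 come from
cma.c:

#include <stdio.h>

int main(void)
{
        long dreqs = 64L * 10000L;              /* 64 VMs * 10,000 QPs = 640,000 DREQs */
        double pf_mad_rate = 20000.0;           /* assumed MADs/sec through the PF proxy */
        /* cma.c: CMA_CM_RESPONSE_TIMEOUT is 20, and the CM encoding means
         * 4.096 us * 2^20, i.e. roughly 4.3 seconds per retry. */
        double per_retry_timeout = 4.096e-6 * (double)(1UL << 20);

        printf("burst drain: %.0f s  per-retry CM timeout: %.1f s\n",
               dreqs / pf_mad_rate, per_retry_timeout);
        return 0;
}

Under those assumptions the burst takes ~32 s to drain against a ~4.3 s
per-retry timeout, so every outstanding DREQ gets retransmitted several
times, which is exactly the extra burden on the PF you describe.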
>
> > However, if users change the default backlog queue on their systems,
> > *then* it would make sense to have the users also change the timeout
> > here, but I think guidance would be helpful.
> >
> > So, to revive this patch, what I'd like to see is some attempt to
> > actually quantify a reasonable timeout for the default backlog depth,
> > then the patch should actually change the default to that reasonable
> > timeout, and then put in the ability to adjust the timeout with some
> > sort of doc guidance on how to calculate a reasonable timeout based on
> > configured backlog depth.
>
> I can agree to this :-)
>
>
> Thxs, Håkon
>
> > --
> > Doug Ledford <dledford@...hat.com>
> > GPG KeyID: B826A3330E572FDD
> > Key fingerprint = AE6B 1BDA 122B 23B4 265B 1274 B826 A333 0E57
> > 2FDD
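
For the doc guidance piece, what I have in mind is basically mapping a
measured (or estimated) worst-case backlog drain time, plus margin, into
the CM timeout exponent, since the wire format encodes
4.096 us * 2^timeout. An illustrative helper (not code from the patch):

#include <math.h>
#include <stdio.h>

/* Smallest 5-bit exponent t such that 4.096us * 2^t >= target_seconds. */
static int cm_timeout_exponent(double target_seconds)
{
        int t = (int)ceil(log2(target_seconds / 4.096e-6));

        if (t < 0)
                t = 0;
        if (t > 31)
                t = 31;         /* the CM timeout field is 5 bits wide */
        return t;
}

int main(void)
{
        /* e.g. an estimated 32 s worst-case drain time plus some margin */
        printf("suggested timeout exponent: %d\n", cm_timeout_exponent(40.0));
        return 0;
}

Feeding in, say, a 32 s drain estimate plus margin yields an exponent
around 24, which is the sort of number the guidance could suggest for a
heavily loaded system at the default backlog depth.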
--
Doug Ledford <dledford@...hat.com>
GPG KeyID: B826A3330E572FDD
Key fingerprint = AE6B 1BDA 122B 23B4 265B 1274 B826 A333 0E57
2FDD