Message-ID: <BLUPR0501MB836640F5ECEAB3A99CD82BFC5F00@BLUPR0501MB836.namprd05.prod.outlook.com>
Date: Thu, 15 Sep 2016 00:07:29 +0000
From: Adit Ranadive <aditr@...are.com>
To: Yuval Shaia <yuval.shaia@...cle.com>
CC: "dledford@...hat.com" <dledford@...hat.com>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
pv-drivers <pv-drivers@...are.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"Jorgen S. Hansen" <jhansen@...are.com>,
Aditya Sarwade <asarwade@...are.com>,
George Zhang <georgezhang@...are.com>,
Bryan Tan <bryantan@...are.com>
Subject: Re: [PATCH v4 09/16] IB/pvrdma: Add support for Completion Queues
On Wed, Sep 14, 2016 at 05:43:37 -0700, Yuval Shaia wrote:
> On Sun, Sep 11, 2016 at 09:49:19PM -0700, Adit Ranadive wrote:
> > +
> > +static int pvrdma_poll_one(struct pvrdma_cq *cq, struct pvrdma_qp **cur_qp,
> > +			   struct ib_wc *wc)
> > +{
> > +	struct pvrdma_dev *dev = to_vdev(cq->ibcq.device);
> > +	int has_data;
> > +	unsigned int head;
> > +	bool tried = false;
> > +	struct pvrdma_cqe *cqe;
> > +
> > +retry:
> > +	has_data = pvrdma_idx_ring_has_data(&cq->ring_state->rx,
> > +					    cq->ibcq.cqe, &head);
> > +	if (has_data == 0) {
> > +		if (tried)
> > +			return -EAGAIN;
> > +
> > +		/* Pass down POLL to give physical HCA a chance to poll. */
> > +		pvrdma_write_uar_cq(dev, cq->cq_handle | PVRDMA_UAR_CQ_POLL);
> > +
> > +		tried = true;
> > +		goto retry;
> > +	} else if (has_data == PVRDMA_INVALID_IDX) {
>
> I didn't go through the entire life cycle of the RX ring's head and tail, but you
> need to make sure that the PVRDMA_INVALID_IDX error is a recoverable one, i.e.
> there is a probability that the next call to pvrdma_poll_one will be fine.
> Otherwise it is an endless loop.
We have never run into this issue internally, but I don't think we can recover here
in the driver. The only way to recover would be to destroy and recreate the CQ,
which we shouldn't do since it could be in use by multiple QPs.
We don't have a way yet to recover in the device. Once we add that, this check
should go away.
The reason I returned an error value from poll_cq in v3 was to break the possible
loop and give clients a chance to recover. But since poll_cq is not expected to
fail, I just log the device error here. I can revert to that version if you want the
loop broken.
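
To make the two options concrete, the difference in the PVRDMA_INVALID_IDX
branch is roughly the following (a sketch only, not the exact patch code; the
error value and message text here are illustrative):

	} else if (has_data == PVRDMA_INVALID_IDX) {
		/* v4-style: the ring state is corrupt and the driver
		 * cannot repair it, so only log the device error and
		 * keep poll_cq itself from failing.
		 */
		dev_err(&dev->pdev->dev, "CQ ring state invalid\n");
	}

versus the v3-style handling:

	} else if (has_data == PVRDMA_INVALID_IDX) {
		/* Report the corrupt ring state to the caller so its
		 * polling loop terminates instead of spinning on a CQ
		 * that can never make progress again.
		 */
		dev_err(&dev->pdev->dev, "CQ ring state invalid\n");
		return -EAGAIN;
	}

The second variant is what breaks the possible loop on the client side, at the
cost of poll_cq reporting a failure it is not really expected to report.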