Message-ID: <20151123200136.GA5640@obsidianresearch.com>
Date: Mon, 23 Nov 2015 13:01:36 -0700
From: Jason Gunthorpe <jgunthorpe@...idianresearch.com>
To: Christoph Hellwig <hch@....de>
Cc: linux-rdma@...r.kernel.org, sagig@....mellanox.co.il,
bart.vanassche@...disk.com, axboe@...com,
linux-scsi@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/9] IB: add a proper completion queue abstraction
On Sat, Nov 14, 2015 at 08:08:49AM +0100, Christoph Hellwig wrote:
> On Fri, Nov 13, 2015 at 11:25:13AM -0700, Jason Gunthorpe wrote:
> > For instance, like this, not fully draining the CQ and then doing:
> >
> > > + completed = __ib_process_cq(cq, budget);
> > > + if (completed < budget) {
> > > + irq_poll_complete(&cq->iop);
> > > + if (ib_req_notify_cq(cq, IB_POLL_FLAGS) > 0) {
> >
> > Doesn't seem entirely right? There is no point in calling
> > ib_req_notify_cq if the code knows there is still stuff in the CQ and
> > has already, independently, arranged for ib_poll_handler to be
> > guaranteed called.
>
> The code only calls ib_req_notify_cq if it knows we finished earlier than
> our budget.
Okay, having now read the whole thing, I think I see the flow now. I don't
see any holes in the above, other than it does a bit more work
than it needs to in some edge cases because it doesn't know whether the CQ
is actually empty or not.
> > > + completed = __ib_process_cq(cq, IB_POLL_BUDGET_WORKQUEUE);
> > > + if (completed >= IB_POLL_BUDGET_WORKQUEUE ||
> > > + ib_req_notify_cq(cq, IB_POLL_FLAGS) > 0)
> > > + queue_work(ib_comp_wq, &cq->work);
> >
> > Same comment here..
>
> Same here - we only requeue the work item if either we processed all of
> our budget, or ib_req_notify_cq with IB_CQ_REPORT_MISSED_EVENTS told
> us that we need to poll again.
I find the if construction hard to read, but yes, it looks OK.
Jason