Message-Id: <20220304091719.48340-1-dust.li@linux.alibaba.com>
Date: Fri, 4 Mar 2022 17:17:19 +0800
From: Dust Li <dust.li@...ux.alibaba.com>
To: Karsten Graul <kgraul@...ux.ibm.com>, davem@...emloft.net,
kuba@...nel.org
Cc: Guangguan Wang <guangguan.wang@...ux.alibaba.com>,
Leon Romanovsky <leon@...nel.org>, netdev@...r.kernel.org,
linux-s390@...r.kernel.org, linux-rdma@...r.kernel.org
Subject: [PATCH net-next] Revert "net/smc: don't req_notify until all CQEs drained"
This reverts commit a505cce6f7cfaf2aa2385aab7286063c96444526.
Leon says:
We already discussed that. SMC should be changed to use the
RDMA CQ pool API in drivers/infiniband/core/cq.c.
ib_poll_handler() has a much better implementation (tracing,
IRQ rescheduling, proper error handling) than this SMC variant.
Since we will switch to ib_poll_handler() in the future,
revert this patch.
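
For context, a rough sketch of what the switch to the core's shared
CQ pool and per-WR completion handlers might look like on the send
side. This is illustrative only and not part of this patch; names
like smc_wr_tx_done() and smc_wr_tx_cqe are hypothetical, while
ib_cq_pool_get()/ib_cq_pool_put(), struct ib_cqe, wr_cqe and
smc_wr_tx_process_cqe() are existing kernel symbols:

	/* Hypothetical sketch: completion handler invoked by the
	 * core's ib_poll_handler() once per polled CQE.
	 */
	static void smc_wr_tx_done(struct ib_cq *cq, struct ib_wc *wc)
	{
		smc_wr_tx_process_cqe(wc);	/* existing SMC handler */
	}

	static struct ib_cqe smc_wr_tx_cqe = { .done = smc_wr_tx_done };

	/* at setup: take a shared CQ from the device-wide pool,
	 * polled in softirq context by the RDMA core
	 */
	cq = ib_cq_pool_get(ibdev, SMC_WR_BUF_CNT, -1, IB_POLL_SOFTIRQ);

	/* each posted send WR names its handler via wr_cqe */
	wr->wr_cqe = &smc_wr_tx_cqe;

	/* at teardown: return the CQ to the pool */
	ib_cq_pool_put(cq, SMC_WR_BUF_CNT);

With that in place, the SMC tasklets and the polled/ib_req_notify_cq()
logic below would go away entirely.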
Link: https://lore.kernel.org/netdev/20220301105332.GA9417@linux.alibaba.com/
Suggested-by: Leon Romanovsky <leon@...nel.org>
Suggested-by: Karsten Graul <kgraul@...ux.ibm.com>
Signed-off-by: Dust Li <dust.li@...ux.alibaba.com>
---
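A note for reviewers: the comments removed below document the
IB_CQ_REPORT_MISSED_EVENTS contract, which is easy to miss. A
condensed illustration of that drain/re-arm pattern (generic
sketch, not SMC code; handle_one_completion() is hypothetical):

	static void drain_and_rearm(struct ib_cq *cq)
	{
		struct ib_wc wc[SMC_WR_MAX_POLL_CQE];
		int i, rc;

	again:
		/* drain everything currently in the CQ */
		while ((rc = ib_poll_cq(cq, SMC_WR_MAX_POLL_CQE, wc)) > 0)
			for (i = 0; i < rc; i++)
				handle_one_completion(&wc[i]);

		/* re-arm; a non-zero return means completions may have
		 * slipped in after the last poll, so poll once more
		 */
		if (ib_req_notify_cq(cq, IB_CQ_NEXT_COMP |
					 IB_CQ_REPORT_MISSED_EVENTS))
			goto again;
	}

The code being restored instead re-arms unconditionally on the first
pass (polled == 1) and always makes a second pass, which is the
pattern SMC used before the reverted commit.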
net/smc/smc_wr.c | 49 +++++++++++++++++++++---------------------------
1 file changed, 21 insertions(+), 28 deletions(-)
diff --git a/net/smc/smc_wr.c b/net/smc/smc_wr.c
index 34d616406d51..24be1d03fef9 100644
--- a/net/smc/smc_wr.c
+++ b/net/smc/smc_wr.c
@@ -137,28 +137,25 @@ static void smc_wr_tx_tasklet_fn(struct tasklet_struct *t)
 {
 	struct smc_ib_device *dev = from_tasklet(dev, t, send_tasklet);
 	struct ib_wc wc[SMC_WR_MAX_POLL_CQE];
-	int i, rc;
+	int i = 0, rc;
+	int polled = 0;
 
 again:
+	polled++;
 	do {
 		memset(&wc, 0, sizeof(wc));
 		rc = ib_poll_cq(dev->roce_cq_send, SMC_WR_MAX_POLL_CQE, wc);
+		if (polled == 1) {
+			ib_req_notify_cq(dev->roce_cq_send,
+					 IB_CQ_NEXT_COMP |
+					 IB_CQ_REPORT_MISSED_EVENTS);
+		}
+		if (!rc)
+			break;
 		for (i = 0; i < rc; i++)
 			smc_wr_tx_process_cqe(&wc[i]);
-		if (rc < SMC_WR_MAX_POLL_CQE)
-			/* If < SMC_WR_MAX_POLL_CQE, the CQ should have been
-			 * drained, no need to poll again. --Guangguan Wang
-			 */
-			break;
 	} while (rc > 0);
-
-	/* IB_CQ_REPORT_MISSED_EVENTS make sure if ib_req_notify_cq() returns
-	 * 0, it is safe to wait for the next event.
-	 * Else we must poll the CQ again to make sure we won't miss any event
-	 */
-	if (ib_req_notify_cq(dev->roce_cq_send,
-			     IB_CQ_NEXT_COMP |
-			     IB_CQ_REPORT_MISSED_EVENTS))
+	if (polled == 1)
 		goto again;
 }
 
@@ -481,28 +478,24 @@ static void smc_wr_rx_tasklet_fn(struct tasklet_struct *t)
 {
 	struct smc_ib_device *dev = from_tasklet(dev, t, recv_tasklet);
 	struct ib_wc wc[SMC_WR_MAX_POLL_CQE];
+	int polled = 0;
 	int rc;
 
 again:
+	polled++;
 	do {
 		memset(&wc, 0, sizeof(wc));
 		rc = ib_poll_cq(dev->roce_cq_recv, SMC_WR_MAX_POLL_CQE, wc);
-		if (rc > 0)
-			smc_wr_rx_process_cqes(&wc[0], rc);
-		if (rc < SMC_WR_MAX_POLL_CQE)
-			/* If < SMC_WR_MAX_POLL_CQE, the CQ should have been
-			 * drained, no need to poll again. --Guangguan Wang
-			 */
+		if (polled == 1) {
+			ib_req_notify_cq(dev->roce_cq_recv,
+					 IB_CQ_SOLICITED_MASK
+					 | IB_CQ_REPORT_MISSED_EVENTS);
+		}
+		if (!rc)
 			break;
+		smc_wr_rx_process_cqes(&wc[0], rc);
 	} while (rc > 0);
-
-	/* IB_CQ_REPORT_MISSED_EVENTS make sure if ib_req_notify_cq() returns
-	 * 0, it is safe to wait for the next event.
-	 * Else we must poll the CQ again to make sure we won't miss any event
-	 */
-	if (ib_req_notify_cq(dev->roce_cq_recv,
-			     IB_CQ_SOLICITED_MASK |
-			     IB_CQ_REPORT_MISSED_EVENTS))
+	if (polled == 1)
 		goto again;
 }
 
--
2.19.1.3.ge56e4f7