Message-Id: <20201103203256.087374718@linuxfoundation.org>
Date: Tue, 3 Nov 2020 21:35:09 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Chuck Lever <chuck.lever@...cle.com>,
Anna Schumaker <Anna.Schumaker@...app.com>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 5.4 061/214] SUNRPC: Mitigate cond_resched() in xprt_transmit()

From: Chuck Lever <chuck.lever@...cle.com>

[ Upstream commit 6f9f17287e78e5049931af2037b15b26d134a32a ]

The original purpose of this expensive call is to prevent a long
queue of requests from blocking other work.

The cond_resched() call is unnecessary after just a single send
operation.

For longer queues, instead of invoking the kernel scheduler, simply
release the transport send lock and return to the RPC scheduler.

Signed-off-by: Chuck Lever <chuck.lever@...cle.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@...app.com>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
 net/sunrpc/xprt.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index 41df4c507193b..a6fee86f400ec 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -1503,10 +1503,13 @@ xprt_transmit(struct rpc_task *task)
 {
 	struct rpc_rqst *next, *req = task->tk_rqstp;
 	struct rpc_xprt *xprt = req->rq_xprt;
-	int status;
+	int counter, status;
 
 	spin_lock(&xprt->queue_lock);
+	counter = 0;
 	while (!list_empty(&xprt->xmit_queue)) {
+		if (++counter == 20)
+			break;
 		next = list_first_entry(&xprt->xmit_queue,
 				struct rpc_rqst, rq_xmit);
 		xprt_pin_rqst(next);
@@ -1514,7 +1517,6 @@ xprt_transmit(struct rpc_task *task)
 		status = xprt_request_transmit(next, task);
 		if (status == -EBADMSG && next != req)
 			status = 0;
-		cond_resched();
 		spin_lock(&xprt->queue_lock);
 		xprt_unpin_rqst(next);
 		if (status == 0) {
--
2.27.0
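
For illustration, below is a minimal userspace C sketch of the batching
pattern the patch adopts: drain at most a fixed number of queued items per
pass, drop the lock around the slow per-item work, and return to the caller
so it can schedule another pass, rather than calling cond_resched() inside
the loop. The struct queue, struct item, and drain_batch() names are
hypothetical and exist only for this example; it uses pthreads instead of
kernel primitives and is not the actual xprt_transmit() code. The bound of
20 echoes the counter in the patch, though the exact off-by-one behaviour
of the kernel loop is not reproduced here.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct item {
	struct item *next;
	int id;
};

struct queue {
	pthread_mutex_t lock;
	struct item *head;
};

/* Pop at most 'budget' items per call, doing the per-item work with the
 * lock dropped.  Returns true if items remain, i.e. the caller should
 * arrange another pass instead of looping here indefinitely. */
static bool drain_batch(struct queue *q, int budget)
{
	int counter = 0;
	bool more;

	pthread_mutex_lock(&q->lock);
	while (q->head != NULL) {
		if (++counter > budget)
			break;			/* bounded work per pass */

		struct item *it = q->head;
		q->head = it->next;

		/* Drop the lock around the (potentially slow) per-item
		 * work, mirroring how xprt_transmit() drops queue_lock
		 * around xprt_request_transmit(). */
		pthread_mutex_unlock(&q->lock);
		printf("sent item %d\n", it->id);
		pthread_mutex_lock(&q->lock);
	}
	more = (q->head != NULL);
	pthread_mutex_unlock(&q->lock);
	return more;
}

int main(void)
{
	struct queue q = { .lock = PTHREAD_MUTEX_INITIALIZER, .head = NULL };
	struct item items[50];
	int passes = 1;

	/* Queue 50 items, id 0 at the head. */
	for (int i = 49; i >= 0; i--) {
		items[i].id = i;
		items[i].next = q.head;
		q.head = &items[i];
	}

	/* Each pass handles at most 20 items, so 50 items take 3 passes. */
	while (drain_batch(&q, 20))
		passes++;
	printf("queue drained in %d passes\n", passes);
	return 0;
}

Compiled with a plain C compiler plus -pthread, the sketch drains the
50-entry queue in three bounded passes, which is the behaviour the patch
wants: no single holder of the transmit path monopolises it, and no
explicit yield is needed inside the loop.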