Message-ID: <20251022191715.157755-4-achender@kernel.org>
Date: Wed, 22 Oct 2025 12:17:03 -0700
From: Allison Henderson <achender@...nel.org>
To: netdev@...r.kernel.org
Subject: [RFC 03/15] net/rds: Change return code from rds_send_xmit() when lock is taken
From: Håkon Bugge <haakon.bugge@...cle.com>
Change the return code from rds_send_xmit() from -ENOMEM to -EBUSY when
it is unable to acquire the RDS_IN_XMIT lock-bit. This avoids
re-queuing of rds_send_worker() when another thread is already
executing rds_send_xmit().
Performance is improved by 2% running rds-stress with the following
parameters: "-t 16 -d 32 -q 64 -a 64 -o". The test was run five times,
each time running for one minute, and the arithmetic average of the tx
IOPS was used as performance metric.
Send lock contention was reduced by 6.5%, and the incidence of the
ib_tx_ring_full condition more than doubled, indicating an improved
ability to fill the transmit ring.
Signed-off-by: Håkon Bugge <haakon.bugge@...cle.com>
Signed-off-by: Allison Henderson <allison.henderson@...cle.com>
---
net/rds/send.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/rds/send.c b/net/rds/send.c
index ed8d84a74c34..0ff100dcc7f5 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -158,7 +158,7 @@ int rds_send_xmit(struct rds_conn_path *cp)
*/
if (!acquire_in_xmit(cp)) {
rds_stats_inc(s_send_lock_contention);
- ret = -ENOMEM;
+ ret = -EBUSY;
goto out;
}
@@ -1374,7 +1374,7 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
rds_stats_inc(s_send_queued);
ret = rds_send_xmit(cpath);
- if (ret == -ENOMEM || ret == -EAGAIN) {
+ if (ret == -ENOMEM || ret == -EAGAIN || ret == -EBUSY) {
ret = 0;
rcu_read_lock();
if (rds_destroy_pending(cpath->cp_conn))
--
2.43.0