Message-ID: <20251022191715.157755-3-achender@kernel.org>
Date: Wed, 22 Oct 2025 12:17:02 -0700
From: Allison Henderson <achender@...nel.org>
To: netdev@...r.kernel.org
Subject: [RFC 02/15] net/rds: Give each connection its own workqueue
From: Håkon Bugge <haakon.bugge@...cle.com>
RDS was written to require ordered workqueues for "cp->cp_wq":
Work is executed in the order scheduled, one item at a time.
If these workqueues are shared across connections,
then work executed on behalf of one connection blocks work
scheduled for a different and unrelated connection.
Luckily we don't need to share these workqueues.
While it obviously makes sense to limit the number of
workers (processes) that get allocated on a system,
a workqueue that doesn't have a rescue worker attached
has a tiny footprint compared to the connection as a whole:
a workqueue costs ~800 bytes, while an RDS/IB connection
totals ~5 MBytes.
So we're getting a significant performance gain
(90% of connections fail over in under 3 seconds vs. 40%)
for less than 0.02% overhead.
RDS doesn't even benefit from the additional rescue workers:
of all the reasons that RDS blocks workers, allocation under
memory pressure is the least of our concerns.
And even if RDS were stalling due to the memory-reclaim process,
the work executed by the rescue workers is highly unlikely
to free up any memory.
If anything, they might try to allocate even more.
By giving each connection its own workqueues, we allow RDS
to better utilize the unbound workers that the system
has available.
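For illustration only (not part of the change itself), here is a
minimal, self-contained sketch of the ordering semantics involved;
the conn_a_work()/conn_b_work() handlers and the workqueue names are
made up and merely stand in for per-connection path work:

  #include <linux/errno.h>
  #include <linux/workqueue.h>

  /* Hypothetical handlers standing in for RDS connection-path work. */
  static void conn_a_work(struct work_struct *w) { }
  static void conn_b_work(struct work_struct *w) { }

  static DECLARE_WORK(a_w, conn_a_work);
  static DECLARE_WORK(b_w, conn_b_work);

  static int ordered_wq_sketch(void)
  {
          /*
           * An ordered workqueue runs its items one at a time, in the
           * order they were queued.  With a single shared ordered queue,
           * b_w cannot start until a_w has finished, even though the two
           * connections are unrelated.  With one queue per connection,
           * per-connection ordering is preserved, but a_w and b_w may
           * run concurrently on the system's unbound workers.
           */
          struct workqueue_struct *wq_a = alloc_ordered_workqueue("conn_a_wq", 0);
          struct workqueue_struct *wq_b = alloc_ordered_workqueue("conn_b_wq", 0);

          if (!wq_a || !wq_b)
                  goto out;

          queue_work(wq_a, &a_w);
          queue_work(wq_b, &b_w);
  out:
          /* destroy_workqueue() drains any queued work before freeing. */
          if (wq_a)
                  destroy_workqueue(wq_a);
          if (wq_b)
                  destroy_workqueue(wq_b);
          return (wq_a && wq_b) ? 0 : -ENOMEM;
  }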
Signed-off-by: Gerd Rausch <gerd.rausch@...cle.com>
Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@...cle.com>
Signed-off-by: Håkon Bugge <haakon.bugge@...cle.com>
Signed-off-by: Allison Henderson <allison.henderson@...cle.com>
---
net/rds/connection.c | 12 +++++++++++-
net/rds/ib.c | 5 +++++
net/rds/rds.h | 1 +
net/rds/threads.c | 1 +
4 files changed, 18 insertions(+), 1 deletion(-)
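Reviewer note, not part of the patch: the existing call sites already
queue path work via cp->cp_wq, so they need no change; after this patch
that work simply lands on the path's own ordered queue.  A minimal
sketch of the queueing pattern, assuming the existing cp_send_w delayed
work in struct rds_conn_path:

  /* Send work for this path is serialized on its own ordered
   * workqueue and can no longer be head-of-line blocked by work
   * queued for an unrelated connection.
   */
  queue_delayed_work(cp->cp_wq, &cp->cp_send_w, 0);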
diff --git a/net/rds/connection.c b/net/rds/connection.c
index dc7323707f45..ac555f02c045 100644
--- a/net/rds/connection.c
+++ b/net/rds/connection.c
@@ -269,7 +269,14 @@ static struct rds_connection *__rds_conn_create(struct net *net,
__rds_conn_path_init(conn, &conn->c_path[i],
is_outgoing);
conn->c_path[i].cp_index = i;
- conn->c_path[i].cp_wq = rds_wq;
+ conn->c_path[i].cp_wq = alloc_ordered_workqueue("krds_cp_wq#%lu/%d", 0,
+ rds_conn_count, i);
+ if (!conn->c_path[i].cp_wq) {
+ while (--i >= 0)
+ destroy_workqueue(conn->c_path[i].cp_wq);
+ conn = ERR_PTR(-ENOMEM);
+ goto out;
+ }
}
rcu_read_lock();
if (rds_destroy_pending(conn))
@@ -471,6 +478,9 @@ static void rds_conn_path_destroy(struct rds_conn_path *cp)
WARN_ON(work_pending(&cp->cp_down_w));
cp->cp_conn->c_trans->conn_free(cp->cp_transport_data);
+
+ destroy_workqueue(cp->cp_wq);
+ cp->cp_wq = NULL;
}
/*
diff --git a/net/rds/ib.c b/net/rds/ib.c
index 9826fe7f9d00..6694d31e6cfd 100644
--- a/net/rds/ib.c
+++ b/net/rds/ib.c
@@ -491,9 +491,14 @@ static int rds_ib_laddr_check(struct net *net, const struct in6_addr *addr,
static void rds_ib_unregister_client(void)
{
+ int i;
+
ib_unregister_client(&rds_ib_client);
/* wait for rds_ib_dev_free() to complete */
flush_workqueue(rds_wq);
+
+ for (i = 0; i < RDS_NMBR_CP_WQS; ++i)
+ flush_workqueue(rds_cp_wqs[i]);
}
static void rds_ib_set_unloading(void)
diff --git a/net/rds/rds.h b/net/rds/rds.h
index 11fa304f2164..3b7ac773208b 100644
--- a/net/rds/rds.h
+++ b/net/rds/rds.h
@@ -40,6 +40,7 @@
#ifdef ATOMIC64_INIT
#define KERNEL_HAS_ATOMIC64
#endif
+
#ifdef RDS_DEBUG
#define rdsdebug(fmt, args...) pr_debug("%s(): " fmt, __func__ , ##args)
#else
diff --git a/net/rds/threads.c b/net/rds/threads.c
index 639302bab51e..956811f8f764 100644
--- a/net/rds/threads.c
+++ b/net/rds/threads.c
@@ -33,6 +33,7 @@
#include <linux/kernel.h>
#include <linux/random.h>
#include <linux/export.h>
+#include <linux/workqueue.h>
#include "rds.h"
--
2.43.0