Message-Id: <20231123024510.2037882-1-xu.xin.sc@gmail.com>
Date: Thu, 23 Nov 2023 02:45:10 +0000
From: xu.xin.sc@...il.com
To: jmaloy@...hat.com, ying.xue@...driver.com, davem@...emloft.net
Cc: netdev@...r.kernel.org, tipc-discussion@...ts.sourceforge.net,
linux-kernel@...r.kernel.org, xu.xin16@....com.cn,
xu xin <xu.xin.sc@...il.com>
Subject: [RFC PATCH] net/tipc: reduce tipc_node lock holding time in tipc_rcv
From: xu xin <xu.xin.sc@...il.com>
Background
==========
As we know, TIPC does not currently support RPS balancing based on the
destination port of a TIPC skb. The basic reason is the increased
contention on the node lock when packets from the same link are
distributed to different CPUs for processing, as mentioned in [1].
Questions to discuss
====================
Does tipc_link_rcv() really need to hold the tipc_node's read or write
lock? I looked through the procedure code of tipc_link_rcv(), but I did
not find a reason why it needs the lock. If tipc_link_rcv() does need
it, can anyone tell me why, and whether we can reduce its holding time
further?
Advantage
=========
If tipc_link_rcv() does not need the lock, then with this patch applied,
enabling RPS based on the destination port (my experimental code) can
increase TIPC throughput by approximately 25% (on a 4-core CPU).
[1] commit 08bfc9cb76e2 ("flow_dissector: add tipc support")
Signed-off-by: xu xin <xu.xin.sc@...il.com>
---
net/tipc/node.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/net/tipc/node.c b/net/tipc/node.c
index 3105abe97bb9..2a036b8a7da3 100644
--- a/net/tipc/node.c
+++ b/net/tipc/node.c
@@ -2154,14 +2154,15 @@ void tipc_rcv(struct net *net, struct sk_buff *skb, struct tipc_bearer *b)
 	/* Receive packet directly if conditions permit */
 	tipc_node_read_lock(n);
 	if (likely((n->state == SELF_UP_PEER_UP) && (usr != TUNNEL_PROTOCOL))) {
+		tipc_node_read_unlock(n);
 		spin_lock_bh(&le->lock);
 		if (le->link) {
 			rc = tipc_link_rcv(le->link, skb, &xmitq);
 			skb = NULL;
 		}
 		spin_unlock_bh(&le->lock);
-	}
-	tipc_node_read_unlock(n);
+	} else
+		tipc_node_read_unlock(n);
 
 	/* Check/update node state before receiving */
 	if (unlikely(skb)) {
@@ -2169,12 +2170,13 @@ void tipc_rcv(struct net *net, struct sk_buff *skb, struct tipc_bearer *b)
 			goto out_node_put;
 		tipc_node_write_lock(n);
 		if (tipc_node_check_state(n, skb, bearer_id, &xmitq)) {
+			tipc_node_write_unlock(n);
 			if (le->link) {
 				rc = tipc_link_rcv(le->link, skb, &xmitq);
 				skb = NULL;
 			}
-		}
-		tipc_node_write_unlock(n);
+		} else
+			tipc_node_write_unlock(n);
 	}
 
 	if (unlikely(rc & TIPC_LINK_UP_EVT))
--
2.15.2