Message-Id: <20231124073134.2043605-1-xu.xin16@zte.com.cn>
Date: Fri, 24 Nov 2023 07:31:34 +0000
From: xu <xu.xin.sc@...il.com>
To: tung.q.nguyen@...tech.com.au
Cc: davem@...emloft.net, jmaloy@...hat.com, linux-kernel@...r.kernel.org,
	netdev@...r.kernel.org, tipc-discussion@...ts.sourceforge.net,
	xu.xin.sc@...il.com, xu.xin16@....com.cn, yang.yang29@....com.cn,
	ying.xue@...driver.com, zhang.yunkai@....com.cn
Subject: RE: [RFC PATCH] net/tipc: reduce tipc_node lock holding time in tipc_rcv

>>Happy to see your reply. But why? 'delete' is false from tipc_node_timeout(). Refer to:
>>https://elixir.bootlin.com/linux/v6.7-rc2/source/net/tipc/node.c#L844
>
>I should have explained it clearly:
>
>1/ link status must be protected.
>
>tipc_node_timeout()
> tipc_node_link_down()
> {
>	struct tipc_link *l = le->link;
>
>	...
>	__tipc_node_link_down(); <-- link status is referred.
>	...
>	if (delete) {
>		kfree(l);
>		le->link = NULL;
>	}
>	...
> }
>
>__tipc_node_link_down()
>{
>	...
>	if (!l || tipc_link_is_reset(l)) <-- read link status
>	...
>	tipc_link_reset(l); <--- this function will reset all things related to link.
>}
>
>2/ le->link must be protected.
>
>bearer_disable()
>{
>	...
>	tipc_node_delete_links(net, bearer_id); <--- this will delete all links.
>	...
>}
>
>tipc_node_delete_links()
>{
>	...
>	tipc_node_link_down(n, bearer_id, true);
>	...
>}

Could we solve the problem mentioned above by adding spin_lock_bh(&le->lock) instead? For example, see the diff below. (By the way, I have tested it: with this change, enabling RPS based on the TIPC port improves general throughput by 25%.)

diff --git a/net/tipc/node.c b/net/tipc/node.c
index 3105abe97bb9..470c272d798e 100644
--- a/net/tipc/node.c
+++ b/net/tipc/node.c
@@ -1079,12 +1079,16 @@ static void tipc_node_link_down(struct tipc_node *n, int bearer_id, bool delete)
 		__tipc_node_link_down(n, &bearer_id, &xmitq, &maddr);
 	} else {
 		/* Defuse pending tipc_node_link_up() */
+		spin_lock_bh(&le->lock);
 		tipc_link_reset(l);
+		spin_unlock_bh(&le->lock);
 		tipc_link_fsm_evt(l, LINK_RESET_EVT);
 	}
 	if (delete) {
+		spin_lock_bh(&le->lock);
 		kfree(l);
 		le->link = NULL;
+		spin_unlock_bh(&le->lock);
 		n->link_cnt--;
 	}
 	trace_tipc_node_link_down(n, true, "node link down or deleted!");
@@ -2154,14 +2158,15 @@ void tipc_rcv(struct net *net, struct sk_buff *skb, struct tipc_bearer *b)
 	/* Receive packet directly if conditions permit */
 	tipc_node_read_lock(n);
 	if (likely((n->state == SELF_UP_PEER_UP) && (usr != TUNNEL_PROTOCOL))) {
+		tipc_node_read_unlock(n);
 		spin_lock_bh(&le->lock);
 		if (le->link) {
 			rc = tipc_link_rcv(le->link, skb, &xmitq);
 			skb = NULL;
 		}
 		spin_unlock_bh(&le->lock);
-	}
-	tipc_node_read_unlock(n);
+	} else
+		tipc_node_read_unlock(n);

 	/* Check/update node state before receiving */
 	if (unlikely(skb)) {
@@ -2169,12 +2174,13 @@ void tipc_rcv(struct net *net, struct sk_buff *skb, struct tipc_bearer *b)
 			goto out_node_put;
 		tipc_node_write_lock(n);
 		if (tipc_node_check_state(n, skb, bearer_id, &xmitq)) {
+			tipc_node_write_unlock(n);
 			if (le->link) {
 				rc = tipc_link_rcv(le->link, skb, &xmitq);
 				skb = NULL;
 			}
-		}
-		tipc_node_write_unlock(n);
+		} else
+			tipc_node_write_unlock(n);
 	}

 	if (unlikely(rc & TIPC_LINK_UP_EVT))
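[Editorial sketch] To make the locking pattern under discussion easier to follow, here is a minimal userspace sketch of it, assuming pthread primitives as stand-ins for the kernel's tipc_node read/write lock and the per-entry le->lock; the names (node, link_entry, link_rcv, node_link_delete) are illustrative only, not the real TIPC API. It shows the receive path dropping the node-wide lock before touching the link, so that only the per-entry spinlock serializes it against the delete path that frees the link and clears the pointer, analogous to what the diff above proposes for tipc_rcv() and tipc_node_link_down():

/*
 * Userspace analogue only: pthread_rwlock_t plays the role of the
 * tipc_node lock, pthread_spinlock_t the role of le->lock.  All
 * structure and function names are made up for illustration.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define SELF_UP_PEER_UP 1

struct link {
	int rcv_cnt;			/* stand-in for per-link state */
};

struct link_entry {
	pthread_spinlock_t lock;	/* analogue of le->lock */
	struct link *link;		/* analogue of le->link */
};

struct node {
	pthread_rwlock_t lock;		/* analogue of the tipc_node lock */
	int state;			/* analogue of n->state */
	struct link_entry le;
};

static void link_rcv(struct link *l)
{
	/* placeholder for tipc_link_rcv(): touches only the link itself */
	l->rcv_cnt++;
}

/* Receive path: the node lock is released *before* the per-entry lock
 * is taken, mirroring the proposed change in tipc_rcv(). */
static void node_rcv(struct node *n)
{
	pthread_rwlock_rdlock(&n->lock);
	if (n->state == SELF_UP_PEER_UP) {
		pthread_rwlock_unlock(&n->lock);
		pthread_spin_lock(&n->le.lock);
		if (n->le.link)
			link_rcv(n->le.link);
		pthread_spin_unlock(&n->le.lock);
		return;
	}
	pthread_rwlock_unlock(&n->lock);
}

/* Delete path: freeing the link and clearing the pointer under the same
 * per-entry lock keeps a concurrent node_rcv(), which has already
 * dropped the node lock, from using a stale le.link. */
static void node_link_delete(struct node *n)
{
	pthread_spin_lock(&n->le.lock);
	free(n->le.link);
	n->le.link = NULL;
	pthread_spin_unlock(&n->le.lock);
}

int main(void)
{
	struct node n = { .state = SELF_UP_PEER_UP };

	pthread_rwlock_init(&n.lock, NULL);
	pthread_spin_init(&n.le.lock, PTHREAD_PROCESS_PRIVATE);
	n.le.link = calloc(1, sizeof(*n.le.link));

	node_rcv(&n);		/* processes the link under le.lock only */
	node_link_delete(&n);	/* frees the link under the same lock */
	node_rcv(&n);		/* sees le.link == NULL and does nothing */

	printf("done\n");
	return 0;
}

Built with "cc sketch.c -lpthread". The intent of the pattern, as described in the thread, is that once the node state check has passed, packet processing needs only the per-link le->lock, so under RPS multiple CPUs contend on the finer-grained lock rather than on the node-wide lock.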