Message-ID: <20241113071405.67421-3-alibuda@linux.alibaba.com>
Date: Wed, 13 Nov 2024 15:14:04 +0800
From: "D. Wythe" <alibuda@...ux.alibaba.com>
To: kgraul@...ux.ibm.com,
	wenjia@...ux.ibm.com,
	jaka@...ux.ibm.com,
	wintera@...ux.ibm.com,
	guwen@...ux.alibaba.com
Cc: kuba@...nel.org,
	davem@...emloft.net,
	netdev@...r.kernel.org,
	linux-s390@...r.kernel.org,
	linux-rdma@...r.kernel.org,
	tonylu@...ux.alibaba.com,
	pabeni@...hat.com,
	edumazet@...gle.com
Subject: [PATCH net-next 2/3] net/smc: reduce the scope of the smc_xxx_lgr_pending locks

The smc_xxx_lgr_pending locks aim to serialize the creation of link
groups. However, once a link group already exists, these locks serve
no purpose; worse still, they force incoming connections to be queued
one after another.

As an optimization, once we find that an existing link group can be
reused, we can release the lock immediately. This way, only a
first-contact connection needs to hold the global lock throughout its
entire lifecycle, while non-first-contact connections only need to
hold it until the end of smc_conn_create(). This greatly alleviates
the bottleneck of establishing connections in SMC.
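
For illustration, the early-unlock pattern can be sketched in plain
userspace C. The names below (find_lgr(), create_lgr(),
finish_handshake()) are hypothetical stand-ins for the link-group
lookup, creation and CLC handshake steps, and the pthread mutex
stands in for the smc_xxx_lgr_pending locks; this is a minimal sketch
of the idea, not the kernel code:

/* Minimal userspace sketch of the early-unlock pattern; all names
 * are hypothetical stand-ins, not the actual kernel symbols.
 */
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t lgr_pending = PTHREAD_MUTEX_INITIALIZER;

struct lgr { int refs; };
static struct lgr *existing_lgr;	/* set once a group exists */

static struct lgr *find_lgr(void) { return existing_lgr; }

static struct lgr *create_lgr(void)
{
	existing_lgr = calloc(1, sizeof(*existing_lgr));
	return existing_lgr;
}

static int finish_handshake(struct lgr *lgr) { lgr->refs++; return 0; }

static int conn_create(void)
{
	struct lgr *lgr;
	int rc;

	pthread_mutex_lock(&lgr_pending);
	lgr = find_lgr();
	if (lgr) {
		/* Non-first contact: the group exists, so the global
		 * lock has done its job; drop it before the (slow)
		 * handshake so later connections are not queued
		 * behind this one.
		 */
		pthread_mutex_unlock(&lgr_pending);
		return finish_handshake(lgr);
	}
	/* First contact: keep the lock across creation and handshake
	 * so concurrent connections cannot create duplicate groups.
	 */
	lgr = create_lgr();
	rc = lgr ? finish_handshake(lgr) : -1;
	pthread_mutex_unlock(&lgr_pending);
	return rc;
}

int main(void) { return conn_create(); }

In the patch below, the unlock-on-existing-lgr is done inside
__smc_conn_create(), and only for SMC-R; first-contact connections
keep the lock until the handshake completes, as before.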

Signed-off-by: D. Wythe <alibuda@...ux.alibaba.com>
---
 net/smc/smc_core.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index 500952c2e67b..5559a8218bd9 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -1951,8 +1951,8 @@ static bool smcd_lgr_match(struct smc_link_group *lgr,
 	return true;
 }
 
-/* create a new SMC connection (and a new link group if necessary) */
-int smc_conn_create(struct smc_sock *smc, struct smc_init_info *ini)
+static int __smc_conn_create(struct smc_sock *smc, struct smc_init_info *ini,
+			     bool unlock_with_existed_lgr)
 {
 	struct smc_connection *conn = &smc->conn;
 	struct net *net = sock_net(&smc->sk);
@@ -2026,7 +2026,10 @@ int smc_conn_create(struct smc_sock *smc, struct smc_init_info *ini)
 			smc_lgr_cleanup_early(lgr);
 			goto out;
 		}
+	} else if (unlock_with_existed_lgr) {
+		smc_lgr_pending_unlock(ini);
 	}
+
 	smc_lgr_hold(conn->lgr); /* lgr_put in smc_conn_free() */
 	if (!conn->lgr->is_smcd)
 		smcr_link_hold(conn->lnk); /* link_put in smc_conn_free() */
@@ -2050,6 +2053,16 @@ int smc_conn_create(struct smc_sock *smc, struct smc_init_info *ini)
 	return rc;
 }
 
+/* create a new SMC connection (and a new link group if necessary) */
+int smc_conn_create(struct smc_sock *smc, struct smc_init_info *ini)
+{
+	/* Considering that the path for SMC-D is shorter than that of
+	 * SMC-R, the impact of the global locks is smaller. So, let's
+	 * make no change on SMC-D.
+	 */
+	return __smc_conn_create(smc, ini, !ini->is_smcd);
+}
+
 #define SMCD_DMBE_SIZES		6 /* 0 -> 16KB, 1 -> 32KB, .. 6 -> 1MB */
 #define SMCR_RMBE_SIZES		15 /* 0 -> 16KB, 1 -> 32KB, .. 15 -> 512MB */
 
-- 
2.45.0

