Message-ID: <20250929000001.1752206-1-pasic@linux.ibm.com>
Date: Mon, 29 Sep 2025 01:59:59 +0200
From: Halil Pasic <pasic@...ux.ibm.com>
To: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>,
Jonathan Corbet <corbet@....net>,
"D. Wythe" <alibuda@...ux.alibaba.com>,
Dust Li <dust.li@...ux.alibaba.com>,
Sidraya Jayagond <sidraya@...ux.ibm.com>,
Wenjia Zhang <wenjia@...ux.ibm.com>,
Mahanta Jambigi <mjambigi@...ux.ibm.com>,
Tony Lu <tonylu@...ux.alibaba.com>, Wen Gu <guwen@...ux.alibaba.com>,
Guangguan Wang <guangguan.wang@...ux.alibaba.com>,
Halil Pasic <pasic@...ux.ibm.com>, netdev@...r.kernel.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-rdma@...r.kernel.org, linux-s390@...r.kernel.org
Subject: [PATCH net-next v5 0/2] net/smc: make wr buffer count configurable
The current value of SMC_WR_BUF_CNT is 16, which leads to heavy
contention on the wr_tx_wait workqueue of the SMC-R linkgroup and its
spinlock when many connections are competing for the work request
buffers. Currently up to 256 connections per linkgroup are supported.
To make things worse, when a buffer finally becomes available and
smc_wr_tx_put_slot() signals the linkgroup's wr_tx_wait wq, all the
waiters get woken up because WQ_FLAG_EXCLUSIVE is not used. Most of the
time only a single waiter can proceed, and the rest contend on the
spinlock of the wq just to go back to sleep.
Addressing this by simply bumping SMC_WR_BUF_CNT to 256 was deemed
risky, because the large-ish physically contiguous allocation could fail
and lead to TCP fall-backs. For reference see the discussion thread on
"[PATCH net-next] net/smc: increase SMC_WR_BUF_CNT" (in archive
https://lists.openwall.net/netdev/2024/11/05/186), which concludes with
the agreement to try to come up with something smarter, which is what
this series aims for.
Additionally, if for some reason it is known that heavy contention is
not to be expected, going with something like 256 work request buffers
is wasteful. To address these concerns, make the number of work requests
configurable, and introduce a back-off logic that handles -ENOMEM from
smc_wr_alloc_link_mem() gracefully.
---
Changelog:
---------
v5:
* Added back a code comment about the value of qp_attr.cap.max_send_wr
  after Dust Li's explanation of the comment itself and the logic behind
  it, and removed the paragraph from the commit message that concerned
  the removal of that comment. (Dust Li)
v4: https://lore.kernel.org/netdev/20250927232144.3478161-1-pasic@linux.ibm.com/
v3: https://lore.kernel.org/netdev/20250921214440.325325-1-pasic@linux.ibm.com/
v2: https://lore.kernel.org/netdev/20250908220150.3329433-1-pasic@linux.ibm.com/
v1: https://lore.kernel.org/all/20250904211254.1057445-1-pasic@linux.ibm.com/
Halil Pasic (2):
net/smc: make wr buffer count configurable
net/smc: handle -ENOMEM from smc_wr_alloc_link_mem gracefully
Documentation/networking/smc-sysctl.rst | 40 +++++++++++++++++++++++++
include/net/netns/smc.h | 2 ++
net/smc/smc_core.c | 34 ++++++++++++++-------
net/smc/smc_core.h | 8 +++++
net/smc/smc_ib.c | 10 +++----
net/smc/smc_llc.c | 2 ++
net/smc/smc_sysctl.c | 22 ++++++++++++++
net/smc/smc_sysctl.h | 2 ++
net/smc/smc_wr.c | 31 +++++++++----------
net/smc/smc_wr.h | 2 --
10 files changed, 121 insertions(+), 32 deletions(-)
base-commit: e835faaed2f80ee8652f59a54703edceab04f0d9
--
2.48.1