Message-Id:
<176182801326.3832937.9219593539259854803.git-patchwork-notify@kernel.org>
Date: Thu, 30 Oct 2025 12:40:13 +0000
From: patchwork-bot+netdevbpf@...nel.org
To: Halil Pasic <pasic@...ux.ibm.com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, horms@...nel.org, corbet@....net,
alibuda@...ux.alibaba.com, dust.li@...ux.alibaba.com, sidraya@...ux.ibm.com,
wenjia@...ux.ibm.com, mjambigi@...ux.ibm.com, tonylu@...ux.alibaba.com,
netdev@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-rdma@...r.kernel.org,
linux-s390@...r.kernel.org, guwen@...ux.alibaba.com,
guangguan.wang@...ux.alibaba.com, bagasdotme@...il.com
Subject: Re: [PATCH net-next v6 0/2] net/smc: make wr buffer count
configurable
Hello:
This series was applied to netdev/net-next.git (main)
by Paolo Abeni <pabeni@...hat.com>:
On Mon, 27 Oct 2025 23:48:54 +0100 you wrote:
> The current value of SMC_WR_BUF_CNT is 16, which leads to heavy
> contention on the wr_tx_wait workqueue of the SMC-R linkgroup and its
> spinlock when many connections compete for the work request buffers.
> Currently up to 256 connections per linkgroup are supported.
>
> To make things worse, when a buffer finally becomes available and
> smc_wr_tx_put_slot() signals the linkgroup's wr_tx_wait wq, all the
> waiters get woken up because WQ_FLAG_EXCLUSIVE is not used. Most of
> the time only a single one can proceed, and the rest contend on the
> spinlock of the wq just to go back to sleep.
>
> [...]
Here is the summary with links:
- [net-next,v6,1/2] net/smc: make wr buffer count configurable
https://git.kernel.org/netdev/net-next/c/aef3cdb47bbb
- [net-next,v6,2/2] net/smc: handle -ENOMEM from smc_wr_alloc_link_mem gracefully
https://git.kernel.org/netdev/net-next/c/8f736087e52f
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html