Message-ID: <20241025235839.GD36583@linux.alibaba.com>
Date: Sat, 26 Oct 2024 07:58:39 +0800
From: Dust Li <dust.li@...ux.alibaba.com>
To: Wenjia Zhang <wenjia@...ux.ibm.com>, Wen Gu <guwen@...ux.alibaba.com>,
"D. Wythe" <alibuda@...ux.alibaba.com>,
Tony Lu <tonylu@...ux.alibaba.com>,
David Miller <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>
Cc: netdev@...r.kernel.org, linux-s390@...r.kernel.org,
Heiko Carstens <hca@...ux.ibm.com>,
Jan Karcher <jaka@...ux.ibm.com>, Gerd Bayer <gbayer@...ux.ibm.com>,
Alexandra Winter <wintera@...ux.ibm.com>,
Halil Pasic <pasic@...ux.ibm.com>,
Nils Hoppmann <niho@...ux.ibm.com>,
	Niklas Schnelle <schnelle@...ux.ibm.com>,
Thorsten Winkler <twinkler@...ux.ibm.com>,
Karsten Graul <kgraul@...ux.ibm.com>,
Stefan Raspl <raspl@...ux.ibm.com>
Subject: Re: [PATCH net-next] net/smc: increase SMC_WR_BUF_CNT
On 2024-10-25 09:46:19, Wenjia Zhang wrote:
>From: Halil Pasic <pasic@...ux.ibm.com>
>
>The current value of SMC_WR_BUF_CNT is 16, which leads to heavy
>contention on the wr_tx_wait wait queue of the SMC-R linkgroup and its
>spinlock when many connections compete for a buffer. Currently up to
>256 connections per linkgroup are supported.
>
>To make things worse, when a buffer finally becomes available and
>smc_wr_tx_put_slot() signals the linkgroup's wr_tx_wait wq, all the
>waiters get woken up because WQ_FLAG_EXCLUSIVE is not used. Most of
>the time only a single one can proceed, and the rest contend on the
>spinlock of the wq just to go back to sleep again.
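>
>For illustration, the wait/wake pair in question looks roughly like
>this (paraphrased from net/smc/smc_wr.c, not a verbatim quote):
>
>	/* smc_wr_tx_get_free_slot(): waiters are non-exclusive */
>	rc = wait_event_interruptible_timeout(
>		link->wr_tx_wait,
>		!smc_link_sendable(link) ||
>		link->lgr->terminating ||
>		(smc_wr_tx_get_free_slot_index(link, &idx) != -EBUSY),
>		SMC_WR_TX_WAIT_FREE_SLOT_TIME);
>
>	/* smc_wr_tx_put_slot(): wake_up() puts every waiter on the
>	 * runqueue, even though at most one of them can win a slot
>	 */
>	wake_up(&link->wr_tx_wait);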
>
>For some reason include/linux/wait.h does not offer a top-level
>wrapper macro for wait_event that combines interruptible, exclusive
>and timeout. I did not spend many cycles on whether that combination
>even makes sense (at a glance I see no reason why not), and I likewise
>refrained from building it myself on top of the abstraction-wise
>lower-level __wait_event interface.
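>
>Just to sketch the shape of it, a wrapper following the existing
>patterns in wait.h would presumably look like the following (untested,
>written only for illustration):
>
>#define __wait_event_interruptible_exclusive_timeout(wq_head, condition, timeout) \
>	___wait_event(wq_head, ___wait_cond_timeout(condition),	\
>		      TASK_INTERRUPTIBLE, 1, timeout,			\
>		      __ret = schedule_timeout(__ret))
>
>/* like wait_event_interruptible_timeout(), but with exclusive waits */
>#define wait_event_interruptible_exclusive_timeout(wq_head, condition, timeout) \
>({									\
>	long __ret = timeout;						\
>	might_sleep();							\
>	if (!___wait_cond_timeout(condition))				\
>		__ret = __wait_event_interruptible_exclusive_timeout(	\
>			wq_head, condition, timeout);			\
>	__ret;								\
>})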
>
>To alleviate the tx performance bottleneck and the CPU overhead due to
>the spinlock contention, let us increase SMC_WR_BUF_CNT to 256.
Hi,
Have you tested other values, such as 64? In our internal version, we
have used 64 for some time.
Increasing this to 256 will require a 36KB contiguous physical-memory
allocation in smc_wr_alloc_link_mem(). In my experience, this may fail
on servers that have been running for a long time and have fragmented
memory.
	link->wr_rx_bufs = kcalloc(SMC_WR_BUF_CNT * 3, SMC_WR_BUF_SIZE,
				   GFP_KERNEL);
As we can see, link->wr_rx_bufs will grow from 16 * 3 * 48 = 2,304
bytes to 256 * 3 * 48 = 36,864 bytes (1 page to 9 pages).
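
For comparison, the same calculation for a few candidate values
(assuming SMC_WR_BUF_SIZE stays at 48 bytes, as today):

	SMC_WR_BUF_CNT    wr_rx_bufs size            pages
	            16     16 * 3 * 48 =  2,304 B        1
	            64     64 * 3 * 48 =  9,216 B        3
	           256    256 * 3 * 48 = 36,864 B        9

64 would already cut the wait-queue pressure considerably while asking
the allocator for a much smaller contiguous region, which is far less
likely to fail on a fragmented system.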
Best regards,
Dust
>
>Signed-off-by: Halil Pasic <pasic@...ux.ibm.com>
>Reported-by: Nils Hoppmann <niho@...ux.ibm.com>
>Reviewed-by: Wenjia Zhang <wenjia@...ux.ibm.com>
>Signed-off-by: Wenjia Zhang <wenjia@...ux.ibm.com>
>---
> net/smc/smc_wr.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>diff --git a/net/smc/smc_wr.h b/net/smc/smc_wr.h
>index f3008dda222a..81e772e241f3 100644
>--- a/net/smc/smc_wr.h
>+++ b/net/smc/smc_wr.h
>@@ -19,7 +19,7 @@
> #include "smc.h"
> #include "smc_core.h"
>
>-#define SMC_WR_BUF_CNT 16 /* # of ctrl buffers per link */
>+#define SMC_WR_BUF_CNT 256 /* # of ctrl buffers per link */
>
> #define SMC_WR_TX_WAIT_FREE_SLOT_TIME (10 * HZ)
>
>--
>2.43.0
>