Message-ID: <YekcWYwg399vR18R@unreal>
Date: Thu, 20 Jan 2022 10:24:57 +0200
From: Leon Romanovsky <leon@...nel.org>
To: Guangguan Wang <guangguan.wang@...ux.alibaba.com>
Cc: kgraul@...ux.ibm.com, davem@...emloft.net, kuba@...nel.org,
linux-s390@...r.kernel.org, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH net-next] net/smc: Introduce receive queue flow
control support
On Thu, Jan 20, 2022 at 02:51:40PM +0800, Guangguan Wang wrote:
> This implements rq flow control in the smc-r link layer. In the
> previous version, QPs communicating without rq flow control may
> hit an RNR (receiver not ready) error, which means the sq sends
> a message to the remote qp, but the remote qp's rq has no valid
> rq entries left to receive the message. On RNR, the rdma
> transport layer retransmits the message again and again until
> the rq has free entries, which lowers performance, especially
> under heavy traffic. Using credits for rq flow control avoids
> the occurrence of RNR.
>
> Test environment:
> - CPU Intel Xeon Platinum 8 core, mem 32 GiB, nic Mellanox CX4.
> - redis benchmark 6.2.3 and redis server 6.2.3.
> - redis server: redis-server --save "" --appendonly no
> --protected-mode no --io-threads 7 --io-threads-do-reads yes
> - redis client: redis-benchmark -h 192.168.26.36 -q -t set,get
> -P 1 --threads 7 -n 2000000 -c 200 -d 10
>
> Before:
> SET: 205229.23 requests per second, p50=0.799 msec
> GET: 212278.16 requests per second, p50=0.751 msec
>
> After:
> SET: 623674.69 requests per second, p50=0.303 msec
> GET: 688326.00 requests per second, p50=0.271 msec
>
> The redis-benchmark test shows a more than 3x rps
> improvement after the implementation of rq flow control.
>
> Signed-off-by: Guangguan Wang <guangguan.wang@...ux.alibaba.com>
> ---
> net/smc/af_smc.c | 12 ++++++
> net/smc/smc_cdc.c | 10 ++++-
> net/smc/smc_cdc.h | 3 +-
> net/smc/smc_clc.c | 3 ++
> net/smc/smc_clc.h | 3 +-
> net/smc/smc_core.h | 17 ++++++++-
> net/smc/smc_ib.c | 6 ++-
> net/smc/smc_llc.c | 92 +++++++++++++++++++++++++++++++++++++++++++++-
> net/smc/smc_llc.h | 5 +++
> net/smc/smc_wr.c | 30 ++++++++++++---
> net/smc/smc_wr.h | 54 ++++++++++++++++++++++++++-
> 11 files changed, 222 insertions(+), 13 deletions(-)
<...>
> + // set the peer rq credits watermark: if credits fall below
> + // init_credits * 2/3, a credit announcement is needed.
<...>
> + // set the peer rq credits watermark: if credits fall below
> + // init_credits * 2/3, a credit announcement is needed.
<...>
> + // credits have already been announced to peer
<...>
> + // set the local rq credits high watermark to lnk->wr_rx_cnt / 3;
> + // if local rq credits exceed the high watermark, an announcement is needed.
<...>
> +// take one tx credit, decrementing the peer rq credits
<...>
> +// put back tx credits when a failure occurs after credits were taken,
> +// or when a credit announcement message is received
> +static inline void smc_wr_tx_put_credits(struct smc_link *link, int credits, bool wakeup)
<...>
> +// check whether the peer rq credits are below the watermark
> +static inline int smc_wr_tx_credits_need_announce(struct smc_link *link)
<...>
> +// get the local rq credits and reset them to zero;
> +// may be called when announcing credits
> +static inline int smc_wr_rx_get_credits(struct smc_link *link)
Please try to use C-style comments.
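For example, the watermark comment quoted above would read as follows in
the usual kernel comment style:

```c
/* Set the peer RQ credits watermark: if credits fall below
 * init_credits * 2/3, a credit announcement is needed.
 */
```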
Thanks