Message-ID: <0f590cb7-9b67-4dce-93a4-5da89812a075@linux.ibm.com>
Date: Fri, 31 May 2024 11:03:18 +0200
From: Wenjia Zhang <wenjia@...ux.ibm.com>
To: Guangguan Wang <guangguan.wang@...ux.alibaba.com>, jaka@...ux.ibm.com,
davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com
Cc: kgraul@...ux.ibm.com, alibuda@...ux.alibaba.com, tonylu@...ux.alibaba.com,
guwen@...ux.alibaba.com, linux-s390@...r.kernel.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next 0/2] Change the upper boundary of SMC-R's snd_buf
and rcv_buf to 512MB
On 31.05.24 10:15, Guangguan Wang wrote:
>
>
> On 2024/5/30 00:28, Wenjia Zhang wrote:
>>
>>
>> On 28.05.24 15:51, Guangguan Wang wrote:
>>> SMCR_RMBE_SIZES is the upper boundary of SMC-R's snd_buf and rcv_buf.
>>> The maximum size of snd_buf and rcv_buf can be calculated as
>>> 2^SMCR_RMBE_SIZES * 16KB. SMCR_RMBE_SIZES = 5 means the upper boundary
>>> is 512KB. TCP's snd_buf and rcv_buf max size is configured by
>>> net.ipv4.tcp_w/rmem[2], whose default value is 4MB or 6MB, which is
>>> much larger than SMC-R's upper boundary.
>>>
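As a quick standalone illustration of that arithmetic (a minimal user-space sketch, not kernel code; the 16KB base unit is taken from the text above):

#include <stdio.h>

/* Values taken from the description above, not from the kernel source. */
#define SMCR_RMBE_SIZES   5            /* current maximum exponent */
#define SMC_BUF_BASE      (16 * 1024)  /* 16KB base unit */

int main(void)
{
	/* upper boundary = 2^SMCR_RMBE_SIZES * 16KB */
	unsigned long max_buf = (1UL << SMCR_RMBE_SIZES) * SMC_BUF_BASE;

	printf("SMC-R upper boundary: %lu KB\n", max_buf / 1024); /* 512 KB */
	return 0;
}
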
>>> In some scenarios, such as recommendation systems, the communication
>>> pattern is mainly large-size send/recv, where the size of snd_buf and
>>> rcv_buf greatly affects performance. Due to this upper-boundary
>>> disadvantage, SMC-R performs worse than TCP in those scenarios. So it
>>> is time to enlarge the upper boundary of SMC-R's snd_buf and rcv_buf,
>>> so that they can be configured to a larger size for a performance gain
>>> in such scenarios.
>>>
>>> The size of SMC-R's rcv_buf is transferred to the peer via the field
>>> rmbe_size in the CLC accept and confirm messages. The rmbe_size field
>>> is four bits long, which means the maximum value of SMCR_RMBE_SIZES
>>> is 15. To avoid frequently adjusting the value of SMCR_RMBE_SIZES for
>>> different scenarios, set SMCR_RMBE_SIZES to the maximum value 15,
>>> which makes the upper boundary of SMC-R's snd_buf and rcv_buf 512MB.
>>> As the real memory usage is determined by the value of net.smc.w/rmem,
>>> not by the upper boundary, setting SMCR_RMBE_SIZES to the maximum
>>> value has no side effects.
>>>
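To illustrate the 4-bit constraint, a minimal sketch of how a requested buffer size could map onto the exponent carried in the CLC rmbe_size field (loosely modeled on the kernel's buffer-size compression; the helper name and rounding details here are assumptions, not the actual implementation):

#include <stdio.h>

#define SMC_BUF_BASE      (16 * 1024)  /* 16KB base unit */
#define SMCR_RMBE_SIZES   15           /* proposed maximum: 4-bit field */

/*
 * Smallest n with 2^n * 16KB >= size, clamped to SMCR_RMBE_SIZES.
 * Hypothetical helper for illustration only.
 */
static unsigned int compress_bufsize(unsigned long size)
{
	unsigned int n = 0;

	while (n < SMCR_RMBE_SIZES &&
	       ((unsigned long)SMC_BUF_BASE << n) < size)
		n++;
	return n;
}

int main(void)
{
	printf("512KB -> exponent %u\n", compress_bufsize(512UL << 10)); /* 5 */
	printf("6MB   -> exponent %u\n", compress_bufsize(6UL << 20));   /* 9 */
	printf("512MB -> exponent %u\n", compress_bufsize(512UL << 20)); /* 15: 2^15 * 16KB = 512MB */
	return 0;
}
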
>> Hi Guangguan,
>>
>> It is correct that the maximum buffer (snd_buf and rcv_buf) size of SMC-R is much smaller than TCP's. If I remember correctly, that was because 512KB was enough for the traffic and did not waste memory space, based on experiments done at the time. Sure, that was years ago, and it could be very different nowadays. But I'm still curious whether you have any concrete scenario with a performance benchmark that shows a distinct disadvantage of the current maximum buffer size.
>>
>
> Hi Wenjia,
>
> The performance benchmark can be "Wide & Deep Recommender Model Training in TensorFlow" (https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/Recommendation/WideAndDeep).
> The related paper here: https://arxiv.org/pdf/1606.07792.
>
> The performance unit is steps/s, where a higher value indicates better performance.
>
> 1) using 512KB snd_buf/recv_buf for SMC-R, default(4MB snd_buf/6MB recv_buf) for TCP:
> SMC-R performance vs TCP performance = 24.21 steps/s vs 24.85 steps/s
>
> ps smcr stat:
> RX Stats
> Data transmitted (Bytes) 37600503985 (37.60G)
> Total requests 677841
> Buffer full 40074 (5.91%)
> 8KB 16KB 32KB 64KB 128KB 256KB 512KB >512KB
> Bufs 0 0 0 0 0 0 4 0
> Reqs 178.2K 12.69K 8.125K 45.71K 23.51K 20.75K 60.16K 0
> TX Stats
> Data transmitted (Bytes) 118471581684 (118.5G)
> Total requests 874395
> Buffer full 343080 (39.24%)
> Buffer full (remote) 468523 (53.58%)
> Buffer too small 607914 (69.52%)
> Buffer too small (remote) 607914 (69.52%)
> 8KB 16KB 32KB 64KB 128KB 256KB 512KB >512KB
> Bufs 0 0 0 0 0 0 4 0
> Reqs 119.7K 3.169K 2.662K 5.583K 8.523K 21.55K 34.58K 318.0K
>
> worker smcr stat:
> RX Stats
> Data transmitted (Bytes) 118471581723 (118.5G)
> Total requests 835959
> Buffer full 99227 (11.87%)
> 8KB 16KB 32KB 64KB 128KB 256KB 512KB >512KB
> Bufs 0 0 0 0 0 0 4 0
> Reqs 125.4K 13.14K 17.49K 16.78K 34.27K 34.12K 223.8K 0
> TX Stats
> Data transmitted (Bytes) 37600504139 (37.60G)
> Total requests 606822
> Buffer full 86597 (14.27%)
> Buffer full (remote) 156098 (25.72%)
> Buffer too small 154218 (25.41%)
> Buffer too small (remote) 154218 (25.41%)
> 8KB 16KB 32KB 64KB 128KB 256KB 512KB >512KB
> Bufs 0 0 0 0 0 0 4 0
> Reqs 323.6K 13.26K 6.979K 50.84K 19.43K 14.46K 8.231K 81.80K
>
> 2) using 4MB snd_buf and 6MB recv_buf for SMC-R, default(4MB snd_buf/6MB recv_buf) for TCP:
> SMC-R performance vs TCP performance = 29.35 steps/s vs 24.85 steps/s
>
> ps smcr stat:
> RX Stats
> Data transmitted (Bytes) 110853495554 (110.9G)
> Total requests 1165230
> Buffer full 0 (0.00%)
> 8KB 16KB 32KB 64KB 128KB 256KB 512KB >512KB
> Bufs 0 0 0 0 0 0 0 4
> Reqs 340.2K 29.65K 19.58K 76.32K 55.37K 39.15K 7.042K 43.88K
> TX Stats
> Data transmitted (Bytes) 349072090590 (349.1G)
> Total requests 922705
> Buffer full 154765 (16.77%)
> Buffer full (remote) 309940 (33.59%)
> Buffer too small 46896 (5.08%)
> Buffer too small (remote) 14304 (1.55%)
> 8KB 16KB 32KB 64KB 128KB 256KB 512KB >512KB
> Bufs 0 0 0 0 0 0 0 4
> Reqs 420.8K 11.15K 3.609K 12.28K 13.05K 26.08K 22.13K 240.3K
>
> worker smcr stat:
> RX Stats
> Data transmitted (Bytes) 349072090590 (349.1G)
> Total requests 585165
> Buffer full 0 (0.00%)
> 8KB 16KB 32KB 64KB 128KB 256KB 512KB >512KB
> Bufs 0 0 0 0 0 0 0 4
> Reqs 155.4K 13.42K 4.070K 4.462K 3.628K 9.720K 12.01K 165.0K
> TX Stats
> Data transmitted (Bytes) 110854684711 (110.9G)
> Total requests 1052628
> Buffer full 34760 (3.30%)
> Buffer full (remote) 77630 (7.37%)
> Buffer too small 22330 (2.12%)
> Buffer too small (remote) 7040 (0.67%)
> 8KB 16KB 32KB 64KB 128KB 256KB 512KB >512KB
> Bufs 0 0 0 0 0 0 0 4
> Reqs 666.3K 38.43K 20.65K 135.1K 54.19K 36.69K 3.948K 56.42K
>
>
> From the above smcr stats, we can see a large number of sends/receives larger than 512KB, and many sends blocked due to
> buffer full or buffer too small. When configured with larger send/recv buffers, we get fewer blocked sends and better performance.
>
That is exactly what I asked for, thank you for the details! Please give
me some days to try it ourselves. If the performance gain is as
significant as yours and there are no other side effects, why not?!
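
For reference, a minimal sketch of one way the larger buffers used in experiment 2) could be requested per socket from user space via SO_SNDBUF/SO_RCVBUF (illustrative only; the AF_SMC/SMCPROTO_SMC values are assumed here for a self-contained example, the exact mapping of these options onto the RMB size is kernel-internal, and the net.smc.w/rmem sysctls mentioned in the cover letter set the same limits system-wide):

#include <stdio.h>
#include <sys/socket.h>

#ifndef AF_SMC
#define AF_SMC 43            /* assumed address family value */
#endif
#define SMCPROTO_SMC 0       /* assumed: SMC over IPv4 */

int main(void)
{
	int snd = 4 * 1024 * 1024;   /* 4MB snd_buf, as in experiment 2) */
	int rcv = 6 * 1024 * 1024;   /* 6MB rcv_buf */
	int fd = socket(AF_SMC, SOCK_STREAM, SMCPROTO_SMC);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &snd, sizeof(snd)) < 0)
		perror("SO_SNDBUF");
	if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcv, sizeof(rcv)) < 0)
		perror("SO_RCVBUF");
	/* ... bind()/connect() and use the socket as usual ... */
	return 0;
}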