Date:   Wed, 10 Aug 2022 15:24:47 +0800
From:   Yonglong Li <liyonglong@...natelecom.cn>
To:     Neal Cardwell <ncardwell@...gle.com>
Cc:     netdev@...r.kernel.org, davem@...emloft.net, edumazet@...gle.com,
        dsahern@...nel.org, kuba@...nel.org, pabeni@...hat.com,
        Yuchung Cheng <ycheng@...gle.com>
Subject: Re: [PATCH] tcp: adjust rcvbuf according to the copied rate of user space



On 8/10/2022 5:28 AM, Neal Cardwell wrote:
> On Tue, Aug 9, 2022 at 4:48 AM Yonglong Li <liyonglong@...natelecom.cn> wrote:
>>
>> It is more reasonable to adjust rcvbuf by the rate at which data is
>> copied to user space, instead of by the copied buffer length alone.
>>
>> Signed-off-by: Yonglong Li <liyonglong@...natelecom.cn>
>> ---
> 
> Thanks for sending out the patch. My sense is that this would need a
> more detailed commit description describing the algorithm change, the
> motivation for the change, and justifying the added complexity and
> state by showing some meaningful performance test results that
> demonstrate some improvement.
Hi Neal,

Thanks for your feedback. I will add more detail in the next version.

> 
>>  include/linux/tcp.h  |  1 +
>>  net/ipv4/tcp_input.c | 16 +++++++++++++---
>>  2 files changed, 14 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/tcp.h b/include/linux/tcp.h
>> index a9fbe22..18e091c 100644
>> --- a/include/linux/tcp.h
>> +++ b/include/linux/tcp.h
>> @@ -410,6 +410,7 @@ struct tcp_sock {
>>                 u32     space;
>>                 u32     seq;
>>                 u64     time;
>> +               u32     prior_rate;
>>         } rcvq_space;
>>
>>  /* TCP-specific MTU probe information. */
>> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
>> index ab5f0ea..2bdf2a5 100644
>> --- a/net/ipv4/tcp_input.c
>> +++ b/net/ipv4/tcp_input.c
>> @@ -544,6 +544,7 @@ static void tcp_init_buffer_space(struct sock *sk)
>>         tcp_mstamp_refresh(tp);
>>         tp->rcvq_space.time = tp->tcp_mstamp;
>>         tp->rcvq_space.seq = tp->copied_seq;
>> +       tp->rcvq_space.prior_rate = 0;
>>
>>         maxwin = tcp_full_space(sk);
>>
>> @@ -701,6 +702,7 @@ static inline void tcp_rcv_rtt_measure_ts(struct sock *sk,
>>  void tcp_rcv_space_adjust(struct sock *sk)
>>  {
>>         struct tcp_sock *tp = tcp_sk(sk);
>> +       u64 pre_copied_rate, copied_rate;
>>         u32 copied;
>>         int time;
>>
>> @@ -713,7 +715,14 @@ void tcp_rcv_space_adjust(struct sock *sk)
>>
>>         /* Number of bytes copied to user in last RTT */
>>         copied = tp->copied_seq - tp->rcvq_space.seq;
>> -       if (copied <= tp->rcvq_space.space)
>> +       copied_rate = copied * USEC_PER_SEC;
>> +       do_div(copied_rate, time);
>> +       pre_copied_rate = tp->rcvq_space.prior_rate;
>> +       if (!tp->rcvq_space.prior_rate) {
>> +               pre_copied_rate = tp->rcvq_space.space * USEC_PER_SEC;
>> +               do_div(pre_copied_rate, time);
>> +       }
>> +       if (copied_rate <= pre_copied_rate || !pre_copied_rate)
>>                 goto new_measure;
>>
>>         /* A bit of theory :
>> @@ -736,8 +745,8 @@ void tcp_rcv_space_adjust(struct sock *sk)
>>                 rcvwin = ((u64)copied << 1) + 16 * tp->advmss;
>>
>>                 /* Accommodate for sender rate increase (eg. slow start) */
>> -               grow = rcvwin * (copied - tp->rcvq_space.space);
>> -               do_div(grow, tp->rcvq_space.space);
>> +               grow = rcvwin * (copied_rate - pre_copied_rate);
>> +               do_div(grow, pre_copied_rate);
>>                 rcvwin += (grow << 1);
>>
>>                 rcvmem = SKB_TRUESIZE(tp->advmss + MAX_TCP_HEADER);
>> @@ -755,6 +764,7 @@ void tcp_rcv_space_adjust(struct sock *sk)
>>                 }
>>         }
>>         tp->rcvq_space.space = copied;
>> +       tp->rcvq_space.prior_rate = pre_copied_rate;
> 
> Shouldn't that line be:
> 
>    tp->rcvq_space.prior_rate = copied_rate;
> 
> Otherwise, AFAICT the patch could preserve forever in
> tp->rcvq_space.prior_rate the very first rate that was computed, since
> the top of the patch does:
> 
>  +       pre_copied_rate = tp->rcvq_space.prior_rate;
> 
> and the bottom of the patch does:
> 
>  +       tp->rcvq_space.prior_rate = pre_copied_rate;
> 
Sorry, I made a mistake there; it should indeed store copied_rate.

> best regards,
> neal
> 

-- 
Li YongLong
