Message-ID: <43b8a024-2ab8-157d-92c2-7367f632c659@chinatelecom.cn>
Date: Wed, 31 Aug 2022 15:19:34 +0800
From: Yonglong Li <liyonglong@...natelecom.cn>
To: Yuchung Cheng <ycheng@...gle.com>
Cc: netdev@...r.kernel.org, davem@...emloft.net, dsahern@...nel.org,
edumazet@...gle.com, kuba@...nel.org, pabeni@...hat.com,
Neal Cardwell <ncardwell@...gle.com>
Subject: Re: [PATCH] tcp: del skb from tsorted_sent_queue after mark it as
lost
On 8/31/2022 1:58 PM, Yuchung Cheng wrote:
> On Mon, Aug 29, 2022 at 5:23 PM Yuchung Cheng <ycheng@...gle.com> wrote:
>>
>> On Mon, Aug 29, 2022 at 1:21 AM Yonglong Li <liyonglong@...natelecom.cn> wrote:
>>>
>>> If RACK is enabled, an skb can be removed from tsorted_sent_queue
>>> once it is marked as lost. This reduces the number of iterations over
>>> tsorted_sent_queue in tcp_rack_detect_loss.
>>
>> Did you test the case where an skb is marked lost again after
>> retransmission? I can't quite remember the reason I avoided this
>> optimization. Let me run some tests and get back to you.
> As I suspected, this patch fails to pass our packetdrill tests.
>
> It breaks the detection of retransmitted packets that get lost again,
> because they have already been removed from the tsorted list when they
> were lost the first time.
>
>
Hi Yuchung,
Thank you for your feedback.
But I don't quite understand: in the current implementation, if an skb
is marked lost again after retransmission, it will already have been
added back to the tail of tsorted_sent_queue by
tcp_update_skb_after_send when it was retransmitted; see the sketch
below.
Am I missing some code?
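
As far as I can tell, tcp_update_skb_after_send() in
net/ipv4/tcp_output.c ends with a list_move_tail() of the anchor.
A trimmed sketch from my reading of the tree (not verbatim, details
may differ by version):

	static void tcp_update_skb_after_send(struct sock *sk, struct sk_buff *skb,
					      u64 prior_wstamp)
	{
		struct tcp_sock *tp = tcp_sk(sk);

		/* ... pacing/timestamp bookkeeping elided ... */

		/* Moving (rather than adding) works even after a prior
		 * list_del_init(), because list_del_init() leaves the anchor
		 * pointing at itself, so list_move_tail() safely re-links the
		 * skb at the tail of the time-sorted queue.
		 */
		list_move_tail(&skb->tcp_tsorted_anchor, &tp->tsorted_sent_queue);
	}

So a retransmitted skb should reappear in the list before it can be
marked lost a second time, unless some loss path bypasses this.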
>
>>
>>
>>>
>>> Signed-off-by: Yonglong Li <liyonglong@...natelecom.cn>
>>> ---
>>> net/ipv4/tcp_input.c | 15 +++++++++------
>>> net/ipv4/tcp_recovery.c | 1 -
>>> 2 files changed, 9 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
>>> index ab5f0ea..01bd644 100644
>>> --- a/net/ipv4/tcp_input.c
>>> +++ b/net/ipv4/tcp_input.c
>>> @@ -1082,6 +1082,12 @@ static void tcp_notify_skb_loss_event(struct tcp_sock *tp, const struct sk_buff
>>> tp->lost += tcp_skb_pcount(skb);
>>> }
>>>
>>> +static bool tcp_is_rack(const struct sock *sk)
>>> +{
>>> + return READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_recovery) &
>>> + TCP_RACK_LOSS_DETECTION;
>>> +}
>>> +
>>> void tcp_mark_skb_lost(struct sock *sk, struct sk_buff *skb)
>>> {
>>> __u8 sacked = TCP_SKB_CB(skb)->sacked;
>>> @@ -1105,6 +1111,9 @@ void tcp_mark_skb_lost(struct sock *sk, struct sk_buff *skb)
>>> TCP_SKB_CB(skb)->sacked |= TCPCB_LOST;
>>> tcp_notify_skb_loss_event(tp, skb);
>>> }
>>> +
>>> + if (tcp_is_rack(sk))
>>> + list_del_init(&skb->tcp_tsorted_anchor);
>>> }
>>>
>>> /* Updates the delivered and delivered_ce counts */
>>> @@ -2093,12 +2102,6 @@ static inline void tcp_init_undo(struct tcp_sock *tp)
>>> tp->undo_retrans = tp->retrans_out ? : -1;
>>> }
>>>
>>> -static bool tcp_is_rack(const struct sock *sk)
>>> -{
>>> - return READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_recovery) &
>>> - TCP_RACK_LOSS_DETECTION;
>>> -}
>>> -
>>> /* If we detect SACK reneging, forget all SACK information
>>> * and reset tags completely, otherwise preserve SACKs. If receiver
>>> * dropped its ofo queue, we will know this due to reneging detection.
>>> diff --git a/net/ipv4/tcp_recovery.c b/net/ipv4/tcp_recovery.c
>>> index 50abaa9..ba52ec9e 100644
>>> --- a/net/ipv4/tcp_recovery.c
>>> +++ b/net/ipv4/tcp_recovery.c
>>> @@ -84,7 +84,6 @@ static void tcp_rack_detect_loss(struct sock *sk, u32 *reo_timeout)
>>> remaining = tcp_rack_skb_timeout(tp, skb, reo_wnd);
>>> if (remaining <= 0) {
>>> tcp_mark_skb_lost(sk, skb);
>>> - list_del_init(&skb->tcp_tsorted_anchor);
>>> } else {
>>> /* Record maximum wait time */
>>> *reo_timeout = max_t(u32, *reo_timeout, remaining);
>>> --
>>> 1.8.3.1
>>>
>
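
For completeness, this is the loop the patch is trying to shorten, as I
read tcp_rack_detect_loss() in net/ipv4/tcp_recovery.c (sketch, not
verbatim):

	list_for_each_entry_safe(skb, n, &tp->tsorted_sent_queue,
				 tcp_tsorted_anchor) {
		struct tcp_skb_cb *scb = TCP_SKB_CB(skb);

		/* Skbs already marked lost but not yet retransmitted are
		 * only skipped today; unlinking them at the point they are
		 * marked lost means later passes do not touch them at all.
		 */
		if ((scb->sacked & TCPCB_LOST) &&
		    !(scb->sacked & TCPCB_SACKED_RETRANS))
			continue;

		/* ... reo_wnd timeout check and tcp_mark_skb_lost() ... */
	}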
--
Li YongLong