[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <CANn89iJmQc6r+Ajh3N1V3Q22iJ4C=Ldte5pBVd=jC-YTQYuQTA@mail.gmail.com>
Date: Mon, 27 Jan 2025 17:37:02 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Jon Maloy <jmaloy@...hat.com>
Cc: Neal Cardwell <ncardwell@...gle.com>, netdev@...r.kernel.org, davem@...emloft.net,
kuba@...nel.org, passt-dev@...st.top, sbrivio@...hat.com, lvivier@...hat.com,
dgibson@...hat.com, eric.dumazet@...il.com,
Menglong Dong <menglong8.dong@...il.com>
Subject: Re: [net,v2] tcp: correct handling of extreme memory squeeze
On Fri, Jan 24, 2025 at 6:40 PM Jon Maloy <jmaloy@...hat.com> wrote:
>
>
>
> On 2025-01-20 11:22, Eric Dumazet wrote:
> > On Mon, Jan 20, 2025 at 5:10 PM Jon Maloy <jmaloy@...hat.com> wrote:
> >>
> >>
> >>
> >> On 2025-01-20 00:03, Jon Maloy wrote:
> >>>
> >>>
>
> [...]
>
> >>>> I agree with Eric that probably tp->pred_flags should be cleared, and
> >>>> a packetdrill test for this would be super-helpful.
> >>>
> >>> I must admit I have never used packetdrill, but I can make an effort.
> >>
> >> I hear from other sources that you cannot force a memory exhaustion with
> >> packetdrill anyway, so this sounds like a pointless exercise.
> >
> > We certainly can and should add a feature like that to packetdrill.
> >
> > Documentation/fault-injection/ has some relevant information.
> >
> > Even without this, tcp_try_rmem_schedule() is reading sk->sk_rcvbuf
> > that could be lowered by a packetdrill script I think.
> >
> Neal, Eric,
> How do you suggest we proceed with this?
> I downloaded packetdrill and tried it a bit, but to understand it well
> enough to introduce a new feature would require more time than I am
> able to spend on this. Maybe Neal, who I see is one of the contributors
> to packetdrill could help out?
>
> I can certainly clear tp->pred_flags and post it again, maybe with
> an improved and shortened log. Would that be acceptable?
Yes.
>
> I also made a run where I looked into why __tcp_select_window()
> ignores all the space that has been freed up:
>
>
> tcp_recvmsg_locked(->)
> __tcp_cleanup_rbuf(->) (copied 131072)
> tp->rcv_wup: 1788299855, tp->rcv_wnd: 5812224,
> tp->rcv_nxt 1793800175
> __tcp_select_window(->)
> tcp_space(->)
> tcp_space(<-) returning 458163
> free_space = round_down(458163, 1 << 4096) = 454656
> (free_space > tp->rcv_ssthresh) -->
> free_space = tp->rcv_ssthresh = 261920
> window = ALIGN(261920, 4096) = 26144
> __tcp_select_window(<-) returning 262144
> [rcv_win_now 311904, 2 * rcv_win_now 623808, new_window 262144]
> (new_window >= (2 * rcv_win_now)) ? --> time_to_ack 0
> NOT calling tcp_send_ack()
> __tcp_cleanup_rbuf(<-)
> [tp->rcv_wup 1788299855, tp->rcv_wnd 5812224,
> tp->rcv_nxt 1793800175]
> tcp_recvmsg_locked(<-) returning 131072 bytes.
> [tp->rcv_nxt 1793800175, tp->rcv_wnd 5812224,
> tp->rcv_wup 1788299855, sk->last_ack 0, tcp_receive_win() 311904,
> copied_seq 1788299855->1788395953 (96098), unread 5404222,
> sk_rcv_qlen 83, ofo_qlen 0]
>
>
> As we see tp->rcv_ssthresh is the limiting factor, causing
> a consistent situation where (new_window < (rcv_win_now * 2)),
> and even (new_window < rcv_win_now).
Your changelog could simply explain this, in one sentence. instead of
lengthy traces.
Powered by blists - more mailing lists