Message-ID: <3f549b4f1402ea17d56c292d3a1f85be3e2b7d89.camel@redhat.com>
Date: Fri, 24 Nov 2023 08:54:00 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Neil Spring <ntspring@...a.com>, Eric Dumazet <edumazet@...gle.com>,
Neal Cardwell <ncardwell@...gle.com>, Wei Wang <weiwan@...gle.com>
Cc: "netdev@...r.kernel.org" <netdev@...r.kernel.org>, "David S. Miller"
<davem@...emloft.net>, David Ahern <dsahern@...nel.org>, Jakub Kicinski
<kuba@...nel.org>, David Gibson <david@...son.dropbear.id.au>
Subject: Re: [PATCH net] tcp: fix mid stream window clamp.
On Fri, 2023-11-24 at 05:27 +0000, Neil Spring wrote:
> >
> > ________________________________________
> > From: Paolo Abeni <pabeni@...hat.com>
> > Sent: Thursday, November 23, 2023 10:16 AM
> > To: Eric Dumazet; Neal Cardwell; Wei Wang
> > Cc: netdev@...r.kernel.org; David S. Miller; David Ahern; Jakub
> > Kicinski; Neil Spring; David Gibson
> > Subject: Re: [PATCH net] tcp: fix mid stream window clamp.
> >
> > On Thu, 2023-11-23 at 18:10 +0100, Eric Dumazet wrote:
> > > CC Neal and Wei
> > >
> > > On Thu, Nov 23, 2023 at 4:25 PM Paolo Abeni <pabeni@...hat.com>
> > > wrote:
> > > >
> > > > After the blamed commit below, if the user-space application
> > > > performs window clamping when tp->rcv_wnd is 0, the TCP socket
> > > > will never be able to announce a non-zero receive window, even
> > > > after completely emptying the receive buffer and re-setting the
> > > > window clamp to higher values.
> > > >
> > > > Refactor tcp_set_window_clamp() to address the issue: when the
> > > > user decreases the current clamp value, set rcv_ssthresh
> > > > according to the same logic used at buffer initialization time.
> > > > When increasing the clamp value, give the rcv_ssthresh a chance
> > > > to grow according to the previously implemented heuristic.
> > > >
> > > > Fixes: 3aa7857fe1d7 ("tcp: enable mid stream window clamp")
> > > > Reported-by: David Gibson <david@...son.dropbear.id.au>
> > > > Reported-by: Stefano Brivio <sbrivio@...hat.com>
> > > > Reviewed-by: Stefano Brivio <sbrivio@...hat.com>
> > > > Tested-by: Stefano Brivio <sbrivio@...hat.com>
> > > > Signed-off-by: Paolo Abeni <pabeni@...hat.com>
> > > > ---
> > > > net/ipv4/tcp.c | 19 ++++++++++++++++---
> > > > 1 file changed, 16 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> > > > index 53bcc17c91e4..1a9b9064e080 100644
> > > > --- a/net/ipv4/tcp.c
> > > > +++ b/net/ipv4/tcp.c
> > > > @@ -3368,9 +3368,22 @@ int tcp_set_window_clamp(struct sock *sk, int val)
> > > >  			return -EINVAL;
> > > >  		tp->window_clamp = 0;
> > > >  	} else {
> > > > -		tp->window_clamp = val < SOCK_MIN_RCVBUF / 2 ?
> > > > -				   SOCK_MIN_RCVBUF / 2 : val;
> > > > -		tp->rcv_ssthresh = min(tp->rcv_wnd, tp->window_clamp);
> > > > +		u32 new_rcv_ssthresh, old_window_clamp = tp->window_clamp;
> > > > +		u32 new_window_clamp = val < SOCK_MIN_RCVBUF / 2 ?
> > > > +				       SOCK_MIN_RCVBUF / 2 : val;
> > > > +
> > > > +		if (new_window_clamp == old_window_clamp)
> > > > +			return 0;
> > > > +
> > > > +		tp->window_clamp = new_window_clamp;
> > > > +		if (new_window_clamp < old_window_clamp) {
> > > > +			tp->rcv_ssthresh = min(tp->rcv_ssthresh,
> > > > +					       new_window_clamp);
> > > > +		} else {
> > > > +			new_rcv_ssthresh = min(tp->rcv_wnd, tp->window_clamp);
> > > > +			tp->rcv_ssthresh = max(new_rcv_ssthresh,
> > > > +					       tp->rcv_ssthresh);
> > > > +		}
> > > >  	}
> > > >  	return 0;
> > > >  }
> > >
> > > It seems there is no provision for SO_RESERVE_MEM
> >
> > Indeed, I did not take that into account.
> >
> > > I wonder if tcp_adjust_rcv_ssthresh() could help here ?
> >
> > I don't know how to fit it into the above.
> > tcp_adjust_rcv_ssthresh() tends to shrink rcv_ssthresh to low
> > values when no memory is reserved.
> >
> > Dealing directly with SO_RESERVE_MEM when shrinking the threshold
> > feels easier to me, something like this:
> >
> > 	if (new_window_clamp == old_window_clamp)
> > 		return 0;
> >
> > 	tp->window_clamp = new_window_clamp;
> > 	if (new_window_clamp < old_window_clamp) {
> > 		int unused_mem = sk_unused_reserved_mem(sk);
> >
> > 		tp->rcv_ssthresh = min(tp->rcv_ssthresh, new_window_clamp);
> >
> > 		if (unused_mem)
> > 			tp->rcv_ssthresh = max_t(u32, tp->rcv_ssthresh,
> > 						 tcp_win_from_space(sk, unused_mem));
> > 	} else {
> > 		new_rcv_ssthresh = min(tp->rcv_wnd, tp->window_clamp);
> > 		tp->rcv_ssthresh = max(new_rcv_ssthresh, tp->rcv_ssthresh);
> > 	}
> >
> > Possibly the bits shared with tcp_adjust_rcv_ssthresh() could be
> > factored out into a common helper.
> >
> > > Have you considered reverting 3aa7857fe1d7 ("tcp: enable mid
> > > stream
> > > window clamp") ?
> >
> >
> > That would work too, and would be simpler.
> >
> > The issue at hand was noted with an application that really wants
> > to limit the announced window:
> >
> > https://gitlab.com/dgibson/passt
> >
> > I guess touching rcv_ssthresh would be a bit more effective.
> >
> > Not much more in the end, as both window_clamp and rcv_ssthresh can
> > later grow due to receive buffer auto-tuning. Ideally we would like
> > to prevent tcp_rcv_space_adjust() from touching window_clamp after
> > TCP_WINDOW_CLAMP is set - but that is another matter/patch.
> >
> > Thanks!
> >
> > Paolo
> >
>
> The patch to fix the bug, where rcv_ssthresh is reduced to zero on a
> full receive window and cannot recover, is:
>
> -	tp->rcv_ssthresh = min(tp->rcv_wnd, tp->window_clamp);
> +	tp->rcv_ssthresh = min(tp->rcv_ssthresh, tp->window_clamp);
FTR, I considered something similar to the above, but opted for the
present patch, as the above does not pass the packetdrill test
suggested by Eric here:

https://lore.kernel.org/netdev/6070816e-f7d2-725a-ec10-9d85f15455a2@gmail.com/
Cheers,
Paolo