Message-ID: <16d3a8ee633ffe9c452037cf2e4cc0fca04b1dfb.camel@redhat.com>
Date: Wed, 14 Jun 2023 11:16:01 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Mike Freemon <mfreemon@...udflare.com>, netdev@...r.kernel.org
Cc: kernel-team@...udflare.com, edumazet@...gle.com, ncardwell@...gle.com
Subject: Re: [PATCH net-next v5] tcp: enforce receive buffer memory limits
 by allowing the tcp window to shrink

On Sun, 2023-06-11 at 22:05 -0500, Mike Freemon wrote:
> From: "mfreemon@...udflare.com" <mfreemon@...udflare.com>
> 
> Under certain circumstances, the tcp receive buffer memory limit
> set by autotuning (sk_rcvbuf) is increased by incoming data
> packets because the window does not close when it should.
> This can result in the receive buffer growing all the way up to
> tcp_rmem[2], even for tcp sessions with a low BDP.
> 
> To reproduce:  Connect a TCP session with the receiver doing
> nothing and the sender sending small packets (an infinite loop
> of socket send() calls, each with a 4-byte payload and a 1 ms
> sleep between sends).  This will cause the tcp receive buffer
> to grow all the way up to tcp_rmem[2].
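> 
> A minimal sketch of such a sender (hypothetical loopback target
> and port; error handling omitted; the receiver simply never calls
> recv()):
> 
> 	#include <arpa/inet.h>
> 	#include <netinet/in.h>
> 	#include <sys/socket.h>
> 	#include <unistd.h>
> 
> 	int main(void)
> 	{
> 		int fd = socket(AF_INET, SOCK_STREAM, 0);
> 		struct sockaddr_in dst = { 0 };
> 
> 		dst.sin_family = AF_INET;
> 		dst.sin_port = htons(9000);	/* hypothetical port */
> 		dst.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
> 
> 		connect(fd, (struct sockaddr *)&dst, sizeof(dst));
> 		for (;;) {
> 			send(fd, "ping", 4, 0);	/* 4-byte payload */
> 			usleep(1000);		/* 1 ms between sends */
> 		}
> 	}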
> 
> As a result, a host can have individual tcp sessions with receive
> buffers of size tcp_rmem[2], and the host itself can reach tcp_mem
> limits, causing the host to go into tcp memory pressure mode.
> 
> The fundamental issue is the relationship between the granularity
> of the window scaling factor and the number of bytes ACKed back
> to the sender.  This problem has previously been identified in
> RFC 7323, appendix F [1].
> 
> The Linux kernel currently adheres to never shrinking the window.
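> 
> For illustration, a toy userspace simulation (values invented) of
> the never-shrink rounding in tcp_select_window(), i.e.
> ALIGN(cur_win, 1 << tp->rx_opt.rcv_wscale), shows the right edge
> creeping forward as tiny in-window segments arrive:
> 
> 	#include <stdio.h>
> 
> 	#define ALIGN_UP(x, a)	((((x) + (a) - 1) / (a)) * (a))
> 
> 	int main(void)
> 	{
> 		unsigned long unit = 1 << 7;	/* rcv_wscale = 7 */
> 		unsigned long rcv_nxt = 0, right_edge = 65536;
> 		int i;
> 
> 		for (i = 0; i < 1000; i++) {
> 			rcv_nxt += 4;	/* 4-byte segment arrives */
> 			/* never shrink: round the window back up */
> 			right_edge = rcv_nxt +
> 				     ALIGN_UP(right_edge - rcv_nxt, unit);
> 		}
> 		/* prints 4000: the right edge advanced by the bytes
> 		 * received even though no buffer space was freed, so
> 		 * the receive buffer must keep growing
> 		 */
> 		printf("creep: %lu bytes\n", right_edge - 65536);
> 		return 0;
> 	}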
> 
> In addition to the overallocation of memory mentioned above, the
> current behavior is functionally incorrect, because once tcp_rmem[2]
> is reached and no remediations remain (i.e. tcp collapse fails to
> free up any more memory and there are no packets to prune from the
> out-of-order queue), the receiver will drop in-window packets
> resulting in retransmissions and an eventual timeout of the tcp
> session.  A receive buffer full condition should instead result
> in a zero window and an indefinite wait.
> 
> In practice, this problem is largely hidden for most flows.  It
> is not applicable to mice flows.  Elephant flows can send data
> fast enough to "overrun" the sk_rcvbuf limit (in a single ACK),
> triggering a zero window.
> 
> But this problem does show up for other types of flows.  Examples
> are websockets and other types of flows that send small amounts of
> data spaced apart slightly in time.  In these cases, we directly
> encounter the problem described in [1].
> 
> RFC 7323, section 2.4 [2], says there are instances when a retracted
> window can be offered, and that TCP implementations MUST ensure
> that they handle a shrinking window, as specified in RFC 1122,
> section 4.2.2.16 [3].  All prior RFCs on the topic of tcp window
> management have made clear that a sender must accept a shrunk window
> from the receiver, including RFC 793 [4] and RFC 1323 [5].
> 
> This patch implements the functionality to shrink the tcp window
> when necessary to keep the right edge within the memory limit set
> by autotuning (sk_rcvbuf).  This new functionality is enabled with
> the new sysctl: net.ipv4.tcp_shrink_window
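> 
> For illustration, once the patch is applied the knob can be
> flipped per netns, e.g. from a small C helper (sketch; error
> handling kept minimal):
> 
> 	#include <stdio.h>
> 
> 	int main(void)
> 	{
> 		FILE *f = fopen("/proc/sys/net/ipv4/tcp_shrink_window",
> 				"w");
> 
> 		if (!f)
> 			return 1;
> 		fputs("1", f);	/* enable window shrinking */
> 		return fclose(f) ? 1 : 0;
> 	}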

Perhaps it's worth noting somewhere that the new knob fills an
existing hole in netns_ipv4 (the u8 fits into existing padding, so
the struct does not grow).
> 
> Additional information can be found at:
> https://blog.cloudflare.com/unbounded-memory-usage-by-tcp-for-receive-buffers-and-how-we-fixed-it/
> 
> [1] https://www.rfc-editor.org/rfc/rfc7323#appendix-F
> [2] https://www.rfc-editor.org/rfc/rfc7323#section-2.4
> [3] https://www.rfc-editor.org/rfc/rfc1122#page-91
> [4] https://www.rfc-editor.org/rfc/rfc793
> [5] https://www.rfc-editor.org/rfc/rfc1323
> 
> Signed-off-by: Mike Freemon <mfreemon@...udflare.com>
> ---

Including a changelog after the '---' separator would help the
reviewer; please add one to the next revision, if any.

>  Documentation/networking/ip-sysctl.rst | 15 +++++++
>  include/net/netns/ipv4.h               |  1 +
>  net/ipv4/sysctl_net_ipv4.c             |  9 ++++
>  net/ipv4/tcp_ipv4.c                    |  2 +
>  net/ipv4/tcp_output.c                  | 60 ++++++++++++++++++++++----
>  5 files changed, 78 insertions(+), 9 deletions(-)
> 
> diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
> index 366e2a5097d9..4a010a7cde7f 100644
> --- a/Documentation/networking/ip-sysctl.rst
> +++ b/Documentation/networking/ip-sysctl.rst
> @@ -981,6 +981,21 @@ tcp_tw_reuse - INTEGER
>  tcp_window_scaling - BOOLEAN
>  	Enable window scaling as defined in RFC1323.
>  
> +tcp_shrink_window - BOOLEAN
> +	This changes how the TCP receive window is calculated.
> +
> +	RFC 7323, section 2.4, says there are instances when a retracted
> +	window can be offered, and that TCP implementations MUST ensure
> +	that they handle a shrinking window, as specified in RFC 1122.
> +
> +	- 0 - Disabled.	The window is never shrunk.
> +	- 1 - Enabled.	The window is shrunk when necessary to remain within
> +			the memory limit set by autotuning (sk_rcvbuf).
> +			This only occurs if a non-zero receive window
> +			scaling factor is also in effect.
> +
> +	Default: 0
> +
>  tcp_wmem - vector of 3 INTEGERs: min, default, max
>  	min: Amount of memory reserved for send buffers for TCP sockets.
>  	Each TCP socket has rights to use it due to fact of its birth.
> diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h
> index a4efb7a2796c..f00374718159 100644
> --- a/include/net/netns/ipv4.h
> +++ b/include/net/netns/ipv4.h
> @@ -65,6 +65,7 @@ struct netns_ipv4 {
>  #endif
>  	bool			fib_has_custom_local_routes;
>  	bool			fib_offload_disabled;
> +	u8			sysctl_tcp_shrink_window;
>  #ifdef CONFIG_IP_ROUTE_CLASSID
>  	atomic_t		fib_num_tclassid_users;
>  #endif
> diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
> index 356afe54951c..2afb0870648b 100644
> --- a/net/ipv4/sysctl_net_ipv4.c
> +++ b/net/ipv4/sysctl_net_ipv4.c
> @@ -1480,6 +1480,15 @@ static struct ctl_table ipv4_net_table[] = {
>  		.extra1		= SYSCTL_ZERO,
>  		.extra2		= &tcp_syn_linear_timeouts_max,
>  	},
> +	{
> +		.procname	= "tcp_shrink_window",
> +		.data		= &init_net.ipv4.sysctl_tcp_shrink_window,
> +		.maxlen		= sizeof(u8),
> +		.mode		= 0644,
> +		.proc_handler	= proc_dou8vec_minmax,
> +		.extra1		= SYSCTL_ZERO,
> +		.extra2		= SYSCTL_ONE,
> +	},
>  	{ }
>  };
>  
> diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
> index 84a5d557dc1a..9213804b034f 100644
> --- a/net/ipv4/tcp_ipv4.c
> +++ b/net/ipv4/tcp_ipv4.c
> @@ -3281,6 +3281,8 @@ static int __net_init tcp_sk_init(struct net *net)
>  		net->ipv4.tcp_congestion_control = &tcp_reno;
>  
>  	net->ipv4.sysctl_tcp_syn_linear_timeouts = 4;
> +	net->ipv4.sysctl_tcp_shrink_window = 0;
> +
>  	return 0;
>  }
>  
> diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
> index f8ce77ce7c3e..5784f8a99381 100644
> --- a/net/ipv4/tcp_output.c
> +++ b/net/ipv4/tcp_output.c
> @@ -260,8 +260,8 @@ static u16 tcp_select_window(struct sock *sk)
>  	u32 old_win = tp->rcv_wnd;
>  	u32 cur_win = tcp_receive_window(tp);
>  	u32 new_win = __tcp_select_window(sk);
> +	struct net *net = sock_net(sk);
>  
> -	/* Never shrink the offered window */
>  	if (new_win < cur_win) {
>  		/* Danger Will Robinson!
>  		 * Don't update rcv_wup/rcv_wnd here or else
> @@ -270,11 +270,14 @@ static u16 tcp_select_window(struct sock *sk)
>  		 *
>  		 * Relax Will Robinson.
>  		 */
> -		if (new_win == 0)
> -			NET_INC_STATS(sock_net(sk),
> -				      LINUX_MIB_TCPWANTZEROWINDOWADV);
> -		new_win = ALIGN(cur_win, 1 << tp->rx_opt.rcv_wscale);
> +		if (!READ_ONCE(net->ipv4.sysctl_tcp_shrink_window) || !tp->rx_opt.rcv_wscale) {
> +			/* Never shrink the offered window */
> +			if (new_win == 0)
> +				NET_INC_STATS(net, LINUX_MIB_TCPWANTZEROWINDOWADV);
> +			new_win = ALIGN(cur_win, 1 << tp->rx_opt.rcv_wscale);
> +		}
>  	}
> +
>  	tp->rcv_wnd = new_win;
>  	tp->rcv_wup = tp->rcv_nxt;
>  
> @@ -282,7 +285,7 @@ static u16 tcp_select_window(struct sock *sk)
>  	 * scaled window.
>  	 */
>  	if (!tp->rx_opt.rcv_wscale &&
> -	    READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_workaround_signed_windows))
> +	    READ_ONCE(net->ipv4.sysctl_tcp_workaround_signed_windows))
>  		new_win = min(new_win, MAX_TCP_WINDOW);
>  	else
>  		new_win = min(new_win, (65535U << tp->rx_opt.rcv_wscale));
> @@ -294,10 +297,9 @@ static u16 tcp_select_window(struct sock *sk)
>  	if (new_win == 0) {
>  		tp->pred_flags = 0;
>  		if (old_win)
> -			NET_INC_STATS(sock_net(sk),
> -				      LINUX_MIB_TCPTOZEROWINDOWADV);
> +			NET_INC_STATS(net, LINUX_MIB_TCPTOZEROWINDOWADV);
>  	} else if (old_win == 0) {
> -		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPFROMZEROWINDOWADV);
> +		NET_INC_STATS(net, LINUX_MIB_TCPFROMZEROWINDOWADV);
>  	}
>  
>  	return new_win;
> @@ -3003,6 +3005,7 @@ u32 __tcp_select_window(struct sock *sk)
>  {
>  	struct inet_connection_sock *icsk = inet_csk(sk);
>  	struct tcp_sock *tp = tcp_sk(sk);
> +	struct net *net = sock_net(sk);
>  	/* MSS for the peer's data.  Previous versions used mss_clamp
>  	 * here.  I don't know if the value based on our guesses
>  	 * of peer's MSS is better for the performance.  It's more correct
> @@ -3024,6 +3027,15 @@ u32 __tcp_select_window(struct sock *sk)
>  		if (mss <= 0)
>  			return 0;
>  	}
> +
> +	/* Only allow window shrink if the sysctl is enabled and we have
> +	 * a non-zero scaling factor in effect.
> +	 */
> +	if (READ_ONCE(net->ipv4.sysctl_tcp_shrink_window) && tp->rx_opt.rcv_wscale)

I'm wondering if an unlikely() here would be worthwhile, i.e.
something like (untested):
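
	if (unlikely(READ_ONCE(net->ipv4.sysctl_tcp_shrink_window) &&
		     tp->rx_opt.rcv_wscale))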


Cheers,

Paolo

