Open Source and information security mailing list archives
 
Date:   Thu, 20 Oct 2022 13:57:30 -0700
From:   Kuniyuki Iwashima <kuniyu@...zon.com>
To:     <luwei32@...wei.com>
CC:     <asml.silence@...il.com>, <ast@...nel.org>, <davem@...emloft.net>,
        <dsahern@...nel.org>, <edumazet@...gle.com>,
        <imagedong@...cent.com>, <kuba@...nel.org>, <kuniyu@...zon.com>,
        <linux-kernel@...r.kernel.org>, <martin.lau@...nel.org>,
        <ncardwell@...gle.com>, <netdev@...r.kernel.org>,
        <pabeni@...hat.com>, <yoshfuji@...ux-ipv6.org>
Subject: Re: [PATCH -next,v2] tcp: fix a signed-integer-overflow bug in tcp_add_backlog()

Hi,

The subject should be

  [PATCH net v2] tcp: ....

so that this patch will be backported to the stable tree.


From:   Lu Wei <luwei32@...wei.com>
Date:   Thu, 20 Oct 2022 22:32:01 +0800
> The type of sk_rcvbuf and sk_sndbuf in struct sock is int, and in
> tcp_add_backlog() the variable limit is calculated by adding
> sk_rcvbuf, sk_sndbuf and 64 * 1024; the sum may exceed the max value
> of int and overflow. This patch limits sk_rcvbuf and sk_sndbuf
> to 0x7fff0000 and casts them to u32 to avoid signed-integer
> overflow.
> 
> Fixes: c9c3321257e1 ("tcp: add tcp_add_backlog()")
> Signed-off-by: Lu Wei <luwei32@...wei.com>
> ---
>  include/net/sock.h  |  5 +++++
>  net/core/sock.c     | 10 ++++++----
>  net/ipv4/tcp_ipv4.c |  3 ++-
>  3 files changed, 13 insertions(+), 5 deletions(-)
> 
> diff --git a/include/net/sock.h b/include/net/sock.h
> index 9e464f6409a7..cc2d6c4047c2 100644
> --- a/include/net/sock.h
> +++ b/include/net/sock.h
> @@ -2529,6 +2529,11 @@ static inline void sk_wake_async(const struct sock *sk, int how, int band)
>  #define SOCK_MIN_SNDBUF		(TCP_SKB_MIN_TRUESIZE * 2)
>  #define SOCK_MIN_RCVBUF		 TCP_SKB_MIN_TRUESIZE
>  
> +/* limit sk_sndbuf and sk_rcvbuf to 0x7fff0000 to prevent overflow
> + * when adding sk_sndbuf, sk_rcvbuf and 64K in tcp_add_backlog()
> + */
> +#define SOCK_MAX_SNDRCVBUF		(INT_MAX - 0xFFFF)

Should we apply this limit in tcp_rcv_space_adjust()?

	int rcvmem, rcvbuf;
	...
	rcvbuf = min_t(u64, rcvwin * rcvmem,
		       READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]));
	if (rcvbuf > sk->sk_rcvbuf) {
		WRITE_ONCE(sk->sk_rcvbuf, rcvbuf);
	...
	}

We still have 64K of headroom even if sk_rcvbuf were INT_MAX here, though.


> +
>  static inline void sk_stream_moderate_sndbuf(struct sock *sk)
>  {
>  	u32 val;
> diff --git a/net/core/sock.c b/net/core/sock.c
> index a3ba0358c77c..33acc5e71100 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -950,7 +950,7 @@ static void __sock_set_rcvbuf(struct sock *sk, int val)
>  	/* Ensure val * 2 fits into an int, to prevent max_t() from treating it
>  	 * as a negative value.
>  	 */
> -	val = min_t(int, val, INT_MAX / 2);
> +	val = min_t(int, val, SOCK_MAX_SNDRCVBUF / 2);
>  	sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
>  
>  	/* We double it on the way in to account for "struct sk_buff" etc.
> @@ -1142,7 +1142,7 @@ int sk_setsockopt(struct sock *sk, int level, int optname,
>  		/* Ensure val * 2 fits into an int, to prevent max_t()
>  		 * from treating it as a negative value.
>  		 */
> -		val = min_t(int, val, INT_MAX / 2);
> +		val = min_t(int, val, SOCK_MAX_SNDRCVBUF / 2);
>  		sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
>  		WRITE_ONCE(sk->sk_sndbuf,
>  			   max_t(int, val * 2, SOCK_MIN_SNDBUF));
> @@ -3365,8 +3365,10 @@ void sock_init_data(struct socket *sock, struct sock *sk)
>  	timer_setup(&sk->sk_timer, NULL, 0);
>  
>  	sk->sk_allocation	=	GFP_KERNEL;
> -	sk->sk_rcvbuf		=	READ_ONCE(sysctl_rmem_default);
> -	sk->sk_sndbuf		=	READ_ONCE(sysctl_wmem_default);
> +	sk->sk_rcvbuf		=	min_t(int, SOCK_MAX_SNDRCVBUF,
> +					      READ_ONCE(sysctl_rmem_default));
> +	sk->sk_sndbuf		=	min_t(int, SOCK_MAX_SNDRCVBUF,
> +					      READ_ONCE(sysctl_wmem_default));
>  	sk->sk_state		=	TCP_CLOSE;
>  	sk_set_socket(sk, sock);
>  
> diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
> index 7a250ef9d1b7..5340733336a6 100644
> --- a/net/ipv4/tcp_ipv4.c
> +++ b/net/ipv4/tcp_ipv4.c
> @@ -1878,7 +1878,8 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
>  	 * to reduce memory overhead, so add a little headroom here.
>  	 * Few sockets backlog are possibly concurrently non empty.
>  	 */
> -	limit = READ_ONCE(sk->sk_rcvbuf) + READ_ONCE(sk->sk_sndbuf) + 64*1024;
> +	limit = (u32)READ_ONCE(sk->sk_rcvbuf) +
> +		(u32)READ_ONCE(sk->sk_sndbuf) + 64*1024;

nit: s/64*1024/64 * 1024/

$ git show --format=email | ./scripts/checkpatch.pl
CHECK: spaces preferred around that '*' (ctx:VxV)
#79: FILE: net/ipv4/tcp_ipv4.c:1882:
+		(u32)READ_ONCE(sk->sk_sndbuf) + 64*1024;
 		                                  ^


>  
>  	if (unlikely(sk_add_backlog(sk, skb, limit))) {
>  		bh_unlock_sock(sk);
> -- 
> 2.31.1
