Message-ID: <67e17365461e3_2f6623294ea@willemb.c.googlers.com.notmuch>
Date: Mon, 24 Mar 2025 10:59:49 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Kuniyuki Iwashima <kuniyu@...zon.com>, 
 Willem de Bruijn <willemdebruijn.kernel@...il.com>, 
 "David S. Miller" <davem@...emloft.net>, 
 David Ahern <dsahern@...nel.org>, 
 Eric Dumazet <edumazet@...gle.com>, 
 Jakub Kicinski <kuba@...nel.org>, 
 Paolo Abeni <pabeni@...hat.com>
Cc: Simon Horman <horms@...nel.org>, 
 Kuniyuki Iwashima <kuniyu@...zon.com>, 
 Kuniyuki Iwashima <kuni1840@...il.com>, 
 netdev@...r.kernel.org
Subject: Re: [PATCH v1 net 1/3] udp: Fix multiple wraparounds of
 sk->sk_rmem_alloc.

Kuniyuki Iwashima wrote:
> __udp_enqueue_schedule_skb() has the following condition:
> 
>   if (atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf)
>           goto drop;
> 
> sk->sk_rcvbuf is initialised from net.core.rmem_default and can
> later be configured with SO_RCVBUF, which is capped by
> net.core.rmem_max, or with SO_RCVBUFFORCE, which is not.
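> 
> For instance, a privileged process (SO_RCVBUFFORCE requires
> CAP_NET_ADMIN) can raise sk_rcvbuf close to INT_MAX (matching the
> rb2147483646 in the ss output below) with something like the
> following, where fd is a hypothetical UDP socket; note that the
> kernel doubles the requested value:
> 
>   int v = INT_MAX / 2;
> 
>   setsockopt(fd, SOL_SOCKET, SO_RCVBUFFORCE, &v, sizeof(v));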
> 
> If sk->sk_rcvbuf is set to INT_MAX, the condition is always false,
> as sk->sk_rmem_alloc is also a signed int and can never exceed
> INT_MAX.
> 
> Then, the size of the incoming skb is added to sk->sk_rmem_alloc
> unconditionally.
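> 
> A minimal userspace sketch of this broken pattern (not kernel code;
> the values are hypothetical, and it should be compiled with -fwrapv
> to mirror the kernel's -fno-strict-overflow, so the signed wrap is
> well-defined):
> 
>   #include <limits.h>
> 
>   int main(void)
>   {
>           int rmem_alloc = 0;                /* sk->sk_rmem_alloc */
>           const int rcvbuf = INT_MAX;        /* sk->sk_rcvbuf     */
>           const int truesize = 1500;         /* incoming skb      */
> 
>           for (long i = 0; i < 3000000; i++) {
>                   if (rmem_alloc > rcvbuf)   /* never true for any int */
>                           continue;          /* the drop path is dead  */
>                   rmem_alloc += truesize;    /* charged unconditionally,
>                                               * wraps past INT_MAX     */
>           }
>           return 0;
>   }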
> 
> This results in integer overflow (possibly multiple times) of
> sk->sk_rmem_alloc and allows a single socket to hold skbs consuming
> up to net.core.udp_mem[1] pages of memory.
> 
> For example, if we set udp_mem[1] to a large value, set
> sk->sk_rcvbuf to INT_MAX, and flood the socket with packets, we can
> observe multiple overflows:
> 
>   # cat /proc/net/sockstat | grep UDP:
>   UDP: inuse 3 mem 7956736  <-- (7956736 << 12) bytes > INT_MAX * 15
>                                              ^- PAGE_SHIFT
>   # ss -uam
>   State  Recv-Q      ...
>   UNCONN -1757018048 ...    <-- flipping the sign repeatedly
>          skmem:(r2537949248,rb2147483646,t0,tb212992,f1984,w0,o0,bl0,d0)
> 
> Previously, we had a boundary check for INT_MAX, which was removed by
> commit 6a1f12dd85a8 ("udp: relax atomic operation on sk->sk_rmem_alloc").
> 
> A complete fix would be to revert it and cap the right operand at
> INT_MAX:
> 
>   rmem = atomic_add_return(size, &sk->sk_rmem_alloc);
>   if (rmem > min(size + (unsigned int)sk->sk_rcvbuf, INT_MAX))
>           goto uncharge_drop;
> 
> but we do not want to bring the expensive atomic_add_return() back
> just for this corner case.
> 
> So, let's perform the first check as an unsigned int comparison to
> detect the integer overflow.
> 
> Note that we still allow a single wraparound, which is observable
> from userspace, but this is acceptable: it is unlikely that recv()
> goes uncalled for a long period, and the negative value soon flips
> back to positive after a few recv() calls.

Can we do better than this?
Is this because of the "Always allow at least one packet" below, and
due to testing the value of the counter without skb->truesize added?

        /* Immediately drop when the receive queue is full.
         * Always allow at least one packet.
         */
        rmem = atomic_read(&sk->sk_rmem_alloc);
        rcvbuf = READ_ONCE(sk->sk_rcvbuf);
        if (rmem > rcvbuf)
                goto drop;
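
For reference, in the current code the truesize charge happens
further down in __udp_enqueue_schedule_skb(), only after this check
has passed (simplified sketch; the code in between is omitted):

        size = skb->truesize;
        ...
        atomic_add(size, &sk->sk_rmem_alloc);

so rmem can end up above rcvbuf by one skb's truesize before the
next enqueue sees it, and with rcvbuf at INT_MAX that first excess
is what starts the wraparound.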

> 
>   # cat /proc/net/sockstat | grep UDP:
>   UDP: inuse 3 mem 524288  <-- (INT_MAX + 1) >> 12
> 
>   # ss -uam
>   State  Recv-Q      ...
>   UNCONN -2147482816 ...   <-- INT_MAX + 831 bytes
>          skmem:(r2147484480,rb2147483646,t0,tb212992,f3264,w0,o0,bl0,d14468947)
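> 
> The r value above is the wrapped counter reinterpreted as unsigned:
> (unsigned int)-2147482816 == 2147484480.  A minimal sketch of why
> the patched check fires at exactly this point (values taken from
> the output above):
> 
>   int rmem = -2147482816;          /* sk_rmem_alloc after the wrap */
>   unsigned int rcvbuf = INT_MAX;   /* sk_rcvbuf                    */
> 
>   int drop_old = rmem > (int)rcvbuf;          /* 0: accepted */
>   int drop_new = (unsigned int)rmem > rcvbuf; /* 1: dropped  */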
> 
> Fixes: 6a1f12dd85a8 ("udp: relax atomic operation on sk->sk_rmem_alloc")
> Signed-off-by: Kuniyuki Iwashima <kuniyu@...zon.com>
> ---
>  net/ipv4/udp.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
> index a9bb9ce5438e..a1e60aab29b5 100644
> --- a/net/ipv4/udp.c
> +++ b/net/ipv4/udp.c
> @@ -1735,7 +1735,7 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
>  	 */
>  	rmem = atomic_read(&sk->sk_rmem_alloc);
>  	rcvbuf = READ_ONCE(sk->sk_rcvbuf);
> -	if (rmem > rcvbuf)
> +	if ((unsigned int)rmem > rcvbuf)
>  		goto drop;
>  
>  	/* Under mem pressure, it might be helpful to help udp_recvmsg()
> -- 
> 2.48.1
> 
