Date:   Thu, 20 Oct 2022 16:15:40 -0700
From:   Eric Dumazet <edumazet@...gle.com>
To:     Kuniyuki Iwashima <kuniyu@...zon.com>
Cc:     luwei32@...wei.com, asml.silence@...il.com, ast@...nel.org,
        davem@...emloft.net, dsahern@...nel.org, imagedong@...cent.com,
        kuba@...nel.org, linux-kernel@...r.kernel.org,
        martin.lau@...nel.org, ncardwell@...gle.com,
        netdev@...r.kernel.org, pabeni@...hat.com, yoshfuji@...ux-ipv6.org
Subject: Re: [PATCH -next,v2] tcp: fix a signed-integer-overflow bug in tcp_add_backlog()

On Thu, Oct 20, 2022 at 1:57 PM Kuniyuki Iwashima <kuniyu@...zon.com> wrote:
>
> Hi,
>
> The subject should be
>
>   [PATCH net v2] tcp: ....
>
> so that this patch will be backported to the stable tree.
>
>
> From:   Lu Wei <luwei32@...wei.com>
> Date:   Thu, 20 Oct 2022 22:32:01 +0800
> > The type of sk_rcvbuf and sk_sndbuf in struct sock is int, and
> > in tcp_add_backlog(), the variable limit is calculated by adding
> > sk_rcvbuf, sk_sndbuf and 64 * 1024; the sum may exceed the max
> > value of int and overflow. This patch limits sk_rcvbuf and
> > sk_sndbuf to 0x7fff0000 and casts them to u32 to avoid
> > signed-integer overflow.
> >
> > Fixes: c9c3321257e1 ("tcp: add tcp_add_backlog()")
> > Signed-off-by: Lu Wei <luwei32@...wei.com>
> > ---
> >  include/net/sock.h  |  5 +++++
> >  net/core/sock.c     | 10 ++++++----
> >  net/ipv4/tcp_ipv4.c |  3 ++-
> >  3 files changed, 13 insertions(+), 5 deletions(-)
> >
> > diff --git a/include/net/sock.h b/include/net/sock.h
> > index 9e464f6409a7..cc2d6c4047c2 100644
> > --- a/include/net/sock.h
> > +++ b/include/net/sock.h
> > @@ -2529,6 +2529,11 @@ static inline void sk_wake_async(const struct sock *sk, int how, int band)
> >  #define SOCK_MIN_SNDBUF              (TCP_SKB_MIN_TRUESIZE * 2)
> >  #define SOCK_MIN_RCVBUF               TCP_SKB_MIN_TRUESIZE
> >
> > +/* limit sk_sndbuf and sk_rcvbuf to 0x7fff0000 to prevent overflow
> > + * when adding sk_sndbuf, sk_rcvbuf and 64K in tcp_add_backlog()
> > + */
> > +#define SOCK_MAX_SNDRCVBUF           (INT_MAX - 0xFFFF)
>
> Should we apply this limit in tcp_rcv_space_adjust()?
>
>         int rcvmem, rcvbuf;
>         ...
>         rcvbuf = min_t(u64, rcvwin * rcvmem,
>                        READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]));
>         if (rcvbuf > sk->sk_rcvbuf) {
>                 WRITE_ONCE(sk->sk_rcvbuf, rcvbuf);
>         ...
>         }
>
> We still have 64K space if sk_rcvbuf were INT_MAX here though.
>
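
In code, the clamp being suggested would look roughly like this (a
sketch only: SOCK_MAX_SNDRCVBUF is the macro from the quoted v2 patch,
and the surrounding lines paraphrase the tcp_rcv_space_adjust()
excerpt above; this is not a tested kernel change):

        rcvbuf = min_t(u64, rcvwin * rcvmem,
                       READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]));
        /* never let autotuning push sk_rcvbuf past the overflow-safe cap */
        rcvbuf = min_t(int, rcvbuf, SOCK_MAX_SNDRCVBUF);
        if (rcvbuf > sk->sk_rcvbuf)
                WRITE_ONCE(sk->sk_rcvbuf, rcvbuf);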

Thinking more about this, I think we could solve the issue by
reducing the budget we account for sndbuf.

ACK packets are much smaller than the payload.
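
For scale, with the common defaults tcp_rmem[2] = 6291456 and
tcp_wmem[2] = 4194304 (illustrative numbers; the actual values are
per-system sysctls), the backlog limit becomes:

        before: 6291456 + 4194304 + 65536        = 10551296 bytes
        after:  6291456 + (4194304 >> 1) + 65536 =  8454400 bytes

The halved sndbuf term still leaves plenty of room to backlog the
ACKs a send-heavy socket can receive, since ACKs carry no payload.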

diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 6376ad91576546d48ffcc8ed9cdf8a1904679e33..4bbefb50fe472f69f3eaa1983539595b6fd2e9f4 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1874,11 +1874,13 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
        __skb_push(skb, hdrlen);

 no_coalesce:
+       limit = READ_ONCE(sk->sk_rcvbuf) + (READ_ONCE(sk->sk_sndbuf) >> 1);
+
        /* Only socket owner can try to collapse/prune rx queues
         * to reduce memory overhead, so add a little headroom here.
         * Few sockets backlog are possibly concurrently non empty.
         */
-       limit = READ_ONCE(sk->sk_rcvbuf) + READ_ONCE(sk->sk_sndbuf) + 64*1024;
+       limit += 64*1024;

        if (unlikely(sk_add_backlog(sk, skb, limit))) {
                bh_unlock_sock(sk);

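For completeness, the undefined behavior being fixed is easy to
reproduce outside the kernel (a standalone userspace sketch; the
variable names only mirror the socket fields, and the worst-case
values are illustrative):

        /* gcc -fsanitize=signed-integer-overflow overflow_demo.c */
        #include <limits.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                int sk_rcvbuf = INT_MAX;  /* worst-case buffer sizes */
                int sk_sndbuf = INT_MAX;

                /* int + int overflows before the implicit conversion
                 * to u32; this is what UBSAN flags in tcp_add_backlog():
                 *
                 *   uint32_t limit = sk_rcvbuf + sk_sndbuf + 64 * 1024;
                 *
                 * Casting each operand first keeps the arithmetic in
                 * unsigned (well-defined, wrapping) territory, and the
                 * halved sndbuf budget follows the diff above:
                 */
                uint32_t limit = (uint32_t)sk_rcvbuf +
                                 ((uint32_t)sk_sndbuf >> 1) + 64 * 1024;

                printf("limit = %u\n", limit);
                return 0;
        }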