Message-ID: <20240501165233.24657-1-kuniyu@amazon.com>
Date: Wed, 1 May 2024 09:52:33 -0700
From: Kuniyuki Iwashima <kuniyu@...zon.com>
To: <edumazet@...gle.com>
CC: <anderson@...elesecurity.com>, <kuniyu@...zon.com>,
<netdev@...r.kernel.org>
Subject: Re: use-after-free warnings in tcp_v4_connect() due to inet_twsk_hashdance() inserting the object into ehash table without initializing its reference counter
From: Eric Dumazet <edumazet@...gle.com>
Date: Wed, 1 May 2024 08:56:51 +0200
> On Wed, May 1, 2024 at 2:22 AM Kuniyuki Iwashima <kuniyu@...zon.com> wrote:
> >
> > +cc Eric
> >
> > From: Anderson Nascimento <anderson@...elesecurity.com>
> > Date: Tue, 30 Apr 2024 19:00:34 -0300
> > > Hello,
> >
> > Hi,
> >
> > Thanks for the detailed report.
> >
> > >
> > > There is a bug in inet_twsk_hashdance(). This function inserts a
> > > time-wait socket in the established hash table without initializing the
> > > object's reference counter, as seen below. The reference counter
> > > initialization is done after the object is added to the established hash
> > > table and the lock is released. Because of this, a sock_hold() in
> > > tcp_twsk_unique() and other operations on the object trigger warnings
> > > from the reference counter saturation mechanism. The warnings can also
> > > be seen below. They were triggered on Fedora 39 Linux kernel v6.8.
> > >
> > > The bug is triggered via a connect() system call on a TCP socket,
> > > reaching __inet_check_established() and then passing the time-wait
> > > socket to tcp_twsk_unique(). Other operations are also performed on the
> > > time-wait socket in __inet_check_established() before its reference
> > > counter is initialized correctly by inet_twsk_hashdance(). The fix seems
> > > to be to move the reference counter initialization inside the lock,
> >
> > or use refcount_inc_not_zero() and give up on reusing the port
> > under the race?
> >
> > ---8<---
> > diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
> > index 0427deca3e0e..637f4965326d 100644
> > --- a/net/ipv4/tcp_ipv4.c
> > +++ b/net/ipv4/tcp_ipv4.c
> > @@ -175,8 +175,13 @@ int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp)
> >                          tp->rx_opt.ts_recent = tcptw->tw_ts_recent;
> >                          tp->rx_opt.ts_recent_stamp = tcptw->tw_ts_recent_stamp;
> >                  }
> > -                sock_hold(sktw);
> > -                return 1;
> > +
> > +                /* Here, sk_refcnt could be 0 because inet_twsk_hashdance() puts
> > +                 * twsk into ehash and releases the bucket lock *before* setting
> > +                 * sk_refcnt. Then, give up on reusing the port.
> > +                 */
> > +                if (likely(refcount_inc_not_zero(&sktw->sk_refcnt)))
> > +                        return 1;
> >          }
> >
>
> Thanks for CCing me.
>
> Nice analysis from Anderson! Have you found this with a fuzzer?
>
> This patch would avoid the refcount splat, but it would leave side
> effects on tp; I am too lazy to double-check them.
Ah exactly :)
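
(The side effect Eric refers to: in the patch above, the refcount check sits
after tp->rx_opt.ts_recent and tp->rx_opt.ts_recent_stamp have already been
copied from the twsk, so giving up at that point still leaves tp modified.
Eric's version below takes the reference before touching tp.)

For readers unfamiliar with the primitive: refcount_inc_not_zero() takes a
reference only if the counter is already nonzero, so a lookup that races with
inet_twsk_hashdance() before sk_refcnt is initialized simply gives up. A
minimal, self-contained userspace analogue using C11 atomics is sketched
below; it is illustrative only, not the kernel's refcount.h implementation,
and the struct/function names are invented for the example.

/* Userspace sketch of the "increment only if nonzero" pattern behind
 * refcount_inc_not_zero().  Illustrative analogue only; the kernel
 * implementation lives in include/linux/refcount.h.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct obj {
        atomic_int refcnt;      /* 0 = not yet published or already dead */
};

static bool obj_get_not_zero(struct obj *o)
{
        int old = atomic_load(&o->refcnt);

        while (old != 0) {
                /* CAS loop: bump the counter only while it is still nonzero. */
                if (atomic_compare_exchange_weak(&o->refcnt, &old, old + 1))
                        return true;    /* reference taken */
        }
        return false;   /* caller must give up instead of using the object */
}

int main(void)
{
        /* Like a twsk already visible in ehash before sk_refcnt is set. */
        struct obj tw = { .refcnt = 0 };

        printf("before init: got ref? %d\n", obj_get_not_zero(&tw));    /* 0 */

        atomic_store(&tw.refcnt, 3);    /* hashdance-style late initialization */
        printf("after init:  got ref? %d\n", obj_get_not_zero(&tw));    /* 1 */
        return 0;
}

A similar userspace sketch of the READ_ONCE()/WRITE_ONCE() annotation Eric
suggests next is appended after his diff at the end of this message.
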
>
> Incidentally, I think we have to annotate data-races on
> tcptw->tw_ts_recent and tcptw->tw_ts_recent_stamp.
>
> Perhaps something like this instead?
This looks good to me.
>
> diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
> index 0427deca3e0eb9239558aa124a41a1525df62a04..f1e3707d0b33180a270e6d3662d4cf17a4f72bb8 100644
> --- a/net/ipv4/tcp_ipv4.c
> +++ b/net/ipv4/tcp_ipv4.c
> @@ -155,6 +155,10 @@ int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp)
>          if (tcptw->tw_ts_recent_stamp &&
>              (!twp || (reuse && time_after32(ktime_get_seconds(),
>                                              tcptw->tw_ts_recent_stamp)))) {
> +
> +                if (!refcount_inc_not_zero(&sktw->sk_refcnt))
> +                        return 0;
> +
>                  /* In case of repair and re-using TIME-WAIT sockets we still
>                   * want to be sure that it is safe as above but honor the
>                   * sequence numbers and time stamps set as part of the repair
> @@ -175,7 +179,6 @@ int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp)
>                          tp->rx_opt.ts_recent = tcptw->tw_ts_recent;
>                          tp->rx_opt.ts_recent_stamp = tcptw->tw_ts_recent_stamp;
>                  }
> -                sock_hold(sktw);
>                  return 1;
>          }
>
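
On the data-race annotation: in the kernel that would typically mean wrapping
the lockless accesses to tcptw->tw_ts_recent and tcptw->tw_ts_recent_stamp in
READ_ONCE()/WRITE_ONCE(); which call sites need it is for the eventual patch
to decide. As a rough, self-contained userspace sketch of what such marked
accesses provide, with C11 relaxed atomics standing in for the kernel macros
(the variable and function names below are invented for the example):

/* Userspace analogue of annotating a data race: one thread updates a
 * timestamp while another reads it locklessly.  Relaxed C11 atomics play
 * the role of WRITE_ONCE()/READ_ONCE() here; this is an analogy, not the
 * kernel implementation.  Build with: cc -std=c11 -pthread demo.c
 */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static _Atomic long ts_recent;          /* stand-in for tcptw->tw_ts_recent */

static void *writer(void *arg)
{
        (void)arg;
        for (long t = 1; t <= 100000; t++)
                atomic_store_explicit(&ts_recent, t,
                                      memory_order_relaxed);    /* ~WRITE_ONCE() */
        return NULL;
}

static void *reader(void *arg)
{
        long seen = 0;

        (void)arg;
        for (int i = 0; i < 100000; i++)
                seen = atomic_load_explicit(&ts_recent,
                                            memory_order_relaxed); /* ~READ_ONCE() */
        printf("reader last saw %ld\n", seen);
        return NULL;
}

int main(void)
{
        pthread_t w, r;

        pthread_create(&w, NULL, writer, NULL);
        pthread_create(&r, NULL, reader, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
        return 0;
}

The marked accesses document the intentional lockless sharing and keep the
compiler from tearing or re-loading the value, which is what the annotation
Eric proposes is about.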