Message-ID: <CAL+tcoBqfC=Gg8=AVEFC1APWXq8mFtXBmu+6jt=MnsVYWc6cig@mail.gmail.com>
Date: Mon, 12 Aug 2024 08:36:13 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: Kuniyuki Iwashima <kuniyu@...zon.com>
Cc: davem@...emloft.net, dsahern@...nel.org, edumazet@...gle.com, fw@...len.de,
kuba@...nel.org, linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
pabeni@...hat.com, syzbot+8ea26396ff85d23a8929@...kaller.appspotmail.com,
syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [net?] WARNING: refcount bug in inet_twsk_kill
On Mon, Aug 12, 2024 at 7:09 AM Kuniyuki Iwashima <kuniyu@...zon.com> wrote:
>
> From: Kuniyuki Iwashima <kuniyu@...zon.com>
> Date: Sun, 11 Aug 2024 16:00:29 -0700
> > From: Florian Westphal <fw@...len.de>
> > Date: Sun, 11 Aug 2024 18:28:50 +0200
> > > Florian Westphal <fw@...len.de> wrote:
> > > > https://syzkaller.appspot.com/x/log.txt?x=117f3182980000
> > > >
> > > > ... shows two cores racing:
> > > >
> > > > [ 3127.234402][ T1396] CPU: 3 PID: 1396 Comm: syz-executor.3 Not
> > > > and
> > > > [ 3127.257864][ T13] CPU: 1 PID: 13 Comm: kworker/u32:1 Not tainted 6.9.0-syzkalle (netns cleanup net).
> > > >
> > > >
> > > > The first splat's backtrace shows the invocation of tcp_sk_exit_batch()
> > > > from the netns error unwinding code.
> > > >
> > > > The second one lacks a backtrace, but it's also in tcp_sk_exit_batch(),
> > >
> > > ... which doesn't work. Does this look like a plausible
> > > theory/explanation?
> >
> > Yes! The problem here is that inet_twsk_purge() operates on twsks
> > whose net is not on the batch's net_exit_list, but I think such a
> > check is overkill and we can work around it in another way.
> >
> >
> > >
> > > Given:
> > > 1 exiting netns that has >= 1 tw sk.
> > > 1 (unrelated) netns that failed in setup_net
> > >
> > > ... we run into the following race:
> > >
> > > exiting netns, from cleanup wq, calls tcp_sk_exit_batch(), which calls
> > > inet_twsk_purge(&tcp_hashinfo).
> > >
> > > At the same time, from the error unwinding code, we also call tcp_sk_exit_batch().
> > >
> > > Both threads walk tcp_hashinfo ehash buckets.
> > >
> > > From the work queue (normal netns exit path), we hit
> > >
> > > 303 if (state == TCP_TIME_WAIT) {
> > > 304 inet_twsk_deschedule_put(inet_twsk(sk));
> > >
> > > Because both threads operate on tcp_hashinfo, the unrelated
> > > struct net (the exiting net) is also visible to the error-unwinding thread.
> > >
> > > So, error unwinding code will call
> > >
> > > 303 if (state == TCP_TIME_WAIT) {
> > > 304 inet_twsk_deschedule_put(inet_twsk(sk));
> > >
> > > for the same tw sk and both threads do
> > >
> > > 218 void inet_twsk_deschedule_put(struct inet_timewait_sock *tw)
> > > 219 {
> > > 220 if (del_timer_sync(&tw->tw_timer))
> > > 221 inet_twsk_kill(tw);
> > >
> > > The error unwind path cancels the timer and calls inet_twsk_kill,
> > > while the work queue sees the timer as already shut down, so it
> > > ends up returning to tcp_sk_exit_batch(), where it will WARN here:
> > >
> > > WARN_ON_ONCE(!refcount_dec_and_test(&net->ipv4.tcp_death_row.tw_refcount));
> > >
> > > ... because the supposedly-last tw_refcount decrement did not drop
> > > it down to 0.
> > >
> > > Meanwhile, the error-unwinding thread calls refcount_dec() on
> > > tw_refcount, which now drops it to 0 instead of 1, which
> > > produces another WARN splat.
Right, I should have thought of this last night. I noticed that two
'killers' may enter inet_twsk_kill() concurrently (at the time, I was
focused on tcp_abort(), which can cause a similar issue), but I let it
go too easily :(
Thanks for the detailed explanation :)
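
For anyone digging through the archive later, here is a tiny userspace
model of the interleaving described above (the pthread scaffolding and
all names are mine, purely for illustration; the sleeps deliberately
force the losing ordering, and this is of course not kernel code):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

/* Models tw->tw_timer (1 = still armed) and the exiting net's
 * tcp_death_row.tw_refcount (1 base ref + 1 for the live twsk).
 */
static atomic_int timer_pending = 1;
static atomic_int tw_refcount = 2;

/* Error-unwinding path: finds the foreign twsk in the shared ehash,
 * wins the del_timer_sync() race, but is slow to finish the kill.
 */
static void *error_unwind(void *arg)
{
        (void)arg;
        if (atomic_exchange(&timer_pending, 0) == 1) {
                usleep(1000);   /* the kill is still in flight */
                /* inet_twsk_kill() drops the twsk ref; going 1 -> 0
                 * here models the second splat (a plain
                 * refcount_dec() hitting zero).
                 */
                if (atomic_fetch_sub(&tw_refcount, 1) - 1 == 0)
                        printf("error-unwind: WARN: refcount_dec() dropped to 0\n");
        }
        return NULL;
}

/* Cleanup workqueue: del_timer_sync() fails because the timer is
 * already cancelled, so inet_twsk_kill() is skipped and the final
 * check in tcp_sk_exit_batch() runs while the twsk ref is held.
 */
static void *cleanup_wq(void *arg)
{
        (void)arg;
        if (atomic_exchange(&timer_pending, 0) == 1)
                atomic_fetch_sub(&tw_refcount, 1); /* not taken here */
        /* WARN_ON_ONCE(!refcount_dec_and_test(...)): the "last"
         * decrement goes 2 -> 1 instead of 1 -> 0.
         */
        if (atomic_fetch_sub(&tw_refcount, 1) - 1 != 0)
                printf("cleanup-wq: WARN: final decrement did not reach 0\n");
        return NULL;
}

int main(void)
{
        pthread_t unwind, wq;

        pthread_create(&unwind, NULL, error_unwind, NULL);
        usleep(100);    /* let error_unwind cancel the timer first */
        pthread_create(&wq, NULL, cleanup_wq, NULL);
        pthread_join(unwind, NULL);
        pthread_join(wq, NULL);
        return 0;
}

Built with "gcc -pthread", it prints both modeled WARNs in the same
order as the two splats in the syzbot report.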
> > >
> > > I'll ponder ways to fix this tomorrow unless someone
> > > else already has a better theory/solution.
> >
> > We need to sync the two inet_twsk_kill() calls, so maybe have one
> > give up if the twsk is not hashed?
> >
> > ---8<---
> > diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
> > index 337390ba85b4..51889567274b 100644
> > --- a/net/ipv4/inet_timewait_sock.c
> > +++ b/net/ipv4/inet_timewait_sock.c
> > @@ -52,7 +52,10 @@ static void inet_twsk_kill(struct inet_timewait_sock *tw)
> > struct inet_bind_hashbucket *bhead, *bhead2;
> >
> > spin_lock(lock);
> > - sk_nulls_del_node_init_rcu((struct sock *)tw);
> > + if (!sk_nulls_del_node_init_rcu((struct sock *)tw)) {
> > + spin_unlock(lock);
> > + return false;
>
> forgot to remove false, just return :)
It does make sense to me!
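
For the archive, with that correction folded in, the hunk would read
like this (just my restatement of your diff above, untested):

	spin_lock(lock);
	/* Another purger already unhashed this twsk; let that caller
	 * finish the teardown and bail out here.
	 */
	if (!sk_nulls_del_node_init_rcu((struct sock *)tw)) {
		spin_unlock(lock);
		return;
	}
	spin_unlock(lock);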
Thanks,
Jason
>
>
> > + }
> > spin_unlock(lock);
> >
> > /* Disassociate with bind bucket. */
> > ---8<---
>