Message-ID: <20240811230029.95258-1-kuniyu@amazon.com>
Date: Sun, 11 Aug 2024 16:00:29 -0700
From: Kuniyuki Iwashima <kuniyu@...zon.com>
To: <fw@...len.de>
CC: <davem@...emloft.net>, <dsahern@...nel.org>, <edumazet@...gle.com>,
	<kuba@...nel.org>, <kuniyu@...zon.com>, <linux-kernel@...r.kernel.org>,
	<netdev@...r.kernel.org>, <pabeni@...hat.com>,
	<syzbot+8ea26396ff85d23a8929@...kaller.appspotmail.com>,
	<syzkaller-bugs@...glegroups.com>
Subject: Re: [syzbot] [net?] WARNING: refcount bug in inet_twsk_kill

From: Florian Westphal <fw@...len.de>
Date: Sun, 11 Aug 2024 18:28:50 +0200
> Florian Westphal <fw@...len.de> wrote:
> > https://syzkaller.appspot.com/x/log.txt?x=117f3182980000
> > 
> > ... shows two cores racing:
> > 
> > [ 3127.234402][ T1396] CPU: 3 PID: 1396 Comm: syz-executor.3 Not
> > and
> > [ 3127.257864][   T13] CPU: 1 PID: 13 Comm: kworker/u32:1 Not tainted 6.9.0-syzkalle (netns cleanup net).
> > 
> > 
> > The first splat's backtrace shows the invocation of tcp_sk_exit_batch()
> > from the netns error-unwinding code.
> > 
> > The second one lacks a backtrace, but it's also in tcp_sk_exit_batch(),
> 
> ... which doesn't work.  Does this look like a plausible
> theory/explanation?

Yes!  The problem here is that inet_twsk_purge() can operate on a twsk
whose netns is not on the caller's net_exit_list.  But I think checking
against net_exit_list would be overkill; we can work around this in
another way.
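
(For context: IIRC the ehash walk in inet_twsk_purge() keys off the
netns refcount rather than the caller's exit list, roughly

	if (refcount_read(&sock_net(sk)->ns.count))
		continue;

so a twsk belonging to *any* dying netns is visible to *both* exiting
threads, which is what allows the race below.)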


> 
> Given:
> 1 exiting netns with >= 1 tw sk.
> 1 (unrelated) netns that failed in setup_net
> 
> ... we run into the following race:
> 
> The exiting netns, from the cleanup wq, calls tcp_sk_exit_batch(),
> which calls inet_twsk_purge(&tcp_hashinfo).
> 
> At the same time, from the error-unwinding code, we also call tcp_sk_exit_batch().
> 
> Both threads walk tcp_hashinfo ehash buckets.
> 
> From the work queue (normal netns exit path), we hit
> 
> 303                         if (state == TCP_TIME_WAIT) {
> 304                                 inet_twsk_deschedule_put(inet_twsk(sk));
> 
> Because both threads operate on tcp_hashinfo, the unrelated
> struct net (the exiting net) is also visible to the error-unwinding thread.
> 
> So, the error-unwinding code will call
> 
> 303                         if (state == TCP_TIME_WAIT) {
> 304                                 inet_twsk_deschedule_put(inet_twsk(sk));
> 
> for the same tw sk, and both threads do
> 
> 218 void inet_twsk_deschedule_put(struct inet_timewait_sock *tw)
> 219 {
> 220         if (del_timer_sync(&tw->tw_timer))
> 221                 inet_twsk_kill(tw);
> 
> The error unwind path cancels the timer and calls inet_twsk_kill(),
> while the work queue sees the timer as already shut down, so it
> returns to tcp_sk_exit_batch(), where it will WARN here:
> 
>   WARN_ON_ONCE(!refcount_dec_and_test(&net->ipv4.tcp_death_row.tw_refcount));
> 
> ... because the supposedly-last tw_refcount decrement did not drop
> it down to 0.
> 
> Meanwhile, the error-unwinding thread calls refcount_dec() on
> tw_refcount, which now drops to 0 instead of 1, producing
> another warn splat.
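
To make the refcount arithmetic concrete, here is a tiny userspace
model of the two splats.  This is not kernel code: refcount_dec() and
refcount_dec_and_test() are just mimicked with C11 atomics, and the
losing interleaving is forced with joins so it reproduces
deterministically.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* tw_refcount: 1 for the netns itself + 1 for the lone tw sk. */
static atomic_int tw_refcount = 2;

/* Mimic the kernel helpers: dec_and_test() is the only legal way to
 * drop the last reference; a plain dec() warns if it hits zero. */
static bool refcount_dec_and_test(atomic_int *r)
{
	return atomic_fetch_sub(r, 1) == 1;
}

static void refcount_dec(atomic_int *r)
{
	if (atomic_fetch_sub(r, 1) == 1)
		fprintf(stderr, "WARN: refcount_dec() hit 0\n");
}

/* Work queue: purge had nothing left to do (the timer was already
 * cancelled by the other thread), so the "last" put only drops 2 -> 1. */
static void *exit_batch(void *arg)
{
	if (!refcount_dec_and_test(&tw_refcount))
		fprintf(stderr, "WARN: tw_refcount not zero\n");
	return NULL;
}

/* Error unwind: the tail of inet_twsk_kill() drops 1 -> 0 via a
 * plain dec, triggering the second splat. */
static void *error_unwind(void *arg)
{
	refcount_dec(&tw_refcount);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, exit_batch, NULL);
	pthread_join(t, NULL);
	pthread_create(&t, NULL, error_unwind, NULL);
	pthread_join(t, NULL);
	return 0;
}

Both WARN lines fire, matching the two splats syzbot saw.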
> 
> I'll ponder ways to fix this tomorrow unless someone
> else already has a better theory/solution.

We need to synchronize the two inet_twsk_kill() calls, so maybe bail
out of one if the twsk is no longer hashed?

---8<---
diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
index 337390ba85b4..51889567274b 100644
--- a/net/ipv4/inet_timewait_sock.c
+++ b/net/ipv4/inet_timewait_sock.c
@@ -52,7 +52,10 @@ static void inet_twsk_kill(struct inet_timewait_sock *tw)
 	struct inet_bind_hashbucket *bhead, *bhead2;
 
 	spin_lock(lock);
-	sk_nulls_del_node_init_rcu((struct sock *)tw);
+	if (!sk_nulls_del_node_init_rcu((struct sock *)tw)) {
+		spin_unlock(lock);
+		return;
+	}
 	spin_unlock(lock);
 
 	/* Disassociate with bind bucket. */
---8<---
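
FWIW, this works because sk_nulls_del_node_init_rcu() returns true only
for the thread that actually unlinks the node: under the ehash lock the
loser sees the twsk already unhashed and bails out before touching the
bind buckets or tw_refcount, so only one of the racing callers does the
final accounting.  (inet_twsk_kill() is void in this version, hence the
bare return above.)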
