Message-Id: <20220821125751.4185-1-yin31149@gmail.com>
Date: Sun, 21 Aug 2022 20:57:50 +0800
From: Hawkins Jiawei <yin31149@...il.com>
To: syzbot+7f0483225d0c94cb3441@...kaller.appspotmail.com,
David Howells <dhowells@...hat.com>,
Marc Dionne <marc.dionne@...istor.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>
Cc: linux-afs@...ts.infradead.org, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org, syzkaller-bugs@...glegroups.com,
linux-kernel-mentees@...ts.linuxfoundation.org,
paskripkin@...il.com, skhan@...uxfoundation.org,
18801353760@....com, Hawkins Jiawei <yin31149@...il.com>
Subject: [PATCH] rxrpc: fix bad unlock balance in rxrpc_do_sendmsg

Syzkaller reports a bad unlock balance bug as follows:
------------[ cut here ]------------
WARNING: bad unlock balance detected!
syz-executor.0/4094 is trying to release lock (&call->user_mutex) at:
[<ffffffff87c1d8d1>] rxrpc_do_sendmsg+0x851/0x1110 net/rxrpc/sendmsg.c:754
but there are no more locks to release!
other info that might help us debug this:
no locks held by syz-executor.0/4094.
stack backtrace:
[...]
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x57/0x7d lib/dump_stack.c:106
print_unlock_imbalance_bug include/trace/events/lock.h:69 [inline]
__lock_release kernel/locking/lockdep.c:5333 [inline]
lock_release.cold+0x49/0x4e kernel/locking/lockdep.c:5686
__mutex_unlock_slowpath+0x99/0x5e0 kernel/locking/mutex.c:907
rxrpc_do_sendmsg+0x851/0x1110 net/rxrpc/sendmsg.c:754
sock_sendmsg_nosec net/socket.c:714 [inline]
sock_sendmsg+0xab/0xe0 net/socket.c:734
____sys_sendmsg+0x5c2/0x7a0 net/socket.c:2485
___sys_sendmsg+0xdb/0x160 net/socket.c:2539
__sys_sendmsg+0xc3/0x160 net/socket.c:2568
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
[...]
</TASK>
------------------------------------

When the kernel sends a message through an RxRPC socket in
rxrpc_do_sendmsg(), it must hold the call->user_mutex lock; otherwise,
releasing this lock before returning from rxrpc_do_sendmsg() triggers
the bug above.
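
For context, the caller's locking pattern looks roughly like the sketch
below (paraphrased, with call lookup and error handling elided; not the
verbatim source):

int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
{
	struct rxrpc_call *call;
	int ret;

	/* ... look up or create the call ... */

	ret = mutex_lock_interruptible(&call->user_mutex);
	if (ret < 0)
		goto error_put;

	/*
	 * ... sending data may end up in rxrpc_wait_for_tx_window_intr(),
	 * which temporarily drops call->user_mutex while sleeping ...
	 */

	mutex_unlock(&call->user_mutex);	/* the unlock flagged by lockdep above */
error_put:
	/* ... put the call reference ... */
	return ret;
}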

The problem is that during rxrpc_do_sendmsg(), the kernel may call
rxrpc_wait_for_tx_window_intr() to wait for space to appear in the tx
queue or for a signal to occur. If the mutex_lock_interruptible() there
fails, the kernel returns from rxrpc_wait_for_tx_window_intr() without
re-acquiring the mutex, and the bug is then triggered when
rxrpc_do_sendmsg() releases a lock it no longer holds.
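
Paraphrased, the wait loop itself looks like the sketch below (the tx
window, call-completion and signal checks are elided); note that the
error path returns with the mutex not held:

static int rxrpc_wait_for_tx_window_intr(struct rxrpc_sock *rx,
					 struct rxrpc_call *call,
					 long *timeo)
{
	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);

		/*
		 * ... return 0 once tx window space appears, or an error if
		 * the call completed or a signal is pending ...
		 */

		trace_rxrpc_transmit(call, rxrpc_transmit_wait);
		mutex_unlock(&call->user_mutex);
		*timeo = schedule_timeout(*timeo);

		/*
		 * If interrupted here, the lock is not re-taken, but
		 * rxrpc_do_sendmsg() will still unlock it on return.
		 */
		if (mutex_lock_interruptible(&call->user_mutex) < 0)
			return sock_intr_errno(*timeo);
	}
}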

This patch fixes it by unconditionally re-acquiring the call->user_mutex
lock when mutex_lock_interruptible() fails, before returning from
rxrpc_wait_for_tx_window_intr(), so that the unlock in rxrpc_do_sendmsg()
stays balanced.

Reported-and-tested-by: syzbot+7f0483225d0c94cb3441@...kaller.appspotmail.com
Signed-off-by: Hawkins Jiawei <yin31149@...il.com>
---
net/rxrpc/sendmsg.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
index 1d38e279e2ef..e13043d357d5 100644
--- a/net/rxrpc/sendmsg.c
+++ b/net/rxrpc/sendmsg.c
@@ -53,8 +53,10 @@ static int rxrpc_wait_for_tx_window_intr(struct rxrpc_sock *rx,
 		trace_rxrpc_transmit(call, rxrpc_transmit_wait);
 		mutex_unlock(&call->user_mutex);
 		*timeo = schedule_timeout(*timeo);
-		if (mutex_lock_interruptible(&call->user_mutex) < 0)
+		if (mutex_lock_interruptible(&call->user_mutex) < 0) {
+			mutex_lock(&call->user_mutex);
 			return sock_intr_errno(*timeo);
+		}
 	}
 }
--
2.25.1