Message-ID: <PU1P153MB01690A7767ECDF420FF78D66BFC20@PU1P153MB0169.APCP153.PROD.OUTLOOK.COM>
Date: Sun, 28 Jul 2019 18:32:24 +0000
From: Dexuan Cui <decui@...rosoft.com>
To: Sunil Muthuswamy <sunilmut@...rosoft.com>,
David Miller <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC: KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
"sashal@...nel.org" <sashal@...nel.org>,
Michael Kelley <mikelley@...rosoft.com>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"olaf@...fle.de" <olaf@...fle.de>,
"apw@...onical.com" <apw@...onical.com>,
"jasowang@...hat.com" <jasowang@...hat.com>,
vkuznets <vkuznets@...hat.com>,
"marcelo.cerri@...onical.com" <marcelo.cerri@...onical.com>
Subject: [PATCH net] hv_sock: Fix hang when a connection is closed

hvs_do_close_lock_held() may decrease the reference count to 0 and free
the sk struct completely, and then the subsequent release_sock(sk),
which operates on the already-freed memory, may hang.
Fixes: a9eeb998c28d ("hv_sock: Add support for delayed close")
Signed-off-by: Dexuan Cui <decui@...rosoft.com>
Cc: stable@...r.kernel.org
---
With the proper kernel debugging options enabled, a warning appears first:
kworker/1:0/4467 is freeing memory ..., with a lock still held there!
stack backtrace:
Workqueue: events vmbus_onmessage_work [hv_vmbus]
Call Trace:
dump_stack+0x67/0x90
debug_check_no_locks_freed.cold.52+0x78/0x7d
slab_free_freelist_hook+0x85/0x140
kmem_cache_free+0xa5/0x380
__sk_destruct+0x150/0x260
hvs_close_connection+0x24/0x30 [hv_sock]
vmbus_onmessage_work+0x1d/0x30 [hv_vmbus]
process_one_work+0x241/0x600
worker_thread+0x3c/0x390
kthread+0x11b/0x140
ret_from_fork+0x24/0x30
and then the following release_sock(sk) can hang:
watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:0:4467]
...
irq event stamp: 62890
CPU: 1 PID: 4467 Comm: kworker/1:0 Tainted: G W 5.2.0+ #39
Workqueue: events vmbus_onmessage_work [hv_vmbus]
RIP: 0010:queued_spin_lock_slowpath+0x2b/0x1e0
...
Call Trace:
do_raw_spin_lock+0xab/0xb0
release_sock+0x19/0xb0
vmbus_onmessage_work+0x1d/0x30 [hv_vmbus]
process_one_work+0x241/0x600
worker_thread+0x3c/0x390
kthread+0x11b/0x140
ret_from_fork+0x24/0x30
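
To illustrate the pattern the fix relies on, here is a minimal
user-space sketch (plain C with made-up names, not the kernel code):
take an extra reference before calling a routine that may drop the
last one, so the object is still valid for the cleanup that follows.

#include <stdio.h>
#include <stdlib.h>

struct obj {
	int refcnt;			/* stand-in for sk->sk_refcnt */
};

static void obj_put(struct obj *o)
{
	if (--o->refcnt == 0) {
		printf("freeing object\n");
		free(o);		/* analogous to __sk_destruct() */
	}
}

/* Like hvs_do_close_lock_held(): may internally drop the last reference */
static void may_drop_last_ref(struct obj *o)
{
	obj_put(o);
}

int main(void)
{
	struct obj *o = malloc(sizeof(*o));

	o->refcnt = 1;

	o->refcnt++;			/* sock_hold(sk): pin the object */
	may_drop_last_ref(o);
	/* Without the extra hold, touching *o here would be a
	 * use-after-free -- the very access that made release_sock(sk)
	 * hang in the trace above.
	 */
	obj_put(o);			/* sock_put(sk): drop the extra ref */
	return 0;
}
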
net/vmw_vsock/hyperv_transport.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
index f2084e3f7aa4..efbda8ef1eff 100644
--- a/net/vmw_vsock/hyperv_transport.c
+++ b/net/vmw_vsock/hyperv_transport.c
@@ -309,9 +309,16 @@ static void hvs_close_connection(struct vmbus_channel *chan)
 {
 	struct sock *sk = get_per_channel_state(chan);
 
+	/* Grab an extra reference since hvs_do_close_lock_held() may decrease
+	 * the reference count to 0 by calling sock_put(sk).
+	 */
+	sock_hold(sk);
+
 	lock_sock(sk);
 	hvs_do_close_lock_held(vsock_sk(sk), true);
 	release_sock(sk);
+
+	sock_put(sk);
 }
 
 static void hvs_open_connection(struct vmbus_channel *chan)
--
2.19.1
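
Aside, not part of the patch: one way to exercise this close path from
a Linux guest is a plain AF_VSOCK socket, assuming a matching hv_sock
service is registered on the host (the port below is made up for
illustration):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
	struct sockaddr_vm addr;
	int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&addr, 0, sizeof(addr));
	addr.svm_family = AF_VSOCK;
	addr.svm_cid = VMADDR_CID_HOST;	/* the Hyper-V host */
	addr.svm_port = 0x808;		/* hypothetical service port */

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		perror("connect");

	/* Closing the socket (and the host rescinding the channel in
	 * response) drives hvs_close_connection() in the guest kernel.
	 */
	close(fd);
	return 0;
}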