Message-Id: <20190814021548.16001-29-sashal@kernel.org>
Date: Tue, 13 Aug 2019 22:15:07 -0400
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: David Howells <dhowells@...hat.com>,
syzbot+72af434e4b3417318f84@...kaller.appspotmail.com,
Marc Dionne <marc.dionne@...istor.com>,
Jeffrey Altman <jaltman@...istor.com>,
Sasha Levin <sashal@...nel.org>, linux-afs@...ts.infradead.org,
netdev@...r.kernel.org
Subject: [PATCH AUTOSEL 4.19 29/68] rxrpc: Fix potential deadlock
From: David Howells <dhowells@...hat.com>
[ Upstream commit 60034d3d146b11922ab1db613bce062dddc0327a ]
There is a potential deadlock in rxrpc_peer_keepalive_dispatch() whereby
rxrpc_put_peer() is called with the peer_hash_lock held, but if it reduces
the peer's refcount to 0, rxrpc_put_peer() calls __rxrpc_put_peer() - which
then tries to take the already held lock.
Fix this by providing a version of rxrpc_put_peer() that can be called in
situations where the lock is already held.
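For illustration only, here is a minimal userspace sketch of the pattern this
fix applies; it is not the rxrpc code (the real change is in the diff below),
a pthread mutex stands in for the kernel spinlock, and the names obj_put(),
obj_put_locked() and keepalive_dispatch() are invented:

	/* Hypothetical sketch: why a put routine that takes its own lock
	 * self-deadlocks when called under that lock, and how a _locked
	 * variant avoids it. */
	#include <pthread.h>
	#include <stdlib.h>

	struct obj {
		int refcount;            /* the kernel code uses atomic_t */
		struct obj **table_slot; /* stand-in for the hash-table link */
	};

	pthread_mutex_t hash_lock = PTHREAD_MUTEX_INITIALIZER;

	/* Normal path: takes hash_lock itself when the last ref goes away. */
	void obj_put(struct obj *obj)
	{
		if (--obj->refcount == 0) {
			pthread_mutex_lock(&hash_lock); /* deadlocks if the caller holds it */
			*obj->table_slot = NULL;
			pthread_mutex_unlock(&hash_lock);
			free(obj);
		}
	}

	/* Variant for callers already holding hash_lock: no re-acquisition. */
	void obj_put_locked(struct obj *obj)
	{
		if (--obj->refcount == 0) {
			*obj->table_slot = NULL;
			free(obj);
		}
	}

	void keepalive_dispatch(struct obj *obj)
	{
		pthread_mutex_lock(&hash_lock);
		/* ... requeue keepalive work under the lock ... */
		obj_put_locked(obj); /* calling obj_put() here would deadlock */
		pthread_mutex_unlock(&hash_lock);
	}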
The bug may produce the following lockdep report:
============================================
WARNING: possible recursive locking detected
5.2.0-next-20190718 #41 Not tainted
--------------------------------------------
kworker/0:3/21678 is trying to acquire lock:
00000000aa5eecdf (&(&rxnet->peer_hash_lock)->rlock){+.-.}, at: spin_lock_bh
/./include/linux/spinlock.h:343 [inline]
00000000aa5eecdf (&(&rxnet->peer_hash_lock)->rlock){+.-.}, at:
__rxrpc_put_peer /net/rxrpc/peer_object.c:415 [inline]
00000000aa5eecdf (&(&rxnet->peer_hash_lock)->rlock){+.-.}, at:
rxrpc_put_peer+0x2d3/0x6a0 /net/rxrpc/peer_object.c:435
but task is already holding lock:
00000000aa5eecdf (&(&rxnet->peer_hash_lock)->rlock){+.-.}, at: spin_lock_bh
/./include/linux/spinlock.h:343 [inline]
00000000aa5eecdf (&(&rxnet->peer_hash_lock)->rlock){+.-.}, at:
rxrpc_peer_keepalive_dispatch /net/rxrpc/peer_event.c:378 [inline]
00000000aa5eecdf (&(&rxnet->peer_hash_lock)->rlock){+.-.}, at:
rxrpc_peer_keepalive_worker+0x6b3/0xd02 /net/rxrpc/peer_event.c:430
Fixes: 330bdcfadcee ("rxrpc: Fix the keepalive generator [ver #2]")
Reported-by: syzbot+72af434e4b3417318f84@...kaller.appspotmail.com
Signed-off-by: David Howells <dhowells@...hat.com>
Reviewed-by: Marc Dionne <marc.dionne@...istor.com>
Reviewed-by: Jeffrey Altman <jaltman@...istor.com>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
net/rxrpc/ar-internal.h | 1 +
net/rxrpc/peer_event.c | 2 +-
net/rxrpc/peer_object.c | 18 ++++++++++++++++++
3 files changed, 20 insertions(+), 1 deletion(-)
diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index 03e0fc8c183f0..a4c341828b72f 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -1057,6 +1057,7 @@ void rxrpc_destroy_all_peers(struct rxrpc_net *);
struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *);
struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *);
void rxrpc_put_peer(struct rxrpc_peer *);
+void rxrpc_put_peer_locked(struct rxrpc_peer *);
/*
* proc.c
diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
index bd2fa3b7caa7e..dc7fdaf20445b 100644
--- a/net/rxrpc/peer_event.c
+++ b/net/rxrpc/peer_event.c
@@ -375,7 +375,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
spin_lock_bh(&rxnet->peer_hash_lock);
list_add_tail(&peer->keepalive_link,
&rxnet->peer_keepalive[slot & mask]);
- rxrpc_put_peer(peer);
+ rxrpc_put_peer_locked(peer);
}
spin_unlock_bh(&rxnet->peer_hash_lock);
diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
index 5691b7d266ca0..71547e8673b99 100644
--- a/net/rxrpc/peer_object.c
+++ b/net/rxrpc/peer_object.c
@@ -440,6 +440,24 @@ void rxrpc_put_peer(struct rxrpc_peer *peer)
}
}
+/*
+ * Drop a ref on a peer record where the caller already holds the
+ * peer_hash_lock.
+ */
+void rxrpc_put_peer_locked(struct rxrpc_peer *peer)
+{
+ const void *here = __builtin_return_address(0);
+ int n;
+
+ n = atomic_dec_return(&peer->usage);
+ trace_rxrpc_peer(peer, rxrpc_peer_put, n, here);
+ if (n == 0) {
+ hash_del_rcu(&peer->hash_link);
+ list_del_init(&peer->keepalive_link);
+ kfree_rcu(peer, rcu);
+ }
+}
+
/*
* Make sure all peer records have been discarded.
*/
--
2.20.1