Message-Id: <20190218133530.819359890@linuxfoundation.org>
Date: Mon, 18 Feb 2019 14:42:54 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Jia-Ju Bai <baijiaju1990@....com>,
Lars Ellenberg <lars.ellenberg@...bit.com>,
Roland Kammerer <roland.kammerer@...bit.com>,
Jens Axboe <axboe@...nel.dk>, Sasha Levin <sashal@...nel.org>
Subject: [PATCH 4.4 046/143] drbd: narrow rcu_read_lock in drbd_sync_handshake
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
[ Upstream commit d29e89e34952a9ad02c77109c71a80043544296e ]
So far there was the possibility that we called
genlmsg_new(GFP_NOIO)/mutex_lock() while holding an rcu_read_lock().

This included cases like:

drbd_sync_handshake (acquire the RCU lock)
  drbd_asb_recover_1p
    drbd_khelper
      drbd_bcast_event
        genlmsg_new(GFP_NOIO) --> may sleep

drbd_sync_handshake (acquire the RCU lock)
  drbd_asb_recover_1p
    drbd_khelper
      notify_helper
        genlmsg_new(GFP_NOIO) --> may sleep

drbd_sync_handshake (acquire the RCU lock)
  drbd_asb_recover_1p
    drbd_khelper
      notify_helper
        mutex_lock --> may sleep
While using GFP_ATOMIC would have been possible in the first two cases,
the real fix is to narrow the rcu_read_lock.
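
To illustrate the pattern the fix applies, here is a minimal sketch in
plain C; the struct, field and helper names below are made up for
illustration only and are not taken from drbd:

#include <linux/rcupdate.h>	/* rcu_read_lock(), rcu_dereference() */

/* Hypothetical example: copy RCU-protected values into locals,
 * drop the read-side lock, and only then do anything that may sleep.
 */
struct cfg {
	int always_asbp;
	int rr_conflict;
	int tentative;
};

static struct cfg __rcu *global_cfg;

/* Hypothetical consumer; assume it may sleep internally. */
static void use_values_and_maybe_sleep(int always_asbp, int rr_conflict,
				       int tentative);

static void handshake_example(void)
{
	struct cfg *c;
	int always_asbp, rr_conflict, tentative;

	rcu_read_lock();
	c = rcu_dereference(global_cfg);
	/* Copy out everything needed later ... */
	always_asbp = c->always_asbp;
	rr_conflict = c->rr_conflict;
	tentative = c->tentative;
	/* ... so the read-side critical section can end early. */
	rcu_read_unlock();

	/* Sleeping operations (GFP_NOIO allocations, mutex_lock())
	 * are legal again from this point on.
	 */
	use_values_and_maybe_sleep(always_asbp, rr_conflict, tentative);
}

This mirrors what the hunks below do with nc->always_asbp,
nc->rr_conflict and nc->tentative.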
Reported-by: Jia-Ju Bai <baijiaju1990@....com>
Reviewed-by: Lars Ellenberg <lars.ellenberg@...bit.com>
Signed-off-by: Roland Kammerer <roland.kammerer@...bit.com>
Signed-off-by: Jens Axboe <axboe@...nel.dk>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
drivers/block/drbd/drbd_receiver.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
index b4b5680ac6ad..2fedab9349f6 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -3126,7 +3126,7 @@ static enum drbd_conns drbd_sync_handshake(struct drbd_peer_device *peer_device,
 	enum drbd_conns rv = C_MASK;
 	enum drbd_disk_state mydisk;
 	struct net_conf *nc;
-	int hg, rule_nr, rr_conflict, tentative;
+	int hg, rule_nr, rr_conflict, tentative, always_asbp;
 
 	mydisk = device->state.disk;
 	if (mydisk == D_NEGOTIATING)
@@ -3168,8 +3168,12 @@ static enum drbd_conns drbd_sync_handshake(struct drbd_peer_device *peer_device,
 
 	rcu_read_lock();
 	nc = rcu_dereference(peer_device->connection->net_conf);
+	always_asbp = nc->always_asbp;
+	rr_conflict = nc->rr_conflict;
+	tentative = nc->tentative;
+	rcu_read_unlock();
 
-	if (hg == 100 || (hg == -100 && nc->always_asbp)) {
+	if (hg == 100 || (hg == -100 && always_asbp)) {
 		int pcount = (device->state.role == R_PRIMARY)
 			   + (peer_role == R_PRIMARY);
 		int forced = (hg == -100);
@@ -3208,9 +3212,6 @@ static enum drbd_conns drbd_sync_handshake(struct drbd_peer_device *peer_device,
 			     "Sync from %s node\n",
 			     (hg < 0) ? "peer" : "this");
 	}
-	rr_conflict = nc->rr_conflict;
-	tentative = nc->tentative;
-	rcu_read_unlock();
 
 	if (hg == -100) {
 		/* FIXME this log message is not correct if we end up here
--
2.19.1