Message-Id: <20191229162543.385065589@linuxfoundation.org>
Date: Sun, 29 Dec 2019 18:20:21 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Mike Christie <mchristi@...hat.com>,
Jens Axboe <axboe@...nel.dk>
Subject: [PATCH 4.19 219/219] nbd: fix shutdown and recv work deadlock v2
From: Mike Christie <mchristi@...hat.com>
commit 1c05839aa973cfae8c3db964a21f9c0eef8fcc21 upstream.
This fixes a regression added with:
commit e9e006f5fcf2bab59149cb38a48a4817c1b538b4
Author: Mike Christie <mchristi@...hat.com>
Date: Sun Aug 4 14:10:06 2019 -0500
nbd: fix max number of supported devs
where we can deadlock during device shutdown. The problem occurs if
the recv_work's nbd_config_put occurs after nbd_start_device_ioctl has
returned and the userspace app has dropped its reference via closing
the device and running nbd_release. The recv_work nbd_config_put call
would then drop the refcount to zero and try to destroy the config, which
would end up calling destroy_workqueue from within the recv work itself.
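To illustrate the pattern (this is a minimal sketch with made-up names,
not the nbd code): a work item that drops the final reference and ends
up calling destroy_workqueue() on its own workqueue blocks forever,
because destroy_workqueue() waits for every queued work to finish,
including the one that is currently calling it.

#include <linux/workqueue.h>
#include <linux/kref.h>

static struct workqueue_struct *recv_wq;
static struct kref cfg_ref;

static void cfg_release(struct kref *ref)
{
	/* Last reference gone: tear down the workqueue. */
	destroy_workqueue(recv_wq);	/* waits for all queued work to finish */
}

static void recv_fn(struct work_struct *work)
{
	/*
	 * Hypothetical recv work: if this put drops the final reference,
	 * cfg_release() runs here, inside a work item of recv_wq, and
	 * destroy_workqueue() then waits for recv_fn() itself to return,
	 * which can never happen -> deadlock.
	 */
	kref_put(&cfg_ref, cfg_release);
}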
This patch just has nbd_start_device_ioctl do a flush_workqueue when it
wakes, so we know that once the ioctl returns all running works have
exited. This also fixes a possible race where we could try to reuse the
device while old recv_works are still running.
Cc: stable@...r.kernel.org
Fixes: e9e006f5fcf2 ("nbd: fix max number of supported devs")
Signed-off-by: Mike Christie <mchristi@...hat.com>
Signed-off-by: Jens Axboe <axboe@...nel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/block/nbd.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1247,10 +1247,10 @@ static int nbd_start_device_ioctl(struct
 	mutex_unlock(&nbd->config_lock);
 	ret = wait_event_interruptible(config->recv_wq,
 					 atomic_read(&config->recv_threads) == 0);
-	if (ret) {
+	if (ret)
 		sock_shutdown(nbd);
-		flush_workqueue(nbd->recv_workq);
-	}
+	flush_workqueue(nbd->recv_workq);
+
 	mutex_lock(&nbd->config_lock);
 	nbd_bdev_reset(bdev);
 	/* user requested, ignore socket errors */