Message-ID: <lsq.1556377989.929247906@decadent.org.uk>
Date: Sat, 27 Apr 2019 16:13:09 +0100
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org, Denis Kirjanov <kda@...ux-powerpc.org>,
"Dongsheng Yang" <dongsheng.yang@...ystack.cn>,
"Ilya Dryomov" <idryomov@...il.com>
Subject: [PATCH 3.16 026/202] rbd: don't return 0 on unmap if
RBD_DEV_FLAG_REMOVING is set
3.16.66-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Ilya Dryomov <idryomov@...il.com>
commit 85f5a4d666fd9be73856ed16bb36c5af5b406b29 upstream.
There is a window between when RBD_DEV_FLAG_REMOVING is set and when
the device is removed from rbd_dev_list. During this window, we set
"already" and return 0.
Returning 0 from write(2) can confuse userspace tools because
0 indicates that nothing was written. In particular, "rbd unmap"
will retry the write multiple times a second:
10:28:05.463299 write(4, "0", 1) = 0
10:28:05.463509 write(4, "0", 1) = 0
10:28:05.463720 write(4, "0", 1) = 0
10:28:05.463942 write(4, "0", 1) = 0
10:28:05.464155 write(4, "0", 1) = 0
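As a rough illustration of why the tool loops, here is a minimal standalone sketch, not the actual "rbd unmap" source: a caller that treats a 0 return from write(2) as "nothing was consumed" keeps re-issuing the write to the sysfs remove file, exactly as in the trace above. The /sys/bus/rbd/remove path and the retry-on-zero behaviour are the assumptions here:

/* Minimal sketch (not the real "rbd unmap" code): retry the sysfs
 * write while the kernel reports 0 bytes consumed. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *dev_id = "0";	/* device id, as in the trace above */
	int fd = open("/sys/bus/rbd/remove", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	for (;;) {
		ssize_t n = write(fd, dev_id, strlen(dev_id));

		if (n > 0)
			break;		/* request consumed by the kernel */
		if (n < 0) {
			/* with this patch a racing unmap now fails fast
			 * with EINPROGRESS instead of returning 0 */
			perror("write");
			break;
		}
		/* n == 0: looks like a short write, so retry */
	}

	close(fd);
	return 0;
}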
Signed-off-by: Ilya Dryomov <idryomov@...il.com>
Tested-by: Dongsheng Yang <dongsheng.yang@...ystack.cn>
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
drivers/block/rbd.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -5422,7 +5422,6 @@ static ssize_t do_rbd_remove(struct bus_
 	struct list_head *tmp;
 	int dev_id;
 	unsigned long ul;
-	bool already = false;
 	int ret;
 
 	ret = kstrtoul(buf, 10, &ul);
@@ -5447,13 +5446,13 @@ static ssize_t do_rbd_remove(struct bus_
 		spin_lock_irq(&rbd_dev->lock);
 		if (rbd_dev->open_count)
 			ret = -EBUSY;
-		else
-			already = test_and_set_bit(RBD_DEV_FLAG_REMOVING,
-						   &rbd_dev->flags);
+		else if (test_and_set_bit(RBD_DEV_FLAG_REMOVING,
+					  &rbd_dev->flags))
+			ret = -EINPROGRESS;
 		spin_unlock_irq(&rbd_dev->lock);
 	}
 	spin_unlock(&rbd_dev_list_lock);
-	if (ret < 0 || already)
+	if (ret)
 		return ret;
 
 	rbd_dev_header_unwatch_sync(rbd_dev);