Message-Id: <20200129181253.24999-1-dave@stgolabs.net>
Date: Wed, 29 Jan 2020 10:12:53 -0800
From: Davidlohr Bueso <dave@...olabs.net>
To: idryomov@...il.com
Cc: ceph-devel@...r.kernel.org, linux-kernel@...r.kernel.org,
dave@...olabs.net, Davidlohr Bueso <dbueso@...e.de>
Subject: [PATCH] rbd: optimize barrier usage for RMW atomic bitops
For both set_bit() and clear_bit(), we can avoid the unnecessary full
barrier on non-LL/SC architectures, such as x86, where the atomic RMW
operation itself already implies full ordering. Instead, use the
smp_mb__{before,after}_atomic() calls, which reduce to a compiler
barrier in that case and expand to smp_mb() only where it is needed.
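
As a minimal sketch of the pattern (the flag name and flag word below
are hypothetical, not from rbd; the primitives come from
<linux/bitops.h> and <asm/barrier.h>):

	/*
	 * Pair the atomic RMW bitop with the cheaper barrier primitive.
	 * On x86 the locked instruction already provides full ordering,
	 * so smp_mb__after_atomic() is only a compiler barrier; on
	 * LL/SC architectures it expands to the required smp_mb().
	 */
	set_bit(MY_FLAG, &obj->flags);	/* hypothetical flag and word */
	smp_mb__after_atomic();		/* order the bitop vs. later accesses */

The _before variant is used the same way, placed immediately before
the atomic op when prior accesses must be ordered against it.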
Signed-off-by: Davidlohr Bueso <dbueso@...e.de>
---
drivers/block/rbd.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 2b184563cd32..7bc79b2b8f65 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -1371,13 +1371,13 @@ static void rbd_osd_submit(struct ceph_osd_request *osd_req)
 static void img_request_layered_set(struct rbd_img_request *img_request)
 {
 	set_bit(IMG_REQ_LAYERED, &img_request->flags);
-	smp_mb();
+	smp_mb__after_atomic();
 }
 
 static void img_request_layered_clear(struct rbd_img_request *img_request)
 {
 	clear_bit(IMG_REQ_LAYERED, &img_request->flags);
-	smp_mb();
+	smp_mb__after_atomic();
 }
 
 static bool img_request_layered_test(struct rbd_img_request *img_request)
--
2.16.4