Message-ID: <CAOi1vP-75uoBBsnX262WoVL_jNreiSgnGmtytDKcsUE==ny2Jw@mail.gmail.com>
Date: Thu, 30 Jan 2020 13:52:32 +0100
From: Ilya Dryomov <idryomov@...il.com>
To: Davidlohr Bueso <dave@...olabs.net>
Cc: Ceph Development <ceph-devel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Davidlohr Bueso <dbueso@...e.de>
Subject: Re: [PATCH] rbd: optimize barrier usage for RMW atomic bitops
On Wed, Jan 29, 2020 at 7:23 PM Davidlohr Bueso <dave@...olabs.net> wrote:
>
> For both set_bit() and clear_bit(), we can avoid the unnecessary full
> barrier on non-LL/SC architectures, such as x86, by using the
> smp_mb__{before,after}_atomic() calls instead.
>
> Signed-off-by: Davidlohr Bueso <dbueso@...e.de>
> ---
> drivers/block/rbd.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
> index 2b184563cd32..7bc79b2b8f65 100644
> --- a/drivers/block/rbd.c
> +++ b/drivers/block/rbd.c
> @@ -1371,13 +1371,13 @@ static void rbd_osd_submit(struct ceph_osd_request *osd_req)
> static void img_request_layered_set(struct rbd_img_request *img_request)
> {
> set_bit(IMG_REQ_LAYERED, &img_request->flags);
> - smp_mb();
> + smp_mb__after_atomic();
> }
>
> static void img_request_layered_clear(struct rbd_img_request *img_request)
> {
> clear_bit(IMG_REQ_LAYERED, &img_request->flags);
> - smp_mb();
> + smp_mb__after_atomic();
> }
>
> static bool img_request_layered_test(struct rbd_img_request *img_request)

Hi Davidlohr,

I don't think these barriers are needed at all. I'll remove them.
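
Roughly what I have in mind (untested sketch, assuming no other code
actually depends on the ordering those barriers provided):

static void img_request_layered_set(struct rbd_img_request *img_request)
{
	/* set_bit() is atomic; the sketch assumes no extra ordering is needed */
	set_bit(IMG_REQ_LAYERED, &img_request->flags);
}

static void img_request_layered_clear(struct rbd_img_request *img_request)
{
	/* likewise for clear_bit() -- the barrier is simply dropped */
	clear_bit(IMG_REQ_LAYERED, &img_request->flags);
}

If img_request_layered_test() pairs a barrier with these, it would go
away as well.
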
Thanks,
Ilya