Message-Id: <cover.1634759754.git.asml.silence@gmail.com>
Date: Wed, 20 Oct 2021 21:03:17 +0100
From: Pavel Begunkov <asml.silence@...il.com>
To: linux-block@...r.kernel.org
Cc: Jens Axboe <axboe@...nel.dk>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Christoph Lameter <cl@...ux.com>,
Tejun Heo <tj@...nel.org>, Dennis Zhou <dennis@...nel.org>,
Pavel Begunkov <asml.silence@...il.com>
Subject: [PATCH 0/2] optimise blk_try_enter_queue()
Kill extra rcu_read_lock/unlock() pair in blk_try_enter_queue().
Testing with io_uring (high batching) with nullblk:
Before:
3.20% io_uring [kernel.vmlinux] [k] __rcu_read_unlock
3.05% io_uring [kernel.vmlinux] [k] __rcu_read_lock
After:
2.52% io_uring [kernel.vmlinux] [k] __rcu_read_unlock
2.28% io_uring [kernel.vmlinux] [k] __rcu_read_lock
That doesn't necessarily translate into a 1.4% performance improvement,
but it's nice to have.
Pavel Begunkov (2):
percpu_ref: percpu_ref_tryget_live() version holding RCU
block: kill extra rcu lock/unlock in queue enter
block/blk-core.c | 2 +-
include/linux/percpu-refcount.h | 31 +++++++++++++++++++++----------
2 files changed, 22 insertions(+), 11 deletions(-)
--
2.33.1