Message-Id: <cover.1634822969.git.asml.silence@gmail.com>
Date: Thu, 21 Oct 2021 14:30:50 +0100
From: Pavel Begunkov <asml.silence@...il.com>
To: linux-block@...r.kernel.org
Cc: Jens Axboe <axboe@...nel.dk>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Christoph Lameter <cl@...ux.com>,
Tejun Heo <tj@...nel.org>, Dennis Zhou <dennis@...nel.org>,
Pavel Begunkov <asml.silence@...il.com>
Subject: [PATCH v2 0/2] optimise blk_try_enter_queue()
Kill extra rcu_read_lock/unlock() pair in blk_try_enter_queue().
Tested with io_uring (high batching) on top of nullblk:
Before:
3.20% io_uring [kernel.vmlinux] [k] __rcu_read_unlock
3.05% io_uring [kernel.vmlinux] [k] __rcu_read_lock
After:
2.52% io_uring [kernel.vmlinux] [k] __rcu_read_unlock
2.28% io_uring [kernel.vmlinux] [k] __rcu_read_lock
It doesn't necessarily translate into a 1.4% performance improvement,
but it's nice to have.
v2: add a rcu_read_lock_held() warning (Tejun)
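
For illustration, a rough sketch of the shape of the change follows. The
helper name percpu_ref_tryget_live_rcu() and its body here are assumptions
of this sketch; the authoritative versions are in the patches themselves.

/*
 * Sketch only, not the patch itself. Assumes a tryget_live variant
 * (here called percpu_ref_tryget_live_rcu()) that expects the caller
 * to already be inside an RCU read-side critical section, so it can
 * skip taking rcu_read_lock()/unlock() itself.
 */
static inline bool percpu_ref_tryget_live_rcu(struct percpu_ref *ref)
{
	unsigned long __percpu *percpu_count;
	bool ret = false;

	/* v2: complain if the caller forgot to take rcu_read_lock() */
	WARN_ON_ONCE(!rcu_read_lock_held());

	if (likely(__ref_is_percpu(ref, &percpu_count))) {
		/* fast path: bump the per-cpu counter */
		this_cpu_inc(*percpu_count);
		ret = true;
	} else if (!(ref->percpu_count_ptr & __PERCPU_REF_DEAD)) {
		/* slow path: ref switched to atomic mode but still live */
		ret = atomic_long_inc_not_zero(&ref->data->count);
	}
	return ret;
}

/*
 * percpu_ref_tryget_live() can then become a thin wrapper that takes
 * RCU itself, while callers such as blk_try_enter_queue(), which
 * already run under rcu_read_lock(), call the _rcu variant directly
 * and avoid the nested lock/unlock pair seen in the profiles above.
 */
static inline bool percpu_ref_tryget_live(struct percpu_ref *ref)
{
	bool ret;

	rcu_read_lock();
	ret = percpu_ref_tryget_live_rcu(ref);
	rcu_read_unlock();
	return ret;
}
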
Pavel Begunkov (2):
percpu_ref: percpu_ref_tryget_live() version holding RCU
block: kill extra rcu lock/unlock in queue enter
block/blk-core.c | 2 +-
include/linux/percpu-refcount.h | 33 +++++++++++++++++++++++----------
2 files changed, 24 insertions(+), 11 deletions(-)
--
2.33.1