Message-Id: <4f51449ca995ba60d78e7ba55e8bc9876328605c.1583078091.git.asml.silence@gmail.com>
Date: Sun, 1 Mar 2020 19:18:25 +0300
From: Pavel Begunkov <asml.silence@...il.com>
To: Jens Axboe <axboe@...nel.dk>, io-uring@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH 8/9] io-wq: optimise double lock for io_get_next_work()
When executing non-linked hashed work, io_worker_handle_work()
lock-unlocks wqe->lock to update the hash, and then immediately
lock-unlocks it again to get the next work item. Optimise this
case by taking the lock only once for both.
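For illustration, a minimal user-space sketch of the pattern (the
names fetch_next(), handle_work() and 'remaining' are made up for
this example, not io-wq code): once the lock must be re-taken for
bookkeeping anyway, loop straight back to the fetch under the held
lock instead of unlocking and immediately re-locking.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int remaining = 3;	/* stand-in for a work queue */

/* must be called with 'lock' held; returns 0 when no work is left */
static int fetch_next(void)
{
	return remaining > 0 ? remaining-- : 0;
}

static void handle_work(void)
{
	int work;

	pthread_mutex_lock(&lock);
	for (;;) {
		work = fetch_next();	/* analogous to io_get_next_work() */
		pthread_mutex_unlock(&lock);
		if (!work)
			break;
		printf("executing work item %d\n", work);
		/* bookkeeping (e.g. clearing the hash) needs the lock... */
		pthread_mutex_lock(&lock);
		/*
		 * ...and instead of unlocking here only to re-lock at the
		 * top of the loop, keep the lock held and loop straight
		 * back to fetch_next().
		 */
	}
}

int main(void)
{
	handle_work();
	return 0;
}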
Signed-off-by: Pavel Begunkov <asml.silence@...il.com>
---
fs/io-wq.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/fs/io-wq.c b/fs/io-wq.c
index da67c931db79..f9b18c16ebd8 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -474,11 +474,11 @@ static void io_worker_handle_work(struct io_worker *worker)
 {
 	struct io_wqe *wqe = worker->wqe;
 	struct io_wq *wq = wqe->wq;
+	unsigned hash = -1U;
 
 	do {
 		struct io_wq_work *work;
-		unsigned hash = -1U;
-
+get_next:
 		/*
 		 * If we got some work, mark us as busy. If we didn't, but
 		 * the list isn't empty, it means we stalled on hashed work.
@@ -528,6 +528,9 @@ static void io_worker_handle_work(struct io_worker *worker)
 			wqe->flags &= ~IO_WQE_FLAG_STALLED;
 			/* dependent work is not hashed */
 			hash = -1U;
+			/* skip unnecessary unlock-lock of wqe->lock */
+			if (!work)
+				goto get_next;
 			spin_unlock_irq(&wqe->lock);
 		}
 	} while (work);
--
2.24.0