Date:   Tue, 15 Nov 2022 17:45:51 -0500
From:   Gabriel Krisman Bertazi <krisman@...e.de>
To:     axboe@...nel.dk
Cc:     mingo@...hat.com, peterz@...radead.org, jack@...e.cz,
        linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
        liusong@...ux.alibaba.com, chaitanyak@...dia.com,
        Gabriel Krisman Bertazi <krisman@...e.de>
Subject: [PATCH 1/3] sbitmap: Advance the queue index before waking up a queue

When a queue is woken up, the wake_index written by sbq_wake_ptr still
points to the same queue.  On the next wake up, the same queue will
therefore be retried first, which is unfair to the other queues and can
lead to starvation.  This patch moves the index update to before the
queue is returned, such that a different queue is tried first on the
next wake up, improving fairness.
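
To illustrate the effect outside the kernel, here is a minimal
userspace model of the round-robin selection (a sketch only: the
pick_queue() helper, the active[] array, and the plain ints standing
in for the atomics and wait queues are illustrative assumptions, not
the sbitmap API):

#include <stdio.h>

#define SBQ_WAIT_QUEUES 8

static int wake_index;               /* models sbq->wake_index     */
static int active[SBQ_WAIT_QUEUES];  /* models waitqueue_active()  */

static int sbq_index_inc(int index)
{
	return (index + 1) % SBQ_WAIT_QUEUES;
}

/* Returns the index of the queue to wake, or -1 if none is active. */
static int pick_queue(void)
{
	int i, idx = wake_index;

	for (i = 0; i < SBQ_WAIT_QUEUES; i++) {
		int ws = idx;

		/*
		 * Advance before checking, so the next call starts
		 * from the following queue even when this one is
		 * returned, instead of retrying the same queue.
		 */
		idx = sbq_index_inc(idx);

		if (active[ws]) {
			wake_index = idx;
			return ws;
		}
	}
	return -1;
}

int main(void)
{
	active[2] = active[5] = 1;
	printf("%d %d %d\n", pick_queue(), pick_queue(), pick_queue());
	/*
	 * Prints "2 5 2": successive wake-ups rotate between the
	 * active queues instead of repeatedly picking queue 2, which
	 * is the fairness property the patch restores.
	 */
	return 0;
}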

Fixes: 4f8126bb2308 ("sbitmap: Use single per-bitmap counting to wake up queued tags")
Reported-by: Jan Kara <jack@...e.cz>
Reviewed-by: Jan Kara <jack@...e.cz>
Signed-off-by: Gabriel Krisman Bertazi <krisman@...e.de>
---
 lib/sbitmap.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index eca462cba398..bea7984f7987 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -571,13 +571,19 @@ static struct sbq_wait_state *sbq_wake_ptr(struct sbitmap_queue *sbq)
 	for (i = 0; i < SBQ_WAIT_QUEUES; i++) {
 		struct sbq_wait_state *ws = &sbq->ws[wake_index];
 
+		/*
+		 * Advance the index before checking the current queue.
+		 * It improves fairness by ensuring the queue doesn't
+		 * need to be fully emptied before trying to wake up
+		 * from the next one.
+		 */
+		wake_index = sbq_index_inc(wake_index);
+
 		if (waitqueue_active(&ws->wait)) {
 			if (wake_index != atomic_read(&sbq->wake_index))
 				atomic_set(&sbq->wake_index, wake_index);
 			return ws;
 		}
-
-		wake_index = sbq_index_inc(wake_index);
 	}
 
 	return NULL;
-- 
2.35.3
