Date:   Sun, 12 Dec 2021 21:44:25 -0800
From:   Davidlohr Bueso <dave@...olabs.net>
To:     axboe@...nel.dk
Cc:     bigeasy@...utronix.de, tglx@...utronix.de,
        linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
        dave@...olabs.net, Davidlohr Bueso <dbueso@...e.de>
Subject: [PATCH] blk-mq: make synchronous hw_queue runs RT friendly

Disabling preemption for the synchronous part of __blk_mq_delay_run_hw_queue()
ensures that the hw queue runs on the correct CPU. This does not play well
with PREEMPT_RT, where regular spinlocks (such as the hctx->lock) become
sleeping locks and may be taken in this region, triggering
scheduling-while-atomic scenarios.

Introduce start/end helpers to mark such regions and allow RT to disable
migration instead of preemption. While this actually documents better what
is going on (the requirement is CPU locality, not disabled preemption),
doing so in the regular non-RT case would be too expensive. Similarly,
instead of relying on preemption or migration tricks, the task could also
be affined to the valid cpumask, but that too would be unnecessarily
expensive.

Signed-off-by: Davidlohr Bueso <dbueso@...e.de>
---
 block/blk-mq.c | 32 ++++++++++++++++++++++++++++----
 1 file changed, 28 insertions(+), 4 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 8874a63ae952..d44b851fffba 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1841,6 +1841,30 @@ static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
 	return next_cpu;
 }
 
+/*
+ * Mark regions to ensure that a synchronous hardware queue
+ * runs on a correct CPU.
+ */
+#ifndef CONFIG_PREEMPT_RT
+static inline void blk_mq_start_sync_run_hw_queue(void)
+{
+	preempt_disable();
+}
+static inline void blk_mq_end_sync_run_hw_queue(void)
+{
+	preempt_enable();
+}
+#else
+static inline void blk_mq_start_sync_run_hw_queue(void)
+{
+	migrate_disable();
+}
+static inline void blk_mq_end_sync_run_hw_queue(void)
+{
+	migrate_enable();
+}
+#endif
+
 /**
  * __blk_mq_delay_run_hw_queue - Run (or schedule to run) a hardware queue.
  * @hctx: Pointer to the hardware queue to run.
@@ -1857,14 +1881,14 @@ static void __blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async,
 		return;
 
 	if (!async && !(hctx->flags & BLK_MQ_F_BLOCKING)) {
-		int cpu = get_cpu();
-		if (cpumask_test_cpu(cpu, hctx->cpumask)) {
+		blk_mq_start_sync_run_hw_queue();
+		if (cpumask_test_cpu(smp_processor_id(), hctx->cpumask)) {
 			__blk_mq_run_hw_queue(hctx);
-			put_cpu();
+			blk_mq_end_sync_run_hw_queue();
 			return;
 		}
 
-		put_cpu();
+		blk_mq_end_sync_run_hw_queue();
 	}
 
 	kblockd_mod_delayed_work_on(blk_mq_hctx_next_cpu(hctx), &hctx->run_work,
-- 
2.26.2
