Message-ID: <20260210204943.21709-3-ionut.nechita@windriver.com>
Date: Tue, 10 Feb 2026 22:49:44 +0200
From: "Ionut Nechita (Wind River)" <ionut.nechita@...driver.com>
To: axboe@...nel.dk
Cc: linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-rt-users@...r.kernel.org, ming.lei@...hat.com,
muchun.song@...ux.dev, mkhalfella@...estorage.com,
sunlightlinux@...il.com, chris.friesen@...driver.com,
stable@...r.kernel.org, ionut_n2001@...oo.com, bigeasy@...utronix.de,
ionut.nechita@...driver.com
Subject: [PATCH v2 0/1] block/blk-mq: fix RT kernel regression with dedicated quiesce_sync_lock
Hi Jens,
This is v2 of the fix for the RT kernel performance regression caused by
commit 679b1874eba7 ("block: fix ordering between checking
QUEUE_FLAG_QUIESCED request adding").
Changes since v1 (RESEND, Jan 9):
- Rebased on top of axboe/for-7.0/block
- No code changes
The problem: on PREEMPT_RT kernels, the spinlock_t queue_lock added in
blk_mq_run_hw_queue() converts to a sleeping rt_mutex, causing all IRQ
threads (one per MSI-X vector) to serialize. On megaraid_sas with 8
MSI-X vectors, throughput drops from 640 MB/s to 153 MB/s.
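
For illustration, the pattern in question looks roughly like the following
(a simplified sketch of the check added by the commit above, not the
literal upstream hunk):

	unsigned long flags;
	bool need_run;

	/*
	 * Sketch: on PREEMPT_RT, spinlock_t is backed by an rt_mutex, so the
	 * threaded IRQ handlers calling blk_mq_run_hw_queue() may sleep here
	 * and all MSI-X IRQ threads serialize on the single q->queue_lock.
	 */
	spin_lock_irqsave(&q->queue_lock, flags);
	need_run = !blk_queue_quiesced(q) && blk_mq_hctx_has_pending(hctx);
	spin_unlock_irqrestore(&q->queue_lock, flags);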
The fix introduces a dedicated raw_spinlock_t quiesce_sync_lock that
does not convert to rt_mutex on RT kernels. The critical section is
provably short (only flag and counter checks), making raw_spinlock safe.
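
For reference, a minimal sketch of the shape of the change described above
(names follow this cover letter; the exact hunks and their placement are in
the patch itself):

	/* include/linux/blkdev.h: new member in struct request_queue */
	raw_spinlock_t		quiesce_sync_lock;

	/* block/blk-core.c: initialize the lock when the queue is set up */
	raw_spin_lock_init(&q->quiesce_sync_lock);

	/*
	 * block/blk-mq.c: blk_mq_run_hw_queue() takes the raw lock instead
	 * of q->queue_lock.  raw_spinlock_t stays a real spinning lock on
	 * PREEMPT_RT, and the critical section is only the flag/counter
	 * checks, so holding it for that short window is safe.
	 */
	raw_spin_lock_irqsave(&q->quiesce_sync_lock, flags);
	need_run = !blk_queue_quiesced(q) && blk_mq_hctx_has_pending(hctx);
	raw_spin_unlock_irqrestore(&q->quiesce_sync_lock, flags);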
An earlier approach used memory barriers instead, but it was rejected
because of the barrier-pairing complexity across multiple call sites (as
noted by Muchun Song).
Ionut Nechita (1):
block/blk-mq: fix RT kernel regression with dedicated
quiesce_sync_lock
block/blk-core.c | 1 +
block/blk-mq.c | 27 ++++++++++++++++-----------
include/linux/blkdev.h | 6 ++++++
3 files changed, 23 insertions(+), 11 deletions(-)
--
2.52.0