Message-ID: <20260211001851.2821-1-hdanton@sina.com>
Date: Wed, 11 Feb 2026 08:18:49 +0800
From: Hillf Danton <hdanton@...a.com>
To: "Ionut Nechita (Wind River)" <ionut.nechita@...driver.com>
Cc: axboe@...nel.dk,
linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org,
bigeasy@...utronix.de
Subject: Re: [PATCH v2 0/1] block/blk-mq: fix RT kernel regression with dedicated quiesce_sync_lock

On Tue, 10 Feb 2026 22:49:44 +0200 Ionut Nechita (Wind River) wrote:
> Hi Jens,
>
> This is v2 of the fix for the RT kernel performance regression caused by
> commit 679b1874eba7 ("block: fix ordering between checking
> QUEUE_FLAG_QUIESCED request adding").
>
> Changes since v1 (RESEND, Jan 9):
> - Rebased on top of axboe/for-7.0/block
> - No code changes
>
> The problem: on PREEMPT_RT kernels, the spinlock_t queue_lock added in
> blk_mq_run_hw_queue() converts to a sleeping rt_mutex, causing all IRQ
> threads (one per MSI-X vector) to serialize. On megaraid_sas with 8
> MSI-X vectors, throughput drops from 640 MB/s to 153 MB/s.
>
> The fix introduces a dedicated raw_spinlock_t quiesce_sync_lock that
> does not convert to rt_mutex on RT kernels. The critical section is
> provably short (only flag and counter checks), making raw_spinlock safe.
>
> Test results on RT kernel (megaraid_sas with 8 MSI-X vectors):
> - Before: 153 MB/s, 6-8 IRQ threads in D-state
> - After: 640 MB/s, 0 IRQ threads blocked
>
Because only the top waiter is allowed to spin on the rtmutex owner, the
D-state irq threads are expected.
OTOH a raw spinlock offers nothing special to the top waiter, which is the
extra price paid for recovering the throughput.
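
For readers following along, a minimal sketch of the locking pattern the
cover letter describes. Only the quiesce_sync_lock name and the
raw_spinlock_t type come from the series; the struct, helper and field
names below are invented for illustration and are not the actual patch.

/*
 * Sketch only -- illustrative names, not the blk-mq code.
 */
#include <linux/spinlock.h>
#include <linux/bitops.h>
#include <linux/types.h>

struct example_queue {
	/*
	 * raw_spinlock_t stays a spinning lock on PREEMPT_RT, unlike
	 * spinlock_t, which is substituted with a sleeping rt_mutex.
	 */
	raw_spinlock_t	quiesce_sync_lock;
	unsigned long	flags;		/* e.g. a QUIESCED bit */
	unsigned int	pending;	/* e.g. a dispatch counter */
};

#define EXAMPLE_QUIESCED	0

static bool example_may_dispatch(struct example_queue *q)
{
	bool may_run;

	/*
	 * The critical section only reads a flag and a counter, so it
	 * is short and bounded; that is what makes a raw spinlock
	 * tolerable here on RT.
	 */
	raw_spin_lock(&q->quiesce_sync_lock);
	may_run = !test_bit(EXAMPLE_QUIESCED, &q->flags) && q->pending > 0;
	raw_spin_unlock(&q->quiesce_sync_lock);

	return may_run;
}

The trade-off, as noted above, is that contending irq threads now
busy-wait for the short critical section instead of sleeping on an
rt_mutex.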