Message-ID: <4b8aab6f-f341-49af-9ccb-d592e1a40fe5@linux.ibm.com>
Date: Wed, 15 Oct 2025 10:46:51 +0530
From: Nilay Shroff <nilay@...ux.ibm.com>
To: Ming Lei <ming.lei@...hat.com>, Yu Kuai <hailan@...uai.org.cn>
Cc: Yu Kuai <yukuai3@...wei.com>, tj@...nel.org, josef@...icpanda.com,
        axboe@...nel.dk, cgroups@...r.kernel.org, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org, yukuai1@...weicloud.com,
        yi.zhang@...wei.com, yangerkun@...wei.com, johnny.chenyi@...wei.com
Subject: Re: [PATCH 0/4] blk-rq-qos: fix possible deadlock



On 10/15/25 7:12 AM, Ming Lei wrote:
> On Tue, Oct 14, 2025 at 07:14:16PM +0800, Yu Kuai wrote:
>> Hi,
>>
>> On 2025/10/14 18:58, Nilay Shroff wrote:
>>>
>>> On 10/14/25 7:51 AM, Yu Kuai wrote:
>>>> Currently rq-qos debugfs entries are created from rq_qos_add(), while
>>>> rq_qos_add() requires the queue to be frozen. This can deadlock because
>>>> creating new entries can trigger fs reclaim.
>>>>
>>>> Fix this problem by delaying the creation of rq-qos debugfs entries
>>>> until the rq-qos initialization is complete.
>>>>
>>>> - For wbt, it can be initialized by default or by blk-sysfs; fix it by
>>>>    delaying until after wbt_init();
>>>> - For other policies, they can only be initialized by blkg configuration;
>>>>    fix it by delaying to blkg_conf_end();
>>>>
>>>> Noted this set is cooked on the top of my other thread:
>>>> https://lore.kernel.org/all/20251010091446.3048529-1-yukuai@kernel.org/
>>>>
>>>> And the deadlock can be reproduced with the above thread, by running blktests
>>>> throtl/001 with wbt enabled by default, although the deadlock itself is really
>>>> a long-standing problem.
>>>>
>>> While freezing the queue we also mark the GFP_NOIO scope, so doesn't that
>>> help avoid fs-reclaim? Or could you share the lockdep splat encountered
>>> while running throtl/001?
>>
>> Yes, we can avoid fs-reclaim if the queue is frozen; however, because debugfs
>> is a generic file system, we can't avoid fs reclaim from all contexts. There is
>> still fs reclaim from other contexts.
>>
>> Following is the log with the above set applied and wbt enabled by default;
>> the set acquires locks in the order:
>>
>> freeze queue -> elevator lock -> rq_qos_mutex -> blkcg_mutex
>>
>> However, fs-reclaim from another context causes the deadlock report.
>>
>>
>> [   45.632372][  T531] ======================================================
>> [   45.633734][  T531] WARNING: possible circular locking dependency detected
>> [   45.635062][  T531] 6.17.0-gfd4a560a0864-dirty #30 Not tainted
>> [   45.636220][  T531] ------------------------------------------------------
>> [   45.637587][  T531] check/531 is trying to acquire lock:
>> [   45.638626][  T531] ffff9473884382b0 (&q->rq_qos_mutex){+.+.}-{4:4}, at: blkg_conf_start+0x116/0x190
>> [   45.640416][  T531]
>> [   45.640416][  T531] but task is already holding lock:
>> [   45.641828][  T531] ffff9473884385d8 (&q->elevator_lock){+.+.}-{4:4}, at: blkg_conf_start+0x108/0x190
>> [   45.643322][  T531]
>> [   45.643322][  T531] which lock already depends on the new lock.
>> [   45.643322][  T531]
>> [   45.644862][  T531]
>> [   45.644862][  T531] the existing dependency chain (in reverse order) is:
>> [   45.646046][  T531]
>> [   45.646046][  T531] -> #5 (&q->elevator_lock){+.+.}-{4:4}:
>> [   45.647052][  T531]        __mutex_lock+0xd3/0x8d0
>> [   45.647716][  T531]        blkg_conf_start+0x108/0x190
>> [   45.648395][  T531]        tg_set_limit+0x74/0x300
>> [   45.649046][  T531]        kernfs_fop_write_iter+0x14a/0x210
>> [   45.649813][  T531]        vfs_write+0x29e/0x550
>> [   45.650413][  T531]        ksys_write+0x74/0xf0
>> [   45.651032][  T531]        do_syscall_64+0xbb/0x380
>> [   45.651730][  T531]        entry_SYSCALL_64_after_hwframe+0x77/0x7f
> 
> Not sure why the elevator lock is grabbed in the throttle code; that looks like
> an elevator lock misuse. What does the elevator lock try to protect here?
> 
> The commit log doesn't explain the usage either:
> 
Let's go back to the history:
The ->elevator_lock was first added in the wbt code path under commit
245618f8e45f ("block: protect wbt_lat_usec using q->elevator_lock"). It was
introduced to protect the wbt latency and state updates, which could be
simultaneously accessed from the elevator switch path, from the sysfs write
method (queue_wb_lat_store()), and from cgroup (ioc_qos_write()).

Later, the above change caused a lockdep splat, so we updated the code to fix
the locking order between ->freeze_lock, ->elevator_lock and ->rq_qos_mutex;
that was implemented in commit 9730763f4756 ("block: correct locking order for
protecting blk-wbt parameters"). With this change the locking order became:
->freeze_lock, then ->elevator_lock, then ->rq_qos_mutex.

Then later, under commit 78c271344b6f ("block: move wbt_enable_default()
out of queue freezing from sched ->exit()"), we moved the wbt latency/stat
update code out of ->freeze_lock and ->elevator_lock in the elevator switch
path. So essentially, with this commit, in theory we no longer need to acquire
->elevator_lock while updating wbt latency/stat values. In fact, we also removed
->elevator_lock from queue_wb_lat_store() in that commit, but I think we missed
removing ->elevator_lock from cgroup (ioc_qos_write()).

> 
> I think it is still an ordering issue between queue freeze and q->rq_qos_mutex,
> which needs to be solved first.
> 
So yes, we should first target getting rid of the use of ->elevator_lock
in ioc_qos_write(). Later we can decide on the locking order between
->freeze_lock, ->rq_qos_mutex and ->debugfs_mutex.
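
For clarity, what I have in mind is roughly the below: just drop the
->elevator_lock lock/unlock pair around the qos/wbt update in ioc_qos_write().
This is only an untested sketch; the context lines are written from memory and
may not match the current tree exactly:

 	memflags = blk_mq_freeze_queue(disk->queue);
 	blk_mq_quiesce_queue(disk->queue);
-	mutex_lock(&disk->queue->elevator_lock);

 	/*
 	 * ... qos parameter updates and the wbt_disable_default()/
 	 * wbt_enable_default() toggle stay as they are ...
 	 */

-	mutex_unlock(&disk->queue->elevator_lock);
 	blk_mq_unquiesce_queue(disk->queue);
 	blk_mq_unfreeze_queue(disk->queue, memflags);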

Thanks,
--Nilay




