Message-Id: <c03dbc61-6df0-4640-adc7-2e6d01ad42b9@fnnas.com>
Date: Mon, 17 Nov 2025 19:59:49 +0800
From: "Yu Kuai" <yukuai@...as.com>
To: "Ming Lei" <ming.lei@...hat.com>
Cc: "Nilay Shroff" <nilay@...ux.ibm.com>, <axboe@...nel.dk>,
<linux-block@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<tj@...nel.org>, "Yu Kuai" <yukuai@...as.com>
Subject: Re: [PATCH RESEND 1/5] block/blk-rq-qos: add a new helper rq_qos_add_freezed()
Hi,
On 2025/11/17 19:54, Ming Lei wrote:
> On Mon, Nov 17, 2025 at 07:39:57PM +0800, Yu Kuai wrote:
>> Hi,
>>
>> On 2025/11/17 19:30, Ming Lei wrote:
>>> On Mon, Nov 17, 2025 at 04:43:11PM +0530, Nilay Shroff wrote:
>>>> On 11/17/25 4:31 PM, Ming Lei wrote:
>>>>> On Sun, Nov 16, 2025 at 12:10:20PM +0800, Yu Kuai wrote:
>>>>>> The queue should not be frozen under rq_qos_mutex; see, for example,
>>>>>> commit 9730763f4756 ("block: correct locking order for protecting blk-wbt
>>>>>> parameters"). This means the current implementation of rq_qos_add() is
>>>>>> problematic. Add a new helper and prepare to fix this problem in the
>>>>>> following patches.
>>>>>>
>>>>>> Signed-off-by: Yu Kuai <yukuai@...as.com>
>>>>>> ---
>>>>>> block/blk-rq-qos.c | 27 +++++++++++++++++++++++++++
>>>>>> block/blk-rq-qos.h | 2 ++
>>>>>> 2 files changed, 29 insertions(+)
>>>>>>
>>>>>> diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
>>>>>> index 654478dfbc20..353397d7e126 100644
>>>>>> --- a/block/blk-rq-qos.c
>>>>>> +++ b/block/blk-rq-qos.c
>>>>>> @@ -322,6 +322,33 @@ void rq_qos_exit(struct request_queue *q)
>>>>>> mutex_unlock(&q->rq_qos_mutex);
>>>>>> }
>>>>>>
>>>>>> +int rq_qos_add_freezed(struct rq_qos *rqos, struct gendisk *disk,
>>>>>> + enum rq_qos_id id, const struct rq_qos_ops *ops)
>>>>>> +{
>>>>>> + struct request_queue *q = disk->queue;
>>>>>> +
>>>>>> + WARN_ON_ONCE(q->mq_freeze_depth == 0);
>>>>>> + lockdep_assert_held(&q->rq_qos_mutex);
>>>>>> +
>>>>>> + if (rq_qos_id(q, id))
>>>>>> + return -EBUSY;
>>>>>> +
>>>>>> + rqos->disk = disk;
>>>>>> + rqos->id = id;
>>>>>> + rqos->ops = ops;
>>>>>> + rqos->next = q->rq_qos;
>>>>>> + q->rq_qos = rqos;
>>>>>> + blk_queue_flag_set(QUEUE_FLAG_QOS_ENABLED, q);
>>>>>> +
>>>>>> + if (rqos->ops->debugfs_attrs) {
>>>>>> + mutex_lock(&q->debugfs_mutex);
>>>>>> + blk_mq_debugfs_register_rqos(rqos);
>>>>>> + mutex_unlock(&q->debugfs_mutex);
>>>>>> + }
>>>>> It will cause more lockdep splats to let q->debugfs_mutex depend on queue freeze.
>>>>>
>>>> I think we already have that ->debugfs_mutex dependency on ->freeze_lock;
>>>> for instance:
>>>> ioc_qos_write => freeze-queue
>>>> blk_iocost_init
>>>> rq_qos_add
>>> Why is queue freeze needed in the above code path?
>>>
>>> Also, blk_iolatency_init()/rq_qos_add() doesn't freeze the queue.
>> I don't quite understand; rq_qos_add() always requires queue freeze, to
>> prevent dereferencing q->rq_qos from the IO path concurrently.
>>
>>>> and also,
>>>> queue_wb_lat_store => freeze-queue
>>>> wbt_init
>>>> rq_qos_add
>>> Not every wbt_enable_default()/wbt_init() call is made with the queue
>>> frozen, but Kuai's patchset changes all of them to freeze the queue before
>>> registering the debugfs entry, so people will complain about the new warning.
>> Yes, but same as above: rq_qos_add() from wbt_init() already freezes the
>> queue before this set, so I don't understand why there would be a new warning.
> The in-tree rq_qos_add() registers debugfs after the queue is unfrozen, but
> your patchset basically moves the queue freeze/unfreeze to the callsite of
> rq_qos_add(), so the debugfs registration is always done with the queue frozen.
>
> A dependency between queue freeze and q->debugfs_mutex is introduced in some
> code paths, such as elevator switch, blk_iolatency_init(), ...; this will
> trigger warnings because it isn't strange to run into memory allocation in
> debugfs_create_*().
Yes, I realize now that I misunderstood in my previous email: doing the debugfs
registration with the queue frozen adds a freeze -> debugfs_mutex dependency,
and memory allocation inside debugfs_create_*() can then make lockdep complain.
>
>>>>> Also, blk_mq_debugfs_register_rqos() does _not_ require the queue to be
>>>>> frozen, and it should be fine to move blk_mq_debugfs_register_rqos() out
>>>>> of the queue freeze.
>>>>>
>>>> Yes, correct, but I thought this patchset was meant only to address the
>>>> incorrect locking order between ->rq_qos_mutex and ->freeze_lock. So do you
>>>> suggest also refactoring the code to avoid the ->debugfs_mutex dependency
>>>> on ->freeze_lock? If yes, shouldn't that be handled in a separate patchset?
>>> It is fine to fix it that way, but at least no regression should be caused.
>>>
>>> More importantly, we shouldn't add a new, unnecessary dependency on queue freeze.
>> This is correct. I'll work on the v2 set to move debugfs_mutex outside of
>> the queue freeze; however, as you suggested before, we should fix this
>> incorrect lock order first. How about I put them in a single set?
> That is fine, but the patches moving debugfs_mutex should be placed before
> this patchset, which is friendlier to 'git bisect'.
Sounds good :)
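
To make sure we're on the same page for v2, here is a rough sketch of the
callsite ordering I have in mind (example_qos_init() is a made-up name, and it
assumes the debugfs registration is moved out of rq_qos_add_freezed() as
suggested above; the actual patches may differ):

static int example_qos_init(struct gendisk *disk, struct rq_qos *rqos,
			    const struct rq_qos_ops *ops)
{
	struct request_queue *q = disk->queue;
	unsigned int memflags;
	int ret;

	/* freeze first, then take rq_qos_mutex, never the other way around */
	memflags = blk_mq_freeze_queue(q);
	mutex_lock(&q->rq_qos_mutex);
	ret = rq_qos_add_freezed(rqos, disk, RQ_QOS_WBT, ops);
	mutex_unlock(&q->rq_qos_mutex);
	blk_mq_unfreeze_queue(q, memflags);

	/* debugfs registration happens outside of the frozen section */
	if (!ret && ops->debugfs_attrs) {
		mutex_lock(&q->debugfs_mutex);
		blk_mq_debugfs_register_rqos(rqos);
		mutex_unlock(&q->debugfs_mutex);
	}

	return ret;
}

This way rq_qos_mutex is only taken with the queue already frozen (which is
what rq_qos_add_freezed() asserts), and q->debugfs_mutex never nests inside
the freeze.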
Thanks,
Kuai
>
>
> Thanks,
> Ming
>
>