Message-Id: <8cca91f6-cfe2-4ef7-a072-dd48c3ee243b@fnnas.com>
Date: Mon, 17 Nov 2025 19:39:57 +0800
From: "Yu Kuai" <yukuai@...as.com>
To: "Ming Lei" <ming.lei@...hat.com>, "Nilay Shroff" <nilay@...ux.ibm.com>
Cc: <axboe@...nel.dk>, <linux-block@...r.kernel.org>, 
	<linux-kernel@...r.kernel.org>, <tj@...nel.org>, 
	"Yu Kuai" <yukuai@...as.com>
Subject: Re: [PATCH RESEND 1/5] block/blk-rq-qos: add a new helper rq_qos_add_freezed()

Hi,

On 2025/11/17 19:30, Ming Lei wrote:
> On Mon, Nov 17, 2025 at 04:43:11PM +0530, Nilay Shroff wrote:
>>
>> On 11/17/25 4:31 PM, Ming Lei wrote:
>>> On Sun, Nov 16, 2025 at 12:10:20PM +0800, Yu Kuai wrote:
>>>> The queue should not be frozen under rq_qos_mutex; see for example
>>>> commit 9730763f4756 ("block: correct locking order for protecting blk-wbt
>>>> parameters"). This means the current implementation of rq_qos_add() is
>>>> problematic. Add a new helper, in preparation for fixing this problem in
>>>> the following patches.
>>>>
>>>> Signed-off-by: Yu Kuai <yukuai@...as.com>
>>>> ---
>>>>   block/blk-rq-qos.c | 27 +++++++++++++++++++++++++++
>>>>   block/blk-rq-qos.h |  2 ++
>>>>   2 files changed, 29 insertions(+)
>>>>
>>>> diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
>>>> index 654478dfbc20..353397d7e126 100644
>>>> --- a/block/blk-rq-qos.c
>>>> +++ b/block/blk-rq-qos.c
>>>> @@ -322,6 +322,33 @@ void rq_qos_exit(struct request_queue *q)
>>>>   	mutex_unlock(&q->rq_qos_mutex);
>>>>   }
>>>>   
>>>> +int rq_qos_add_freezed(struct rq_qos *rqos, struct gendisk *disk,
>>>> +		       enum rq_qos_id id, const struct rq_qos_ops *ops)
>>>> +{
>>>> +	struct request_queue *q = disk->queue;
>>>> +
>>>> +	WARN_ON_ONCE(q->mq_freeze_depth == 0);
>>>> +	lockdep_assert_held(&q->rq_qos_mutex);
>>>> +
>>>> +	if (rq_qos_id(q, id))
>>>> +		return -EBUSY;
>>>> +
>>>> +	rqos->disk = disk;
>>>> +	rqos->id = id;
>>>> +	rqos->ops = ops;
>>>> +	rqos->next = q->rq_qos;
>>>> +	q->rq_qos = rqos;
>>>> +	blk_queue_flag_set(QUEUE_FLAG_QOS_ENABLED, q);
>>>> +
>>>> +	if (rqos->ops->debugfs_attrs) {
>>>> +		mutex_lock(&q->debugfs_mutex);
>>>> +		blk_mq_debugfs_register_rqos(rqos);
>>>> +		mutex_unlock(&q->debugfs_mutex);
>>>> +	}
>>> Letting q->debugfs_mutex depend on queue freeze will cause more lockdep
>>> splats.
>>>
>> I think we already have that ->debugfs_mutex dependency on ->freeze_lock;
>> for instance:
>>    ioc_qos_write  => freeze-queue
>>     blk_iocost_init
>>       rq_qos_add
> Why is queue freeze needed in the above code path?
>
> Also, blk_iolatency_init()/rq_qos_add() doesn't freeze the queue.

I don't quite understand; rq_qos_add() always requires the queue to be frozen,
to prevent the IO path from dereferencing q->rq_qos concurrently.
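
To illustrate, the new helper is meant to be called with the queue already
frozen and rq_qos_mutex already held, in that order. A minimal sketch of the
intended caller pattern (my_policy_init/my_rqos/my_ops are hypothetical names,
and the freeze API here assumes the memflags variant of recent kernels):

/*
 * Sketch only: take the locks in the order required by commit
 * 9730763f4756, i.e. freeze the queue first, then rq_qos_mutex, and
 * only then link the rqos via rq_qos_add_freezed().
 */
static int my_policy_init(struct gendisk *disk, struct rq_qos *my_rqos,
                          const struct rq_qos_ops *my_ops)
{
        struct request_queue *q = disk->queue;
        unsigned int memflags;
        int ret;

        memflags = blk_mq_freeze_queue(q);      /* quiesce the IO path */
        mutex_lock(&q->rq_qos_mutex);           /* then protect q->rq_qos */

        ret = rq_qos_add_freezed(my_rqos, disk, RQ_QOS_WBT, my_ops);

        mutex_unlock(&q->rq_qos_mutex);
        blk_mq_unfreeze_queue(q, memflags);
        return ret;
}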

>
>> and also,
>>    queue_wb_lat_store  => freeze-queue
>>      wbt_init
>>        rq_qos_add
> Not all wbt_enable_default()/wbt_init() calls are made with the queue frozen,
> but Kuai's patchset changes them all to freeze the queue before registering
> the debugfs entry; people will complain about new warnings.

Yes, but as above, rq_qos_add() called from wbt_init() always froze the queue
before this set, so I don't understand why there would be a new warning.
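
To spell out the pre-existing ordering (simplified call chain, same style as
above; the exact layout in current mainline may differ slightly):

   queue_wb_lat_store()                        /* sysfs store */
     blk_mq_freeze_queue(q)                    /* freeze first */
       wbt_init(disk)
         rq_qos_add(...)
           mutex_lock(&q->debugfs_mutex)       /* already nests inside freeze */
           blk_mq_debugfs_register_rqos(rqos)
           mutex_unlock(&q->debugfs_mutex)
     blk_mq_unfreeze_queue(q)

So the debugfs_mutex-inside-freeze dependency already exists before this set.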

>
>>> Also, blk_mq_debugfs_register_rqos() does _not_ require the queue to be
>>> frozen, and it should be fine to move blk_mq_debugfs_register_rqos() out
>>> of queue freeze.
>>>
>> Yes, correct, but I thought this patchset was meant only to address the
>> incorrect locking order between ->rq_qos_mutex and ->freeze_lock. So do you
>> suggest also refactoring the code to avoid the ->debugfs_mutex dependency
>> on ->freeze_lock? If yes, shouldn't that be handled in a separate patchset?
> It is fine to fix it that way, but at the very least it shouldn't cause a
> regression.
>
> More importantly, we shouldn't add a new, unnecessary dependency on queue
> freeze.

That's correct. I'll work on a v2 set that moves debugfs_mutex outside of
queue freeze; however, as you suggested before, we should fix the incorrect
lock order first. How about I put both changes in a single set?
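
Concretely, the v2 flow would look roughly like this (untested sketch; the
debugfs registration would move out of rq_qos_add_freezed() and out of the
frozen section):

        /*
         * Proposed v2 ordering: update q->rq_qos under freeze +
         * rq_qos_mutex, then register debugfs only after the queue is
         * unfrozen, so q->debugfs_mutex no longer depends on queue
         * freeze.
         */
        memflags = blk_mq_freeze_queue(q);
        mutex_lock(&q->rq_qos_mutex);
        ret = rq_qos_add_freezed(rqos, disk, id, ops); /* no debugfs here */
        mutex_unlock(&q->rq_qos_mutex);
        blk_mq_unfreeze_queue(q, memflags);

        if (!ret && rqos->ops->debugfs_attrs) {
                mutex_lock(&q->debugfs_mutex);         /* outside freeze */
                blk_mq_debugfs_register_rqos(rqos);
                mutex_unlock(&q->debugfs_mutex);
        }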

>
> Thanks,
> Ming
>
>
