Message-ID: <430683f5-d271-b2f2-8f7c-486e2a4a7d42@huawei.com>
Date: Thu, 24 Feb 2022 10:43:20 +0800
From: "yukuai (C)" <yukuai3@...wei.com>
To: Ming Lei <ming.lei@...hat.com>
CC: <axboe@...nel.dk>, <linux-block@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <yi.zhang@...wei.com>
Subject: Re: [PATCH RFC] blk-mq: fix potential uaf for 'queue_hw_ctx'
On 2022/02/24 10:15, Ming Lei wrote:
>> Hi, Ming
>>
>> If blk_mq_quiesce_queue() is called from __blk_mq_update_nr_hw_queues()
>> first, then switching the elevator to none won't trigger the problem.
>> However, if blk_mq_unquiesce_queue() from the elevator switch
>> decreases quiesce_depth to 0 first, and blk_mq_quiesce_queue() is
>> called from __blk_mq_update_nr_hw_queues() afterwards, it seems to me
>> that such a concurrent scenario still exists.
>
> No, the scenario won't exist. Once blk_mq_quiesce_queue() returns, it is
> guaranteed that:
>
> - in-progress run queue is drained
> - no new run queue can be started
I understand that... What I mean by the concurrent scenario is the read
of queue_hw_ctx in blk_mq_run_hw_queues(), not the actual queue run in
blk_mq_run_hw_queue():
t1                                        t2
elevator_switch
  blk_mq_quiesce_queue   -> quiesce_depth = 1
  blk_mq_unquiesce_queue -> quiesce_depth = 0
  blk_mq_run_hw_queues
                                          __blk_mq_update_nr_hw_queues
                                            blk_mq_quiesce_queue
    queue_for_each_hw_ctx
      -> quiesce_queue can't prevent reading queue_hw_ctx
    blk_mq_run_hw_queue
      // need_run is always false, nothing to do
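
For reference, this is roughly the t1 path I am looking at (a paraphrased
sketch of queue_for_each_hw_ctx() and blk_mq_run_hw_queues(), not the
exact code in the tree):

/* blk-mq.h: every iteration dereferences q->queue_hw_ctx[i] directly */
#define queue_for_each_hw_ctx(q, hctx, i)				\
	for ((i) = 0; (i) < (q)->nr_hw_queues &&			\
	     ({ hctx = (q)->queue_hw_ctx[i]; 1; }); (i)++)

/* blk-mq.c, simplified */
void blk_mq_run_hw_queues(struct request_queue *q, bool async)
{
	struct blk_mq_hw_ctx *hctx;
	int i;

	/*
	 * queue_hw_ctx is read here, before blk_mq_run_hw_queue()
	 * ever looks at blk_queue_quiesced(). Quiescing only makes
	 * need_run false inside blk_mq_run_hw_queue(); it does not
	 * stop this loop from dereferencing queue_hw_ctx, which t2
	 * may be reallocating in __blk_mq_update_nr_hw_queues().
	 */
	queue_for_each_hw_ctx(q, hctx, i) {
		if (blk_mq_hctx_stopped(hctx))
			continue;
		blk_mq_run_hw_queue(hctx, async);
	}
}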
Am I missing something about blk_mq_quiesce_queue()?
Thanks,
Kuai