Date:   Fri, 20 May 2022 18:56:22 +0800
From:   "yukuai (C)" <yukuai3@...wei.com>
To:     Ming Lei <ming.lei@...hat.com>
CC:     <axboe@...nel.dk>, <linux-block@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <yi.zhang@...wei.com>
Subject: Re: [PATCH -next v2] blk-mq: fix panic during blk_mq_run_work_fn()

On 2022/05/20 17:53, Ming Lei wrote:
> On Fri, May 20, 2022 at 04:49:19PM +0800, yukuai (C) wrote:
>> On 2022/05/20 16:34, Ming Lei wrote:
>>> On Fri, May 20, 2022 at 03:02:13PM +0800, yukuai (C) wrote:
>>>> On 2022/05/20 14:23, yukuai (C) wrote:
>>>>> On 2022/05/20 11:44, Ming Lei wrote:
>>>>>> On Fri, May 20, 2022 at 11:25:42AM +0800, Yu Kuai wrote:
>>>>>>> Our test reported the following crash:
>>>>>>>
>>>>>>> BUG: kernel NULL pointer dereference, address: 0000000000000018
>>>>>>> PGD 0 P4D 0
>>>>>>> Oops: 0000 [#1] SMP NOPTI
>>>>>>> CPU: 6 PID: 265 Comm: kworker/6:1H Kdump: loaded Tainted: G
>>>>>>> O      5.10.0-60.17.0.h43.eulerosv2r11.x86_64 #1
>>>>>>> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
>>>>>>> rel-1.12.1-0-ga5cab58-20220320_160524-szxrtosci10000 04/01/2014
>>>>>>> Workqueue: kblockd blk_mq_run_work_fn
>>>>>>> RIP: 0010:blk_mq_delay_run_hw_queues+0xb6/0xe0
>>>>>>> RSP: 0018:ffffacc6803d3d88 EFLAGS: 00010246
>>>>>>> RAX: 0000000000000006 RBX: ffff99e2c3d25008 RCX: 00000000ffffffff
>>>>>>> RDX: 0000000000000000 RSI: 0000000000000003 RDI: ffff99e2c911ae18
>>>>>>> RBP: ffffacc6803d3dd8 R08: 0000000000000000 R09: ffff99e2c0901f6c
>>>>>>> R10: 0000000000000018 R11: 0000000000000018 R12: ffff99e2c911ae18
>>>>>>> R13: 0000000000000000 R14: 0000000000000003 R15: ffff99e2c911ae18
>>>>>>> FS:  0000000000000000(0000) GS:ffff99e6bbf00000(0000)
>>>>>>> knlGS:0000000000000000
>>>>>>> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>>>>>> CR2: 0000000000000018 CR3: 000000007460a006 CR4: 00000000003706e0
>>>>>>> Call Trace:
>>>>>>>     __blk_mq_do_dispatch_sched+0x2a7/0x2c0
>>>>>>>     ? newidle_balance+0x23e/0x2f0
>>>>>>>     __blk_mq_sched_dispatch_requests+0x13f/0x190
>>>>>>>     blk_mq_sched_dispatch_requests+0x30/0x60
>>>>>>>     __blk_mq_run_hw_queue+0x47/0xd0
>>>>>>>     process_one_work+0x1b0/0x350
>>>>>>>     worker_thread+0x49/0x300
>>>>>>>     ? rescuer_thread+0x3a0/0x3a0
>>>>>>>     kthread+0xfe/0x140
>>>>>>>     ? kthread_park+0x90/0x90
>>>>>>>     ret_from_fork+0x22/0x30
>>>>>>>
>>>>>>> After digging into the vmcore, I found that the queue has been
>>>>>>> cleaned up (blk_cleanup_queue() is done) and the tag set has been
>>>>>>> freed (blk_mq_free_tag_set() is done).
>>>>>>>
>>>>>>> There are two problems here:
>>>>>>>
>>>>>>> 1) blk_mq_delay_run_hw_queues() will only be called from
>>>>>>> __blk_mq_do_dispatch_sched() if e->type->ops.has_work() returns true.
>>>>>>> This seems impossible because blk_cleanup_queue() is done, and there
>>>>>>> should be no I/O. Commit ddc25c86b466 ("block, bfq: make bfq_has_work()
>>>>>>> more accurate") fixed the problem in bfq, and currently other
>>>>>>> schedulers don't have such a problem.
>>>>>>>
>>>>>>> 2) 'hctx->run_work' still exists after blk_cleanup_queue().
>>>>>>> blk_mq_cancel_work_sync() is called from blk_cleanup_queue() to cancel
>>>>>>> all the 'run_work'. However, there is no guarantee that new 'run_work'
>>>>>>> won't be queued after that (and before blk_mq_exit_queue() is done).
>>>>>>
>>>>>> It is the blk_mq_run_hw_queue() caller's responsibility to grab
>>>>>> ->q_usage_counter to avoid the queue being cleaned up, so please fix
>>>>>> the caller side.
>>>>>>
>>>>> Hi,
>>>>>
>>>>> Thanks for your advice.
>>>>>
>>>>> blk_mq_run_hw_queue() can be called asynchronously; to support that,
>>>>> what I can think of is grabbing 'q_usage_counter' before queuing
>>>>> 'run_work' and releasing it afterwards, which is very similar to this
>>>>> patch...
>>>>
>>>> Hi,
>>>>
>>>> What do you think about the following change:
>>>>
>>>
>>> I think the issue is in blk_mq_map_queue_type(), which may touch the tagset.
>>>
>>> So please try the following patch:
>>>
>>>
>>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>>> index ed1869a305c4..5789e971ac83 100644
>>> --- a/block/blk-mq.c
>>> +++ b/block/blk-mq.c
>>> @@ -2174,8 +2174,7 @@ static bool blk_mq_has_sqsched(struct request_queue *q)
>>>     */
>>>    static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
>>>    {
>>> -	struct blk_mq_hw_ctx *hctx;
>>> -
>>> +	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
>>>    	/*
>>>    	 * If the IO scheduler does not respect hardware queues when
>>>    	 * dispatching, we just don't bother with multiple HW queues and
>>> @@ -2183,8 +2182,8 @@ static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
>>>    	 * just causes lock contention inside the scheduler and pointless cache
>>>    	 * bouncing.
>>>    	 */
>>> -	hctx = blk_mq_map_queue_type(q, HCTX_TYPE_DEFAULT,
>>> -				     raw_smp_processor_id());
>>> +	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, 0, ctx);
>>> +
>>>    	if (!blk_mq_hctx_stopped(hctx))
>>>    		return hctx;
>>>    	return NULL;
>>
>> Hi, Ming
>>
>> This patch does make sense; however, it doesn't fix the root cause, it
> 
> Isn't the root cause that the tagset is referenced after blk_cleanup_queue()
> returns?

No, it's not the root cause. If we can make sure 'hctx->run_work' won't
exist after blk_cleanup_queue(), such a problem won't be triggered.

Actually, blk_cleanup_queue() already calls blk_mq_cancel_work_sync() to
do that; however, a new 'hctx->run_work' can be queued after that.
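
To illustrate the idea, here is a minimal sketch (hypothetical helper
name, not the posted patch): pin the queue with 'q_usage_counter' while
'run_work' is pending, so blk_cleanup_queue() cannot complete underneath
it:

/*
 * Sketch only: hold a q_usage_counter reference across the async run
 * so the queue cannot be cleaned up while hctx->run_work is pending.
 */
static void sketch_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx,
				      unsigned long msecs)
{
	struct request_queue *q = hctx->queue;

	/* Queue is being cleaned up: refuse to queue new work. */
	if (!percpu_ref_tryget(&q->q_usage_counter))
		return;

	kblockd_mod_delayed_work_on(blk_mq_hctx_next_cpu(hctx),
				    &hctx->run_work,
				    msecs_to_jiffies(msecs));
	/*
	 * The matching percpu_ref_put(&q->q_usage_counter) would go at
	 * the end of blk_mq_run_work_fn(), after the dispatch finishes.
	 */
}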
> 
>> just bypasses the problem, like commit ddc25c86b466 ("block, bfq: make
>> bfq_has_work() more accurate"), which will prevent
>> blk_mq_delay_run_hw_queues() from being called in such a case.
> 
> How can that be?
See the call trace:

__blk_mq_do_dispatch_sched+0x2a7/0x2c0
? newidle_balance+0x23e/0x2f0
__blk_mq_sched_dispatch_requests+0x13f/0x190
blk_mq_sched_dispatch_requests+0x30/0x60
__blk_mq_run_hw_queue+0x47/0xd0
process_one_work+0x1b0/0x350 -> hctx->run_work

Details of how blk_mq_delay_run_hw_queues() is called:
__blk_mq_do_dispatch_sched
  if (e->type->ops.has_work && !e->type->ops.has_work(hctx))
   break;  -> has_work has to return true

  rq = e->type->ops.dispatch_request(hctx);
  if (!rq)
   run_queue = true;
   break;  -> dispatch has to fail

  if (run_queue)
   blk_mq_delay_run_hw_queues(q, BLK_MQ_BUDGET_DELAY);

Thus, if 'has_work' is accurate, blk_mq_delay_run_hw_queues() won't be
called when there is no I/O.
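
For illustration, a sketch of what an accurate ->has_work() looks like
('sketch_sched_data' and 'nr_queued' are made up; this is not the actual
bfq code from commit ddc25c86b466): report pending work only when
requests are really queued, so the run_queue branch above is never taken
on a drained, dying queue:

/*
 * Sketch only: an accurate ->has_work() keeps
 * __blk_mq_do_dispatch_sched() from re-arming run_work for a queue
 * that has no I/O left after blk_cleanup_queue().
 */
static bool sketch_has_work(struct blk_mq_hw_ctx *hctx)
{
	/* hypothetical scheduler-private data */
	struct sketch_sched_data *sd = hctx->queue->elevator->elevator_data;

	/* Lockless read; callers tolerate a slightly stale answer. */
	return READ_ONCE(sd->nr_queued) > 0;
}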
> 
>>
>> I do think we need to make sure 'run_work' doesn't exist after
>> blk_cleanup_queue().
> 
> Both the hctx and the request queue are fine to reference after
> blk_cleanup_queue() returns; what can't be referenced is the tagset.

I agree with that; however, I think we still need to reach an agreement
about the root cause of this problem...

Thanks,
Kuai
