Message-ID: <CAMhUBj=i4MJ6KH_UU5dy8e+DmviRg4EFA-D5zyD=XfRi9Ma=pg@mail.gmail.com>
Date: Tue, 8 Mar 2022 19:08:58 +0800
From: Zheyu Ma <zheyuma97@...il.com>
To: axboe@...nel.dk
Cc: linux-block@...r.kernel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: [BUG] block: sx8: Invalid wait context in carm_queue_rq()
Hi,

I found a bug in the sx8 driver while probing it.
I am not sure how this happens; it looks like a misuse of a lock in
carm_queue_rq(). A generic, hypothetical sketch of the kind of lock
nesting this class of warning is about is included after the trace.
With LOCKDEP=y, the following log reveals it:
[ 3.403123] =============================
[ 3.403205] [ BUG: Invalid wait context ]
[ 3.403205] 5.16.0-rc1+ #29 Not tainted
[ 3.403205] -----------------------------
[ 3.403205] kworker/5:1/68 is trying to lock:
[ 3.403205] ffff888012c80060 (&entry->access){+.+.}-{3:3}, at: carm_queue_rq+0x110/0x1290
[ 3.403205] other info that might help us debug this:
[ 3.403205] context-{4:4}
[ 3.403205] 3 locks held by kworker/5:1/68:
[ 3.403205] #0: ffff888100068d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x644/0xaf0
[ 3.403205] #1: ffff888105f17d68 ((work_completion)(&host->fsm_task)){+.+.}-{0:0}, at: process_one_work+0x68c/0xaf0
[ 3.403205] #2: ffffffff8e441b60 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x20
[ 3.403205] stack backtrace:
[ 3.403205] CPU: 5 PID: 68 Comm: kworker/5:1 Not tainted 5.16.0-rc1+ #29
[ 3.403205] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
[ 3.403205] Workqueue: events carm_fsm_task
[ 3.403205] Call Trace:
[ 3.403205] <TASK>
[ 3.403205] dump_stack_lvl+0x5d/0x78
[ 3.403205] __lock_acquire+0x144a/0x1e20
[ 3.403205] lock_acquire+0x101/0x2d0
[ 3.403205] ? carm_queue_rq+0x110/0x1290
[ 3.403205] _raw_spin_lock+0x2a/0x40
[ 3.403205] ? carm_queue_rq+0x110/0x1290
[ 3.403205] carm_queue_rq+0x110/0x1290
[ 3.403205] ? __blk_mq_get_driver_tag+0x2da/0x780
[ 3.403205] blk_mq_dispatch_rq_list+0xcd0/0x24f0
[ 3.403205] ? rcu_read_lock_sched_held+0x2f/0x70
[ 3.403205] ? lock_release+0x47e/0x720
[ 3.403205] __blk_mq_sched_dispatch_requests+0x2f8/0x3a0
[ 3.403205] blk_mq_sched_dispatch_requests+0xc1/0xf0
[ 3.403205] __blk_mq_run_hw_queue+0x86/0xe0
[ 3.403205] __blk_mq_delay_run_hw_queue+0x1b3/0x490
[ 3.403205] ? rcu_lock_acquire+0x20/0x20
[ 3.403205] blk_mq_run_hw_queue+0x137/0x300
[ 3.403205] blk_mq_sched_insert_request+0x13e/0x2c0
[ 3.403205] process_one_work+0x6d8/0xaf0
[ 3.403205] worker_thread+0x9bd/0x14a0
[ 3.403205] kthread+0x38b/0x470
[ 3.403205] ? rcu_lock_release+0x20/0x20
[ 3.403205] ? kthread_unuse_mm+0x170/0x170
[ 3.403205] ret_from_fork+0x22/0x30
[ 3.403205] </TASK>
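
For reference, below is a minimal, hypothetical sketch (not the sx8 code
path; I have not identified which locks in sx8 nest like this) of one
pattern that lockdep's wait-context checking reports as "Invalid wait
context": a spinlock_t (wait type {3:3}, a sleeping lock on PREEMPT_RT)
acquired inside a raw_spinlock_t critical section (wait type {2:2}),
with CONFIG_PROVE_RAW_LOCK_NESTING=y. All names below are made up for
illustration only.

/*
 * Hypothetical demo module: take a spinlock_t while holding a
 * raw_spinlock_t.  With CONFIG_PROVE_RAW_LOCK_NESTING=y, lockdep
 * reports "BUG: Invalid wait context" for this nesting.
 */
#include <linux/module.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(demo_raw_lock);	/* hypothetical name */
static DEFINE_SPINLOCK(demo_lock);		/* hypothetical name */

static int __init wait_ctx_demo_init(void)
{
	raw_spin_lock(&demo_raw_lock);	/* context is now {2:2} */
	spin_lock(&demo_lock);		/* {3:3} lock inside {2:2} -> splat */
	spin_unlock(&demo_lock);
	raw_spin_unlock(&demo_raw_lock);
	return 0;
}

static void __exit wait_ctx_demo_exit(void)
{
}

module_init(wait_ctx_demo_init);
module_exit(wait_ctx_demo_exit);
MODULE_LICENSE("GPL");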
Regards,
Zheyu Ma