Message-ID: <d38fe3c3-1f0b-b5f6-2895-aee9476b20bf@I-love.SAKURA.ne.jp>
Date: Fri, 18 Nov 2022 09:53:41 +0900
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To: Luiz Augusto von Dentz <luiz.dentz@...il.com>,
Thomas Gleixner <tglx@...utronix.de>
Cc: Hillf Danton <hdanton@...a.com>,
syzbot <syzbot+6fb78d577e89e69602f9@...kaller.appspotmail.com>,
linux-kernel@...r.kernel.org, pbonzini@...hat.com,
syzkaller-bugs@...glegroups.com,
Steven Rostedt <rostedt@...dmis.org>,
Marcel Holtmann <marcel@...tmann.org>
Subject: Re: [syzbot] WARNING in call_timer_fn
On 2022/11/18 6:16, Luiz Augusto von Dentz wrote:
> Wasn't the following patch supposed to address such a problem:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next.git/commit/?id=deee93d13d385103205879a8a0915036ecd83261
>
> It was merged in the last pull request to net-next:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next.git/commit/?id=a507ea32b9c2c407012bf89109ac0cf89fae313c
>
No. Commit deee93d13d38 ("Bluetooth: use hdev->workqueue when queuing hdev->{cmd,ncmd}_timer works")
handles queue_work() being called from hci_cmd_timeout(struct work_struct *work) via process_one_work()
(that is, from a kernel workqueue thread in process context).
But this report says that queue_work() is called from a timer interrupt handler (that is, in interrupt
context) while drain_workqueue(hdev->workqueue) is in progress in process context.
But... is the is_chained_work() check appropriate?
Why can't we exclude the "timer interrupt handler" case from "somebody else"?
The comment for drain_workqueue() says
* Wait until the workqueue becomes empty. While draining is in progress,
* only chain queueing is allowed. IOW, only currently pending or running
* work items on @wq can queue further work items on it. @wq is flushed
* repeatedly until it becomes empty. The number of flushing is determined
* by the depth of chaining and should be relatively short. Whine if it
* takes too long.
but why is it limited to "only currently pending or running work items on @wq" (that is,
to process context only)?
Although drain_workqueue() is also called from destroy_workqueue() (which would lead to a
use-after-free bug if an interrupt handler calls queue_work() some time after drain_workqueue()
has returned), I think that we could make drain_workqueue() call __flush_workqueue() again when
a further work item is queued from an interrupt handler...
Anyway, stopping all work items and delayed work items before calling drain_workqueue() would
address this problem.