Message-ID: <875yfdkqm1.ffs@tglx>
Date:   Fri, 18 Nov 2022 02:17:58 +0100
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
        Luiz Augusto von Dentz <luiz.dentz@...il.com>
Cc:     Hillf Danton <hdanton@...a.com>,
        syzbot <syzbot+6fb78d577e89e69602f9@...kaller.appspotmail.com>,
        linux-kernel@...r.kernel.org, pbonzini@...hat.com,
        syzkaller-bugs@...glegroups.com,
        Steven Rostedt <rosted@...dmis.org>,
        Marcel Holtmann <marcel@...tmann.org>
Subject: Re: [syzbot] WARNING in call_timer_fn

On Fri, Nov 18 2022 at 09:53, Tetsuo Handa wrote:
> On 2022/11/18 6:16, Luiz Augusto von Dentz wrote:
> The comment for drain_workqueue() says
>
>  * Wait until the workqueue becomes empty.  While draining is in progress,
>  * only chain queueing is allowed.  IOW, only currently pending or running
>  * work items on @wq can queue further work items on it.  @wq is flushed
>  * repeatedly until it becomes empty.  The number of flushing is determined
>  * by the depth of chaining and should be relatively short.  Whine if it
>  * takes too long.
>
> but why is it limited to "only currently pending or running work items on @wq" (that is,
> only process context)?
>
> Although drain_workqueue() is also called from destroy_workqueue() (which would cause a
> use-after-free bug if an interrupt handler calls queue_work() some time after drain_workqueue()),
> I think that we could have drain_workqueue() call __flush_workqueue() again if a further
> work item is queued from an interrupt handler...

Which is correct, because at that point it expects to accept only the
pending and chained pending work; otherwise it would go around in circles
and/or simply be unable to provide the functionality of draining the
workqueue, right?
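
To make the "chain queueing" case concrete, here is a minimal sketch with
hypothetical names (not taken from the driver under discussion): a work
item that is already running on the workqueue may queue further work on
the same workqueue while drain_workqueue() flushes it repeatedly, but a
queue_work() coming from a timer or interrupt handler at that point is
not part of any chain, so it hits the WARN_ON_ONCE() in __queue_work()
and gets dropped instead of being waited for.

#include <linux/workqueue.h>
#include <linux/atomic.h>
#include <linux/errno.h>

static struct workqueue_struct *example_wq;

static void chain_work_fn(struct work_struct *work)
{
        static atomic_t depth = ATOMIC_INIT(0);

        /* Allowed while draining: the currently running work item
         * re-queues work on the same workqueue (chain queueing). */
        if (atomic_inc_return(&depth) < 3)
                queue_work(example_wq, work);
}

static DECLARE_WORK(chain_work, chain_work_fn);

static int example_setup(void)
{
        example_wq = alloc_workqueue("example_wq", 0, 0);
        return example_wq ? 0 : -ENOMEM;
}

static void example_exit(void)
{
        queue_work(example_wq, &chain_work);

        /* drain_workqueue() flushes repeatedly until the chain above
         * terminates.  A queue_work() from a timer or interrupt handler
         * at this point is not chained work, so it is dropped with a
         * warning rather than waited for. */
        drain_workqueue(example_wq);
        destroy_workqueue(example_wq);
}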

> Anyway, stopping all work items and delayed work items before calling
> drain_workqueue() would address this problem.

Only partially.

You also have to make sure that none of the work items can be rearmed or
rescheduled after that point by any other context, e.g. interrupts ...
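
Something along the lines of the ordering below, a minimal sketch with
hypothetical names (not the actual fix for the driver in question): first
make sure that no other context, timer or interrupt handler, can arm or
queue anything anymore, then cancel what is already in flight, and only
then tear down the workqueue.

#include <linux/timer.h>
#include <linux/workqueue.h>

struct example_dev {
        struct workqueue_struct *wq;
        struct timer_list timer;
        struct work_struct work;
        bool shutting_down;     /* hypothetical: handlers check this via
                                   READ_ONCE() before rearming/queueing */
};

static void example_teardown(struct example_dev *dev)
{
        /* 1) Stop new arming/queueing from any context. */
        WRITE_ONCE(dev->shutting_down, true);

        /* 2) Wait out the timer; it must not rearm itself after this. */
        del_timer_sync(&dev->timer);

        /* 3) Cancel/flush work that is already queued or running. */
        cancel_work_sync(&dev->work);

        /* 4) Nothing can queue anymore, so the drain done inside
         *    destroy_workqueue() is guaranteed to terminate. */
        destroy_workqueue(dev->wq);
}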

Thanks,

        tglx
