Date:   Tue, 6 Dec 2022 17:20:17 +0800
From:   richard clark <richard.xnu.clark@...il.com>
To:     Lai Jiangshan <jiangshanlai@...il.com>
Cc:     tj@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: work item still be scheduled to execute after destroy_workqueue?

On Tue, Dec 6, 2022 at 2:23 PM Lai Jiangshan <jiangshanlai@...il.com> wrote:
>
> On Tue, Dec 6, 2022 at 12:35 PM richard clark
> <richard.xnu.clark@...il.com> wrote:
>
> > >
> > A WARN is definitely reasonable and has its benefits. Can I try to
> > submit the patch, and would you be willing to review it as maintainer?
> >
> > Thanks,
> > Richard
> > >
>
> Sure, go ahead.
>
> What I have in mind is that the following code is wrapped in a new function:
>
>         mutex_lock(&wq->mutex);
>         if (!wq->nr_drainers++)
>                 wq->flags |= __WQ_DRAINING;
>         mutex_unlock(&wq->mutex);
>
>
> and the new function replaces the open-coded block in drain_workqueue() and
> is also called in destroy_workqueue() (before calling drain_workqueue()).
>
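For clarity, here is a rough sketch of the helper I understand you to be
describing (the name wq_start_draining is only a placeholder, not existing
code):

static void wq_start_draining(struct workqueue_struct *wq)
{
        mutex_lock(&wq->mutex);
        if (!wq->nr_drainers++)
                wq->flags |= __WQ_DRAINING;
        mutex_unlock(&wq->mutex);
}

drain_workqueue() would then call this helper instead of the open-coded
block, and destroy_workqueue() would call it before drain_workqueue().
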
Besides that, do we need to defer clearing __WQ_DRAINING to the RCU
callback (rcu_free_wq()), so that we still have a closed loop on the
drainer count, like this?

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3528,6 +3526,9 @@ static void rcu_free_wq(struct rcu_head *rcu)
        else
                free_workqueue_attrs(wq->unbound_attrs);

+       if (!--wq->nr_drainers)
+               wq->flags &= ~__WQ_DRAINING;
+
        kfree(wq);

>
> __WQ_DRAINING will cause the needed WARN when items are illegally queued
> on a destroyed workqueue.
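
For reference, the check in __queue_work() that __WQ_DRAINING arms looks
like this (quoting from memory, the exact form may differ slightly between
kernel versions):

        /* if draining, only works from the same workqueue are allowed */
        if (unlikely(wq->flags & __WQ_DRAINING) &&
            WARN_ON_ONCE(!is_chained_work(wq)))
                return;

so keeping the flag set until rcu_free_wq() runs would make any late
queue_work() on the dying workqueue trip that WARN.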

I will re-test it if there are no concerns about the above fix...

>
> Thanks
> Lai
