Message-ID: <CAJNi4rMDOSq6-ba4CV88E7e-f8Wzq0e6M5bYV8LBS=LzLb7--Q@mail.gmail.com>
Date:   Thu, 8 Dec 2022 10:44:16 +0800
From:   richard clark <richard.xnu.clark@...il.com>
To:     Lai Jiangshan <jiangshanlai@...il.com>
Cc:     tj@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: work item still be scheduled to execute after destroy_workqueue?

On Wed, Dec 7, 2022 at 10:38 AM Lai Jiangshan <jiangshanlai@...il.com> wrote:
>
> On Tue, Dec 6, 2022 at 5:20 PM richard clark
> <richard.xnu.clark@...il.com> wrote:
> >
> > On Tue, Dec 6, 2022 at 2:23 PM Lai Jiangshan <jiangshanlai@...il.com> wrote:
> > >
> > > On Tue, Dec 6, 2022 at 12:35 PM richard clark
> > > <richard.xnu.clark@...il.com> wrote:
> > >
> > > > >
> > > > A WARN is definitely reasonable and has its benefits. Can I try to
> > > > submit the patch, and would you be willing to review it as maintainer?
> > > >
> > > > Thanks,
> > > > Richard
> > > > >
> > >
> > > Sure, go ahead.
> > >
> > > What's in my mind is that the following code is wrapped in a new function:
> > >
> > >         mutex_lock(&wq->mutex);
> > >         if (!wq->nr_drainers++)
> > >                 wq->flags |= __WQ_DRAINING;
> > >         mutex_unlock(&wq->mutex);
> > >
> > >
> > > and the new function replaces the open-coded version in drain_workqueue()
> > > and is also called in destroy_workqueue() (before calling drain_workqueue()).
> > >
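
If I read the suggestion right, the wrapper would look roughly like
this (the function name below is just a placeholder):

static void wq_mark_draining(struct workqueue_struct *wq)
{
        mutex_lock(&wq->mutex);
        if (!wq->nr_drainers++)
                wq->flags |= __WQ_DRAINING;
        mutex_unlock(&wq->mutex);
}

i.e. drain_workqueue() calls it instead of open-coding the above, and
destroy_workqueue() calls it before calling drain_workqueue().
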
> > Besides that, do we need to defer clearing __WQ_DRAINING to the RCU
> > callback, so that we still have a closed loop for the drainer count,
> > like this?
>
> No, I don't think we need it. The wq is totally freed in rcu_free_wq.
>
> Or we can just introduce __WQ_DESTROYING.
>
> It seems using __WQ_DESTROYING is better.

The wq->flags value becomes unreliable after kfree(wq); for example, on
my machine wq->flags reads as 0x7ec1e1a3, 0x37cff1a3 or 0x7fa23da3 ...
once the wq has been kfreed. Consequently, the result of queueing a new
work item to a freed wq is undetermined: sometimes it is fine because
queue_work() returns directly (e.g. wq->flags = 0x7ec1e1a3 looks like a
fake __WQ_DRAINING state), and sometimes it triggers a kernel NULL
pointer dereference BUG when wq->flags = 0x7fa23da3 (a fake
!__WQ_DRAINING state).
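
To make the failure mode concrete, the kind of sequence I mean is
roughly the following (illustrative only, the workqueue and work item
names are made up):

        struct workqueue_struct *my_wq;

        my_wq = alloc_workqueue("my_wq", 0, 0);
        queue_work(my_wq, &my_work);    /* my_work: some initialized work_struct */
        destroy_workqueue(my_wq);
        /*
         * Queueing after the destroy reads the freed wq->flags, so it
         * may appear to work, return early on a stale __WQ_DRAINING
         * bit, or oops, depending on what the freed memory contains.
         */
        queue_work(my_wq, &my_work);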

IMO, given the above, we can handle this in 2 phases: before the
call_rcu() and after.

a. Before call_rcu(): use __WQ_DESTROYING to control, at the
destroy_workqueue(...) level, whether chained work may still be queued,
while __WQ_DRAINING keeps drain_workqueue(...) able to work standalone.
A code snippet like this:
destroy_workqueue(...)
{
        mutex_lock(&wq->mutex);
        wq->flags |= __WQ_DESTROYING;
        mutex_unlock(&wq->mutex);
        ...
}

__queue_work(...)
{
        if (unlikely((wq->flags & __WQ_DESTROYING) ||
                     (wq->flags & __WQ_DRAINING)) &&
            WARN_ON_ONCE(!is_chained_work(wq)))
                return;
}
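
For comparison, the check in __queue_work() today only tests
__WQ_DRAINING; from memory it is roughly (treat as approximate):

__queue_work(...)
{
        ...
        if (unlikely(wq->flags & __WQ_DRAINING) &&
            WARN_ON_ONCE(!is_chained_work(wq)))
                return;
        ...
}

so the only change in phase a is that __WQ_DESTROYING also routes
through the same WARN path.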

b. After call_rcu(): what I have in mind is:
rcu_free_wq(struct rcu_head *rcu)
{
        ...
        kfree(wq);
        wq = NULL;
}

__queue_work(...)
{
        if (!wq)
                return;
        ...
}

Any comments?

>
> >
> > --- a/kernel/workqueue.c
> > +++ b/kernel/workqueue.c
> >
> > @@ -3528,6 +3526,9 @@ static void rcu_free_wq(struct rcu_head *rcu)
> >
> >         else
> >                 free_workqueue_attrs(wq->unbound_attrs);
> >
> > +       if (!--wq->nr_drainers)
> > +               wq->flags &= ~__WQ_DRAINING;
> > +
> >         kfree(wq);
> >
> > >
> > > __WQ_DRAINING will cause the needed WARN on illegally queueing items
> > > on a destroyed workqueue.
> >
> > I will re-test it if there are no concerns about the above fix...
> >
> > >
> > > Thanks
> > > Lai
