Message-Id: <20240605111055.1843-1-hdanton@sina.com>
Date: Wed, 5 Jun 2024 19:10:55 +0800
From: Hillf Danton <hdanton@...a.com>
To: Leon Romanovsky <leon@...nel.org>
Cc: Tejun Heo <tj@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Zqiang <qiang.zhang1211@...il.com>,
linux-kernel@...r.kernel.org,
Gal Pressman <gal@...dia.com>,
Tariq Toukan <tariqt@...dia.com>,
RDMA mailing list <linux-rdma@...r.kernel.org>
Subject: Re: [PATCH -rc] workqueue: Reimplement UAF fix to avoid lockdep warning
On Tue, 4 Jun 2024 21:58:04 +0300 Leon Romanovsky <leon@...nel.org> wrote:
> On Tue, Jun 04, 2024 at 06:30:49AM -1000, Tejun Heo wrote:
> > On Tue, Jun 04, 2024 at 02:38:34PM +0300, Leon Romanovsky wrote:
> > > Thanks, it is a very rare situation where a call to flush/drain a
> > > queue (in our case kthread_flush_worker) in the middle of the
> > > allocation flow can be correct. I can't remember any such case.
> > >
> > > So even though we don't fully understand the root cause, the
> > > reimplementation is still valid and improves the existing code.
> >
> > It's not valid. pwq release is async while wq free in the error path
> > isn't. The flush is there so that we finish the async part before the
> > synchronous error handling. The patch you posted can lead to a double
> > free after a pwq allocation failure. We can make the error path
> > synchronous, but the pwq free path should be updated first so that it
> > stays synchronous in the error path. Note that it *needs* to be
> > asynchronous in non-error paths, so it's going to be a bit subtle one
> > way or the other.
>
> But at that point, we haven't added the newly created WQ to any list
> which would execute that asynchronous release. Did I miss something?
>
Maybe it is more subtle than I thought, but it is not difficult to make
the wq allocation path synchronous. See if the patch below survives your
test.
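
To spell out the double free Tejun warns about: if alloc_and_link_pwqs()
fails half-way, the pwqs already created are released asynchronously, and
without the flush the error path frees the wq while the async release may
free it again. The toy program below (userspace C with hypothetical
names, not the kernel code) models how an "initializing" flag keeps the
free on exactly one side:

/*
 * Standalone model of the hazard and the fix (hypothetical names; not
 * kernel code). async_release() stands in for the kthread worker that
 * runs pwq_release_workfn(); error_path() stands in for the
 * err_free_node_nr_active unwind in alloc_workqueue().
 */
#include <stdbool.h>
#include <stdlib.h>

struct wq {
	bool initializing;		/* models __WQ_INITIALIZING */
};

/* Models pwq_release_workfn(): skip the free while init owns the wq. */
static void async_release(struct wq *wq, bool is_last)
{
	if (is_last) {
		if (wq->initializing)
			return;		/* allocation error path frees it */
		free(wq);		/* models call_rcu(..., rcu_free_wq) */
	}
}

/* Models the error unwind in alloc_workqueue(). */
static void error_path(struct wq *wq)
{
	free(wq);			/* synchronous, and now the only free */
}

int main(void)
{
	struct wq *wq = calloc(1, sizeof(*wq));

	wq->initializing = true;	/* set before alloc_and_link_pwqs() */
	async_release(wq, true);	/* pwq alloc failed half-way: no-op */
	error_path(wq);			/* single free, no double free */
	return 0;
}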
--- x/include/linux/workqueue.h
+++ y/include/linux/workqueue.h
@@ -402,6 +402,7 @@ enum wq_flags {
*/
WQ_POWER_EFFICIENT = 1 << 7,
+ __WQ_INITIALIZING = 1 << 14, /* internal: workqueue is initializing */
__WQ_DESTROYING = 1 << 15, /* internal: workqueue is destroying */
__WQ_DRAINING = 1 << 16, /* internal: workqueue is draining */
__WQ_ORDERED = 1 << 17, /* internal: workqueue is ordered */
--- x/kernel/workqueue.c
+++ y/kernel/workqueue.c
@@ -5080,6 +5080,8 @@ static void pwq_release_workfn(struct kt
* is gonna access it anymore. Schedule RCU free.
*/
if (is_last) {
+ if (wq->flags & __WQ_INITIALIZING)
+ return;
wq_unregister_lockdep(wq);
call_rcu(&wq->rcu, rcu_free_wq);
}
@@ -5714,8 +5716,10 @@ struct workqueue_struct *alloc_workqueue
goto err_unreg_lockdep;
}
+ wq->flags |= __WQ_INITIALIZING;
if (alloc_and_link_pwqs(wq) < 0)
goto err_free_node_nr_active;
+ wq->flags &= ~__WQ_INITIALIZING;
if (wq_online && init_rescuer(wq) < 0)
goto err_destroy;
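
Note the flag is cleared only once alloc_and_link_pwqs() succeeds, so a
pwq released due to a half-way failure still sees __WQ_INITIALIZING and
leaves the final free (and lockdep unregistration) to the error labels in
alloc_workqueue(), while non-error destruction stays asynchronous as
Tejun requires.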
--