Message-ID: <CAJhGHyBaqn_HOoHX+YinKW5YSy1rncfbvYXktkEtmFgK44E9wg@mail.gmail.com>
Date: Tue, 2 Sep 2025 18:12:10 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: linux-rt-devel@...ts.linux.dev, linux-kernel@...r.kernel.org,
Clark Williams <clrkwllms@...nel.org>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, Steven Rostedt <rostedt@...dmis.org>, Tejun Heo <tj@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v2 1/3] workqueue: Provide a handshake for canceling BH workers
Hello
On Tue, Sep 2, 2025 at 12:38 AM Sebastian Andrzej Siewior
<bigeasy@...utronix.de> wrote:
>
> While a BH work item is being canceled, the core code spins until it
> determines that the item has completed. On PREEMPT_RT the spinning relies
> on a lock in local_bh_disable() to avoid a livelock if the canceling
> thread has higher priority than the BH-worker and preempts it. This lock
> ensures that the BH-worker makes progress by PI-boosting it.
>
> This lock in local_bh_disable() is a central per-CPU BKL and about to be
> removed.
>
> To provide the required synchronisation, add a per-pool lock. The lock is
> acquired by the bh_worker at the beginning and held while the individual
> callbacks are invoked. To enforce progress in case of interruption,
> __flush_work() needs to acquire the lock.
> This will flush all BH-work items assigned to that pool.
>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> ---
> kernel/workqueue.c | 51 ++++++++++++++++++++++++++++++++++++++--------
> 1 file changed, 42 insertions(+), 9 deletions(-)
>
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index c6b79b3675c31..94e226f637992 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -222,7 +222,9 @@ struct worker_pool {
> struct workqueue_attrs *attrs; /* I: worker attributes */
> struct hlist_node hash_node; /* PL: unbound_pool_hash node */
> int refcnt; /* PL: refcnt for unbound pools */
> -
> +#ifdef CONFIG_PREEMPT_RT
> + spinlock_t cb_lock; /* BH worker cancel lock */
> +#endif
> /*
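
For context, here is a minimal sketch of the handshake the changelog
describes, assuming the rest of the patch wraps the callback loop and the
flush path roughly as below (the helpers are illustrative, not the actual
patch code):

#ifdef CONFIG_PREEMPT_RT
/*
 * bh_worker() side: hold the per-pool lock across the whole callback
 * batch so that a canceling task contending on it PI-boosts the worker.
 */
static void bh_worker_run_callbacks(struct worker_pool *pool)
{
        spin_lock(&pool->cb_lock);
        /* ... invoke the pending BH work items ... */
        spin_unlock(&pool->cb_lock);
}

/*
 * __flush_work() side for a BH item: taking the lock only succeeds once
 * the current callback batch has finished, so this waits for every
 * BH-work item of the pool, not just the one being flushed.
 */
static void bh_flush_wait(struct worker_pool *pool)
{
        spin_lock(&pool->cb_lock);
        spin_unlock(&pool->cb_lock);
}
#endif

Since spinlock_t is a sleeping, PI-aware lock on PREEMPT_RT, contending on
cb_lock is what provides the boost the changelog mentions.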
Is it possible to use rt_mutex_init_proxy_locked(), rt_mutex_proxy_unlock()
and rt_mutex_wait_proxy_lock()?
Or is it possible to add something like rt_spinlock_init_proxy_locked(),
rt_spinlock_proxy_unlock() and rt_spinlock_wait_proxy_lock(), which would
work the same as the rt_mutex proxy-lock primitives but in non-sleeping
context?
I think they would work as an RT variant of struct completion, and they
could be used in __flush_work() for BH work on PREEMPT_RT in the same way
that wait_for_completion() is used for normal work.
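
To make that concrete, one possible shape, where every type and function
below is made up purely for illustration (only the rt_mutex proxy
primitives named above exist today):

/* Hypothetical RT-safe stand-in for struct completion. */
struct bh_work_barrier {
        struct rt_spinlock_proxy        done;   /* made-up type */
};

/* __flush_work() path for a BH item on PREEMPT_RT: */
static void bh_barrier_wait(struct bh_work_barrier *barr,
                            struct task_struct *bh_worker)
{
        /*
         * Mark the BH worker as the owner, in the spirit of
         * rt_mutex_init_proxy_locked(), so blocking on the lock
         * PI-boosts the worker. Ordering against queueing the
         * barrier work item is glossed over here.
         */
        rt_spinlock_init_proxy_locked(&barr->done, bh_worker);

        /* Block until the worker releases it, like wait_for_completion(). */
        rt_spinlock_wait_proxy_lock(&barr->done);
}

/* BH worker side, once the barrier work item has run: */
static void bh_barrier_complete(struct bh_work_barrier *barr)
{
        /* Counterpart of complete(), like rt_mutex_proxy_unlock(). */
        rt_spinlock_proxy_unlock(&barr->done);
}

If something like this works, the flusher would wait for just the one work
item while still boosting the BH worker, rather than flushing the whole
pool.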
Thanks
Lai