Message-ID: <CAJhGHyD7x9QLJ+uoRnbh4qOhphdxJU4c384D1Ph2tn5ktR_=kw@mail.gmail.com>
Date: Tue, 2 Sep 2025 22:19:26 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: linux-rt-devel@...ts.linux.dev, linux-kernel@...r.kernel.org,
Clark Williams <clrkwllms@...nel.org>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, Steven Rostedt <rostedt@...dmis.org>, Tejun Heo <tj@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v2 1/3] workqueue: Provide a handshake for canceling BH workers
Hello, Sebastian
On Tue, Sep 2, 2025 at 7:17 PM Sebastian Andrzej Siewior
<bigeasy@...utronix.de> wrote:
> >
> > Is it possible to use rt_mutex_init_proxy_locked(), rt_mutex_proxy_unlock()
> > and rt_mutex_wait_proxy_lock()?
> >
> > Or is it possible to add something like rt_spinlock_init_proxy_locked(),
> > rt_spinlock_proxy_unlock() and rt_spinlock_wait_proxy_lock() which work
> > the same as the rt_mutex's proxy lock primitives but for non-sleep context?
>
> I don't think so. I think the non-sleep context is the killer part. Those
> primitives are for PI, and they work by assigning the waiter's priority
> and going to sleep until "it" is done. Now if you want non-sleep then you
> would have to remain on the CPU and spin until the "work" is done. This
> spinning would work if the other task is on a remote CPU. But if both are
> on the same CPU then spinning does not work.
>
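For reference, the existing rt_mutex proxy primitives look roughly like
this (from kernel/locking/rtmutex_common.h; exact signatures may vary by
kernel version):

/* Existing rt_mutex proxy API (sleeping; used e.g. by the futex PI code): */
void rt_mutex_init_proxy_locked(struct rt_mutex_base *lock,
				struct task_struct *proxy_owner);
void rt_mutex_proxy_unlock(struct rt_mutex_base *lock);
int rt_mutex_wait_proxy_lock(struct rt_mutex_base *lock,
			     struct hrtimer_sleeper *to,
			     struct rt_mutex_waiter *waiter);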
I meant to say that the supposed rt_spinlock_wait_proxy_lock() would
work similarly to the rt_mutex proxy lock: it would wait until the
boosted task (in this case, the kthread running the BH work) calls
rt_spinlock_proxy_unlock(). At the same time it would behave like the
PREEMPT_RT version of spin_lock, where a task blocked on a spin_lock is
put into a special blocked/sleep state instead of spinning on the CPU;
that is what the "rt_spinlock" prefix is meant to convey. A rough
sketch of the idea follows below.
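To make that concrete, here is a minimal sketch of the proposed
handshake. All rt_spinlock_* names and the struct are hypothetical
(only the rt_mutex proxy primitives quoted above exist today), and the
workqueue bookkeeping is elided:

/* Hypothetical type mirroring the rt_mutex proxy lock, but rtlock-style. */
struct rt_spinlock_proxy;

/* Canceling side: runs in non-sleep context, so it must not block in
 * the rt_mutex sense. */
static void cancel_bh_work_sync_sketch(struct rt_spinlock_proxy *proxy,
				       struct task_struct *bh_kthread)
{
	/*
	 * Make the kthread running the BH work the proxy owner so it
	 * inherits the canceller's priority (PI boosting).
	 */
	rt_spinlock_init_proxy_locked(proxy, bh_kthread);

	/*
	 * Block in the PREEMPT_RT spin_lock style: a special
	 * blocked/sleep state rather than spinning on the CPU, until
	 * the owner calls rt_spinlock_proxy_unlock().
	 */
	rt_spinlock_wait_proxy_lock(proxy);
}

/* BH worker side: once the work item is done, drop the boost and
 * release the canceller. */
static void bh_work_done_sketch(struct rt_spinlock_proxy *proxy)
{
	rt_spinlock_proxy_unlock(proxy);
}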
By the way, I'm not objecting to this patch; I just want to explore
whether there might be other options.
Thanks
Lai