Message-ID: <20250528061139.MduTfTBS@linutronix.de>
Date: Wed, 28 May 2025 08:11:39 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Lyude Paul <lyude@...hat.com>
Cc: rust-for-linux@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Boqun Feng <boqun.feng@...il.com>, linux-kernel@...r.kernel.org,
Daniel Almeida <daniel.almeida@...labora.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>,
Clark Williams <clrkwllms@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
"open list:Real-time Linux (PREEMPT_RT):Keyword:PREEMPT_RT" <linux-rt-devel@...ts.linux.dev>
Subject: Re: [RFC RESEND v10 14/14] locking: Switch to
_irq_{disable,enable}() variants in cleanup guards
On 2025-05-27 18:21:55 [-0400], Lyude Paul wrote:
> diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
> index 6ea08fafa6d7b..f54e184735563 100644
> --- a/include/linux/spinlock_rt.h
> +++ b/include/linux/spinlock_rt.h
> @@ -132,6 +132,12 @@ static __always_inline void spin_unlock_irqrestore(spinlock_t *lock,
> rt_spin_unlock(lock);
> }
>
> +static __always_inline int spin_trylock_irq_disable(spinlock_t *lock)
> +{
> + return rt_spin_trylock(lock);
> +}
> +
> +
No extra blank line, please. Also, this hunk looks like it belongs in a
different patch - the one where spin_trylock_irq_disable() was introduced.
> #define spin_trylock(lock) \
> __cond_lock(lock, rt_spin_trylock(lock))
>
Sebastian