Message-ID: <20251120214616.14386-17-lyude@redhat.com>
Date: Thu, 20 Nov 2025 16:46:08 -0500
From: Lyude Paul <lyude@...hat.com>
To: rust-for-linux@...r.kernel.org,
linux-kernel@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>
Cc: Boqun Feng <boqun.feng@...il.com>,
Daniel Almeida <daniel.almeida@...labora.com>,
Miguel Ojeda <ojeda@...nel.org>,
Alex Gaynor <alex.gaynor@...il.com>,
Gary Guo <gary@...yguo.net>,
Björn Roy Baron <bjorn3_gh@...tonmail.com>,
Benno Lossin <lossin@...nel.org>,
Andreas Hindborg <a.hindborg@...nel.org>,
Alice Ryhl <aliceryhl@...gle.com>,
Trevor Gross <tmgross@...ch.edu>,
Danilo Krummrich <dakr@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>
Subject: [PATCH v14 16/16] locking: Switch to _irq_{disable,enable}() variants in cleanup guards
From: Boqun Feng <boqun.feng@...il.com>
The semantics of the various IRQ-disabling guards match what the
*_irq_{disable,enable}() primitives provide, i.e. the interrupt
disabling is properly nested, so it is safe to switch the guards over
to the *_irq_{disable,enable}() primitives.
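For illustration only (not part of the diff), a minimal usage sketch
with a hypothetical demo_lock/demo_count pair: callers keep the same
shape, but the guard below now expands to spin_lock_irq_disable()/
spin_unlock_irq_enable() instead of spin_lock_irqsave()/
spin_unlock_irqrestore(), and the guard object no longer carries an
"unsigned long flags" member:

	/* Hypothetical example, only to illustrate the changed expansion. */
	static DEFINE_SPINLOCK(demo_lock);
	static int demo_count;

	static void demo_increment(void)
	{
		guard(spinlock_irqsave)(&demo_lock);
		demo_count++;
		/* interrupts re-enabled (nested) when the guard goes out of scope */
	}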
Signed-off-by: Boqun Feng <boqun.feng@...il.com>
---
V10:
* Add PREEMPT_RT build fix from Guangbo Cui
include/linux/spinlock.h | 26 ++++++++++++--------------
1 file changed, 12 insertions(+), 14 deletions(-)
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index bbbee61c6f5df..72bb6ae5319c7 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -568,10 +568,10 @@ DEFINE_LOCK_GUARD_1(raw_spinlock_nested, raw_spinlock_t,
raw_spin_unlock(_T->lock))
DEFINE_LOCK_GUARD_1(raw_spinlock_irq, raw_spinlock_t,
- raw_spin_lock_irq(_T->lock),
- raw_spin_unlock_irq(_T->lock))
+ raw_spin_lock_irq_disable(_T->lock),
+ raw_spin_unlock_irq_enable(_T->lock))
-DEFINE_LOCK_GUARD_1_COND(raw_spinlock_irq, _try, raw_spin_trylock_irq(_T->lock))
+DEFINE_LOCK_GUARD_1_COND(raw_spinlock_irq, _try, raw_spin_trylock_irq_disable(_T->lock))
DEFINE_LOCK_GUARD_1(raw_spinlock_bh, raw_spinlock_t,
raw_spin_lock_bh(_T->lock),
@@ -580,12 +580,11 @@ DEFINE_LOCK_GUARD_1(raw_spinlock_bh, raw_spinlock_t,
DEFINE_LOCK_GUARD_1_COND(raw_spinlock_bh, _try, raw_spin_trylock_bh(_T->lock))
DEFINE_LOCK_GUARD_1(raw_spinlock_irqsave, raw_spinlock_t,
- raw_spin_lock_irqsave(_T->lock, _T->flags),
- raw_spin_unlock_irqrestore(_T->lock, _T->flags),
- unsigned long flags)
+ raw_spin_lock_irq_disable(_T->lock),
+ raw_spin_unlock_irq_enable(_T->lock))
DEFINE_LOCK_GUARD_1_COND(raw_spinlock_irqsave, _try,
- raw_spin_trylock_irqsave(_T->lock, _T->flags))
+ raw_spin_trylock_irq_disable(_T->lock))
DEFINE_LOCK_GUARD_1(spinlock, spinlock_t,
spin_lock(_T->lock),
@@ -594,11 +593,11 @@ DEFINE_LOCK_GUARD_1(spinlock, spinlock_t,
DEFINE_LOCK_GUARD_1_COND(spinlock, _try, spin_trylock(_T->lock))
DEFINE_LOCK_GUARD_1(spinlock_irq, spinlock_t,
- spin_lock_irq(_T->lock),
- spin_unlock_irq(_T->lock))
+ spin_lock_irq_disable(_T->lock),
+ spin_unlock_irq_enable(_T->lock))
DEFINE_LOCK_GUARD_1_COND(spinlock_irq, _try,
- spin_trylock_irq(_T->lock))
+ spin_trylock_irq_disable(_T->lock))
DEFINE_LOCK_GUARD_1(spinlock_bh, spinlock_t,
spin_lock_bh(_T->lock),
@@ -608,12 +607,11 @@ DEFINE_LOCK_GUARD_1_COND(spinlock_bh, _try,
spin_trylock_bh(_T->lock))
DEFINE_LOCK_GUARD_1(spinlock_irqsave, spinlock_t,
- spin_lock_irqsave(_T->lock, _T->flags),
- spin_unlock_irqrestore(_T->lock, _T->flags),
- unsigned long flags)
+ spin_lock_irq_disable(_T->lock),
+ spin_unlock_irq_enable(_T->lock))
DEFINE_LOCK_GUARD_1_COND(spinlock_irqsave, _try,
- spin_trylock_irqsave(_T->lock, _T->flags))
+ spin_trylock_irq_disable(_T->lock))
DEFINE_LOCK_GUARD_1(read_lock, rwlock_t,
read_lock(_T->lock),
--
2.51.1