Message-Id: <20190726212124.302995288@linutronix.de>
Date: Fri, 26 Jul 2019 23:19:39 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: LKML <linux-kernel@...r.kernel.org>
Cc: x86@...nel.org, Steven Rostedt <rostedt@...dmis.org>,
"Paul E. McKenney" <paulmck@...ux.ibm.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>
Subject: [patch 3/8] locking: Use CONFIG_PREEMPTION

CONFIG_PREEMPTION is selected by CONFIG_PREEMPT and by
CONFIG_PREEMPT_RT. Both PREEMPT and PREEMPT_RT require the same
functionality which today depends on CONFIG_PREEMPT.

Adjust the comments in the locking code.
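
For reference, a minimal Kconfig sketch of that select relationship,
trimmed to the relevant lines (the real entries in
kernel/Kconfig.preempt carry help text and further selects, so treat
this as an illustration, not a quote):

config PREEMPTION
	bool
	select PREEMPT_COUNT

config PREEMPT
	bool "Preemptible Kernel (Low-Latency Desktop)"
	select PREEMPTION

config PREEMPT_RT
	bool "Fully Preemptible Kernel (Real-Time)"
	select PREEMPTION
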
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
---
 include/linux/spinlock.h         |    2 +-
 include/linux/spinlock_api_smp.h |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -214,7 +214,7 @@ static inline void do_raw_spin_unlock(ra
 
 /*
  * Define the various spin_lock methods. Note we define these
- * regardless of whether CONFIG_SMP or CONFIG_PREEMPT are set. The
+ * regardless of whether CONFIG_SMP or CONFIG_PREEMPTION are set. The
  * various methods are defined as nops in the case they are not
  * required.
  */

--- a/include/linux/spinlock_api_smp.h
+++ b/include/linux/spinlock_api_smp.h
@@ -96,7 +96,7 @@ static inline int __raw_spin_trylock(raw
 
 /*
  * If lockdep is enabled then we use the non-preemption spin-ops
- * even on CONFIG_PREEMPT, because lockdep assumes that interrupts are
+ * even on CONFIG_PREEMPTION, because lockdep assumes that interrupts are
  * not re-enabled during lock-acquire (which the preempt-spin-ops do):
  */
 #if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC)
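
For context, a simplified sketch of the "preempt-spin-ops" the
comment above refers to: the lock loops built by BUILD_LOCK_OPS in
kernel/locking/spinlock.c briefly re-enable interrupts between
trylock attempts, which is precisely what lockdep assumes never
happens during lock-acquire. The function name below is made up for
illustration and the body is condensed; only do_raw_spin_trylock(),
local_irq_save()/local_irq_restore() and the preempt accounting
calls are real kernel APIs:

/*
 * Illustrative sketch (kernel context assumed) of a preempt-friendly
 * irqsave lock loop: on contention the CPU re-enables interrupts and
 * preemption before spinning again, so a waiting CPU can still take
 * interrupts and be preempted.
 */
static unsigned long sketch_raw_spin_lock_irqsave(raw_spinlock_t *lock)
{
	unsigned long flags;

	for (;;) {
		preempt_disable();
		local_irq_save(flags);
		if (likely(do_raw_spin_trylock(lock)))
			break;			/* got it, irqs stay off */
		local_irq_restore(flags);	/* re-enable interrupts ... */
		preempt_enable();		/* ... and preemption ... */
		cpu_relax();			/* ... while waiting */
	}
	return flags;
}

Lockdep's IRQ-state tracking cannot express that on/off toggling in
the middle of an acquire, so with CONFIG_DEBUG_LOCK_ALLOC the plain
non-preemption ops are used instead, as the #if above selects.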