Message-Id: <20220921132152.1622616-2-longman@redhat.com>
Date:   Wed, 21 Sep 2022 09:21:51 -0400
From:   Waiman Long <longman@...hat.com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
        Boqun Feng <boqun.feng@...il.com>,
        Steven Rostedt <rostedt@...dmis.org>
Cc:     linux-kernel@...r.kernel.org, Waiman Long <longman@...hat.com>
Subject: [PATCH 1/2] locking: Provide a low overhead do_arch_spin_lock() API

Some code paths in the kernel, such as tracing or RCU, want to use a
spinlock without the lock debugging overhead (lockdep, etc.). Provide a
do_arch_spin_lock() API with proper preemption disabling and enabling
but without any debugging or tracing overhead.
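
As an illustration only (not part of this patch), a hypothetical caller
that needs a notrace-safe lock could use the new helpers as sketched
below, replacing an open-coded preempt_disable_notrace() +
arch_spin_lock() pair. The names (trace_lock, trace_events, and the two
functions) are made up for the example:

	/* Hypothetical example: a counter updated from a notrace path. */
	static arch_spinlock_t trace_lock = __ARCH_SPIN_LOCK_UNLOCKED;
	static unsigned long trace_events;

	static void trace_event_count(void)
	{
		do_arch_spin_lock(&trace_lock);   /* preempt_disable_notrace() + arch_spin_lock() */
		trace_events++;
		do_arch_spin_unlock(&trace_lock); /* arch_spin_unlock() + preempt_enable_notrace() */
	}

	static bool trace_event_count_try(void)
	{
		/* On failure, the helper has already re-enabled preemption. */
		if (!do_arch_spin_trylock(&trace_lock))
			return false;
		trace_events++;
		do_arch_spin_unlock(&trace_lock);
		return true;
	}

Note that the trylock helper drops the preemption disable itself when
the lock is not acquired, so only a successful trylock is paired with
do_arch_spin_unlock().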

Signed-off-by: Waiman Long <longman@...hat.com>
---
 include/linux/spinlock.h | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 5c0c5174155d..535ef0d5bb80 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -119,6 +119,33 @@ do {									\
 #define raw_spin_is_contended(lock)	(((void)(lock), 0))
 #endif /*arch_spin_is_contended*/
 
+/*
+ * Provide a set of do_arch_spin*() APIs to make use of arch_spinlock_t
+ * with proper preemption disabling & enabling without any debugging and
+ * tracing overhead. Users of arch_spinlock_t should use this set of APIs
+ * unless they are sure that either preemption or irqs have been disabled.
+ */
+static __always_inline void do_arch_spin_lock(arch_spinlock_t *lock)
+{
+	preempt_disable_notrace();
+	arch_spin_lock(lock);
+}
+
+static __always_inline int do_arch_spin_trylock(arch_spinlock_t *lock)
+{
+	preempt_disable_notrace();
+	if (arch_spin_trylock(lock))
+		return 1;
+	preempt_enable_notrace();
+	return 0;
+}
+
+static __always_inline void do_arch_spin_unlock(arch_spinlock_t *lock)
+{
+	arch_spin_unlock(lock);
+	preempt_enable_notrace();
+}
+
 /*
  * smp_mb__after_spinlock() provides the equivalent of a full memory barrier
  * between program-order earlier lock acquisitions and program-order later
-- 
2.31.1
