Message-ID: <20230913155005.GA26252@redhat.com>
Date: Wed, 13 Sep 2023 17:50:05 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Boqun Feng <boqun.feng@...il.com>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Rik van Riel <riel@...riel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Waiman Long <longman@...hat.com>, Will Deacon <will@...nel.org>
Cc: Alexey Gladkov <legion@...nel.org>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
linux-kernel@...r.kernel.org
Subject: [PATCH 4/5] seqlock: introduce read_seqcount_begin_or_lock() and
friends
See the comment in the patch.
NOTE: currently __seqprop_##lockname##_sequence() takes and drops s->lock
if preemptible && CONFIG_PREEMPT_RT. With the previous changes it is simple
to change this behaviour for the read_seqcount_begin_or_lock() case; IIUC
it makes more sense to return with s->lock held and "(seq & 1) == 1".
Signed-off-by: Oleg Nesterov <oleg@...hat.com>
---
include/linux/seqlock.h | 50 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 50 insertions(+)
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 9831683a0102..503813b3bab6 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -1239,4 +1239,54 @@ done_seqretry_irqrestore(seqlock_t *lock, int seq, unsigned long flags)
if (seq & 1)
read_sequnlock_excl_irqrestore(lock, flags);
}
+
+/*
+ * Like read_seqbegin_or_lock/need_seqretry/done_seqretry above
+ * but for seqcount_LOCKNAME_t.
+ */
+
+#define read_seqcount_begin_or_lock(s, lock, seq) \
+do { \
+ if (!(*(seq) & 1)) \
+ *(seq) = read_seqcount_begin(s); \
+ else \
+ seqprop_lock((s), (lock)); \
+} while (0)
+
+#define need_seqcount_retry(s, seq) \
+({ \
+ !((seq) & 1) && read_seqcount_retry((s), (seq)); \
+})
+
+#define done_seqcount_retry(s, lock, seq) \
+do { \
+ if ((seq) & 1) \
+ seqprop_unlock((s), (lock)); \
+} while (0)
+
+
+#define read_seqcount_begin_or_lock_irqsave(s, lock, seq) \
+({ \
+ unsigned long flags = 0; \
+ \
+ if (!(*(seq) & 1)) \
+ *(seq) = read_seqcount_begin(s); \
+ else { \
+ if (!IS_ENABLED(CONFIG_PREEMPT_RT)) \
+ local_irq_save(flags); \
+ seqprop_lock((s), (lock)); \
+ } \
+ \
+ flags; \
+})
+
+#define done_seqcount_retry_irqrestore(s, lock, seq, flags) \
+do { \
+ if ((seq) & 1) { \
+ seqprop_unlock((s), (lock)); \
+ if (!IS_ENABLED(CONFIG_PREEMPT_RT)) \
+ local_irq_restore((flags)); \
+ } \
+} while (0)
+
#endif /* __LINUX_SEQLOCK_H */
--
2.25.1.362.g51ebf55