Message-ID: <173313805013.412.5462710038573020957.tip-bot2@tip-bot2>
Date: Mon, 02 Dec 2024 11:14:10 -0000
From: "tip-bot2 for Suren Baghdasaryan" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Peter Zijlstra <peterz@...radead.org>,
Suren Baghdasaryan <surenb@...gle.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: perf/core] mm: introduce mmap_lock_speculate_{try_begin|retry}

The following commit has been merged into the perf/core branch of tip:

Commit-ID: 03a001b156d2da186a5618de242750d06bf81e2d
Gitweb: https://git.kernel.org/tip/03a001b156d2da186a5618de242750d06bf81e2d
Author: Suren Baghdasaryan <surenb@...gle.com>
AuthorDate: Fri, 22 Nov 2024 09:44:16 -08:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Mon, 02 Dec 2024 12:01:38 +01:00

mm: introduce mmap_lock_speculate_{try_begin|retry}

Add helper functions to speculatively perform operations without
read-locking mmap_lock, expecting that mmap_lock will not be
write-locked and mm is not modified from under us.
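
[ Not part of the patch: a minimal usage sketch of the intended
  read-side pattern. foo_lookup(), do_foo_speculative() and
  do_foo_locked() are hypothetical placeholders, not kernel APIs. ]

/*
 * Illustrative sketch only: pair the two new helpers with a locked
 * fallback. The speculative attempt is abandoned (and the work redone
 * under mmap_read_lock) if a writer was active or raced with us.
 */
static int foo_lookup(struct mm_struct *mm, unsigned long addr)
{
	unsigned int seq;
	int ret;

	/* Fails immediately if mmap_lock is already write-locked. */
	if (mmap_lock_speculate_try_begin(mm, &seq)) {
		ret = do_foo_speculative(mm, addr);
		/* retry == true means a writer raced us; discard the result. */
		if (!mmap_lock_speculate_retry(mm, seq))
			return ret;
	}

	/* Slow path: take mmap_lock for reading and redo the work. */
	mmap_read_lock(mm);
	ret = do_foo_locked(mm, addr);
	mmap_read_unlock(mm);
	return ret;
}

[ The try_begin/retry pair follows the usual seqcount read-side
  pattern; the difference is that an already write-locked mmap_lock
  fails the speculation up front and sends the caller straight to the
  locked slow path. ]
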
Suggested-by: Peter Zijlstra <peterz@...radead.org>
Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Liam R. Howlett <Liam.Howlett@...cle.com>
Link: https://lkml.kernel.org/r/20241122174416.1367052-3-surenb@google.com
---
 include/linux/mmap_lock.h | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 9715326..45a21fa 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -71,6 +71,7 @@ static inline void mmap_assert_write_locked(const struct mm_struct *mm)
 }
 
 #ifdef CONFIG_PER_VMA_LOCK
+
 static inline void mm_lock_seqcount_init(struct mm_struct *mm)
 {
 	seqcount_init(&mm->mm_lock_seq);
@@ -87,11 +88,39 @@ static inline void mm_lock_seqcount_end(struct mm_struct *mm)
 	do_raw_write_seqcount_end(&mm->mm_lock_seq);
 }
 
-#else
+static inline bool mmap_lock_speculate_try_begin(struct mm_struct *mm, unsigned int *seq)
+{
+	/*
+	 * Since mmap_lock is a sleeping lock, and waiting for it to become
+	 * unlocked is more or less equivalent with taking it ourselves, don't
+	 * bother with the speculative path if mmap_lock is already write-locked
+	 * and take the slow path, which takes the lock.
+	 */
+	return raw_seqcount_try_begin(&mm->mm_lock_seq, *seq);
+}
+
+static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int seq)
+{
+	return read_seqcount_retry(&mm->mm_lock_seq, seq);
+}
+
+#else /* CONFIG_PER_VMA_LOCK */
+
 static inline void mm_lock_seqcount_init(struct mm_struct *mm) {}
 static inline void mm_lock_seqcount_begin(struct mm_struct *mm) {}
 static inline void mm_lock_seqcount_end(struct mm_struct *mm) {}
-#endif
+
+static inline bool mmap_lock_speculate_try_begin(struct mm_struct *mm, unsigned int *seq)
+{
+	return false;
+}
+
+static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int seq)
+{
+	return true;
+}
+
+#endif /* CONFIG_PER_VMA_LOCK */
 
 static inline void mmap_init_lock(struct mm_struct *mm)
 {