Message-ID: <20241122174416.1367052-3-surenb@google.com>
Date: Fri, 22 Nov 2024 09:44:16 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: akpm@...ux-foundation.org
Cc: peterz@...radead.org, andrii@...nel.org, jannh@...gle.com,
Liam.Howlett@...cle.com, lorenzo.stoakes@...cle.com, vbabka@...e.cz,
mhocko@...nel.org, shakeel.butt@...ux.dev, hannes@...xchg.org,
david@...hat.com, willy@...radead.org, brauner@...nel.org, oleg@...hat.com,
arnd@...db.de, richard.weiyang@...il.com, zhangpeng.00@...edance.com,
linmiaohe@...wei.com, viro@...iv.linux.org.uk, hca@...ux.ibm.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org, surenb@...gle.com,
"Liam R. Howlett" <Liam.Howlett@...cle.com>
Subject: [PATCH v3 3/3] mm: introduce mmap_lock_speculate_{try_begin|retry}
Add helper functions to speculatively perform operations without
read-locking mmap_lock, expecting that mmap_lock will not be
write-locked and the mm will not be modified out from under us.
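For illustration, the expected calling pattern looks roughly like this
(a minimal sketch; do_speculative_lookup(), do_locked_lookup() and
struct foo are hypothetical placeholders, not part of this patch):

	unsigned int seq;
	struct foo *result = NULL;

	/* Speculative path: read mm state without taking mmap_lock. */
	if (mmap_lock_speculate_try_begin(mm, &seq)) {
		result = do_speculative_lookup(mm);
		/* A concurrent writer invalidated the result; discard it. */
		if (mmap_lock_speculate_retry(mm, seq))
			result = NULL;
	}
	if (!result) {
		/* Slow path: take mmap_lock for reading. */
		mmap_read_lock(mm);
		result = do_locked_lookup(mm);
		mmap_read_unlock(mm);
	}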
Suggested-by: Peter Zijlstra <peterz@...radead.org>
Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@...cle.com>
---
Changes since v2 [1]
- Added SOB, per Liam Howlett
[1] https://lore.kernel.org/all/20241121162826.987947-3-surenb@google.com/
include/linux/mmap_lock.h | 33 +++++++++++++++++++++++++++++++--
1 file changed, 31 insertions(+), 2 deletions(-)
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 9715326f5a85..8ac3041df053 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -71,6 +71,7 @@ static inline void mmap_assert_write_locked(const struct mm_struct *mm)
}
#ifdef CONFIG_PER_VMA_LOCK
+
static inline void mm_lock_seqcount_init(struct mm_struct *mm)
{
seqcount_init(&mm->mm_lock_seq);
@@ -87,11 +88,39 @@ static inline void mm_lock_seqcount_end(struct mm_struct *mm)
do_raw_write_seqcount_end(&mm->mm_lock_seq);
}
-#else
+static inline bool mmap_lock_speculate_try_begin(struct mm_struct *mm, unsigned int *seq)
+{
+	/*
+	 * Since mmap_lock is a sleeping lock, and waiting for it to become
+	 * unlocked is more or less equivalent to taking it ourselves, don't
+	 * bother with the speculative path if mmap_lock is already
+	 * write-locked; instead take the slow path, which takes the lock.
+	 */
+ return raw_seqcount_try_begin(&mm->mm_lock_seq, *seq);
+}
+
+static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int seq)
+{
+ return do_read_seqcount_retry(&mm->mm_lock_seq, seq);
+}
+
+#else /* CONFIG_PER_VMA_LOCK */
+
static inline void mm_lock_seqcount_init(struct mm_struct *mm) {}
static inline void mm_lock_seqcount_begin(struct mm_struct *mm) {}
static inline void mm_lock_seqcount_end(struct mm_struct *mm) {}
-#endif
+
+static inline bool mmap_lock_speculate_try_begin(struct mm_struct *mm, unsigned int *seq)
+{
+ return false;
+}
+
+static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int seq)
+{
+ return true;
+}
+
+#endif /* CONFIG_PER_VMA_LOCK */
static inline void mmap_init_lock(struct mm_struct *mm)
{
--
2.47.0.371.ga323438b13-goog