Message-ID: <20240309010929.1403984-4-seanjc@google.com>
Date: Fri, 8 Mar 2024 17:09:27 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>, Sean Christopherson <seanjc@...gle.com>,
Lai Jiangshan <jiangshanlai@...il.com>, "Paul E. McKenney" <paulmck@...nel.org>,
Josh Triplett <josh@...htriplett.org>
Cc: kvm@...r.kernel.org, rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
Kevin Tian <kevin.tian@...el.com>, Yan Zhao <yan.y.zhao@...el.com>,
Yiwei Zhang <zzyiwei@...gle.com>
Subject: [PATCH 3/5] srcu: Add an API for a memory barrier after SRCU read lock

From: Yan Zhao <yan.y.zhao@...el.com>

To avoid redundant memory barriers, add smp_mb__after_srcu_read_lock() to
pair with smp_mb__after_srcu_read_unlock() for use in paths that need to
emit a memory barrier, but already do srcu_read_lock(), which includes a
full memory barrier. Provide an API, as opposed to having callers
document the behavior via a comment, because the full memory barrier
provided by srcu_read_lock() is an implementation detail that shouldn't
bleed into random subsystems.

KVM will use smp_mb__after_srcu_read_lock() in its VM-Exit path to ensure
a memory barrier is emitted, which is necessary to ensure correctness of
mixed memory types on CPUs that support self-snoop.
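
For reference, a minimal sketch of the intended calling convention (the
function name below is hypothetical and is not the actual KVM VM-Exit
path; only the srcu_read_lock()/smp_mb__after_srcu_read_lock() pairing
is the point):

static void example_vcpu_exit_path(struct kvm *kvm)
{
	int idx;

	idx = srcu_read_lock(&kvm->srcu);

	/*
	 * A no-op on current SRCU implementations, since __srcu_read_lock()
	 * already contains a full smp_mb(), but it documents that the
	 * accesses below must be ordered after the srcu_read_lock() above.
	 */
	smp_mb__after_srcu_read_lock();

	/* ... accesses that rely on the full memory barrier ... */

	srcu_read_unlock(&kvm->srcu, idx);
}
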
Cc: Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <seanjc@...gle.com>
Cc: Kevin Tian <kevin.tian@...el.com>
Signed-off-by: Yan Zhao <yan.y.zhao@...el.com>
[sean: massage changelog]
Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
 include/linux/srcu.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 236610e4a8fa..1cb4527076de 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -343,6 +343,20 @@ static inline void smp_mb__after_srcu_read_unlock(void)
 	/* __srcu_read_unlock has smp_mb() internally so nothing to do here. */
 }
 
+/**
+ * smp_mb__after_srcu_read_lock - ensure full ordering after srcu_read_lock
+ *
+ * Converts the preceding srcu_read_lock into a two-way memory barrier.
+ *
+ * Call this after srcu_read_lock, to guarantee that all memory operations
+ * that occur after smp_mb__after_srcu_read_lock will appear to happen after
+ * the preceding srcu_read_lock.
+ */
+static inline void smp_mb__after_srcu_read_lock(void)
+{
+	/* __srcu_read_lock has smp_mb() internally so nothing to do here. */
+}
+
 DEFINE_LOCK_GUARD_1(srcu, struct srcu_struct,
 	_T->idx = srcu_read_lock(_T->lock),
 	srcu_read_unlock(_T->lock, _T->idx),
--
2.44.0.278.ge034bb2e1d-goog