Message-Id: <20210413024703.2745636-1-ying.huang@intel.com>
Date: Tue, 13 Apr 2021 10:47:03 +0800
From: Huang Ying <ying.huang@...el.com>
To: linux-kernel@...r.kernel.org
Cc: Huang Ying <ying.huang@...el.com>, Tejun Heo <tj@...nel.org>,
Kent Overstreet <kent.overstreet@...il.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Roman Gushchin <guro@...com>, Ming Lei <ming.lei@...hat.com>,
Al Viro <viro@...iv.linux.org.uk>,
Miaohe Lin <linmiaohe@...wei.com>
Subject: [RFC PATCH] percpu_ref: Make percpu_ref_tryget*() ACQUIRE operations

One typical use case of the percpu_ref_tryget() family of functions is
as follows,

  if (percpu_ref_tryget(&p->ref)) {
          /* Operate on the other fields of *p */
  }

The refcount needs to be checked before operating on the other fields
of the data structure (*p); otherwise, the values read from those
fields may be invalid or inconsistent.  To guarantee the correct memory
ordering, percpu_ref_tryget*() needs to be an ACQUIRE operation.

This patch implements that by using smp_load_acquire() in
__ref_is_percpu() to read the percpu pointer.
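
To make the requirement concrete, below is a minimal, hypothetical
sketch of a reader that relies on the tryget being an ACQUIRE
operation.  struct foo, its data field and foo_read() are invented for
illustration and are not taken from any existing user; only
percpu_ref_tryget()/percpu_ref_put() are the real API.

  #include <linux/percpu-refcount.h>
  #include <linux/errno.h>

  struct foo {
          struct percpu_ref ref;
          int data;       /* written before readers are allowed to tryget */
  };

  /* Hypothetical reader: touch p->data only while holding a reference. */
  static int foo_read(struct foo *p)
  {
          int val = -EAGAIN;

          if (percpu_ref_tryget(&p->ref)) {
                  /*
                   * If the tryget were not an ACQUIRE operation, this
                   * load could be reordered before the refcount check
                   * and observe a stale or half-initialized value.
                   */
                  val = p->data;
                  percpu_ref_put(&p->ref);
          }
          return val;
  }

With the smp_load_acquire() used below, the load of p->data appears to
happen after the load of ref->percpu_count_ptr in __ref_is_percpu(),
which is what the documentation added to the tryget functions
describes.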
Signed-off-by: "Huang, Ying" <ying.huang@...el.com>
Cc: Tejun Heo <tj@...nel.org>
Cc: Kent Overstreet <kent.overstreet@...il.com>
Cc: "Paul E. McKenney" <paulmck@...nel.org>
Cc: Roman Gushchin <guro@...com>
Cc: Ming Lei <ming.lei@...hat.com>
Cc: Al Viro <viro@...iv.linux.org.uk>
Cc: Miaohe Lin <linmiaohe@...wei.com>
---
include/linux/percpu-refcount.h | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 16c35a728b4c..9838f7ea4bf1 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -165,13 +165,13 @@ static inline bool __ref_is_percpu(struct percpu_ref *ref,
         * !__PERCPU_REF_ATOMIC, which may be set asynchronously, and then
         * used as a pointer.  If the compiler generates a separate fetch
         * when using it as a pointer, __PERCPU_REF_ATOMIC may be set in
-        * between contaminating the pointer value, meaning that
-        * READ_ONCE() is required when fetching it.
+        * between contaminating the pointer value; smp_load_acquire()
+        * will prevent this.
         *
-        * The dependency ordering from the READ_ONCE() pairs
+        * The ACQUIRE ordering from the smp_load_acquire() pairs
         * with smp_store_release() in __percpu_ref_switch_to_percpu().
         */
-       percpu_ptr = READ_ONCE(ref->percpu_count_ptr);
+       percpu_ptr = smp_load_acquire(&ref->percpu_count_ptr);
 
        /*
         * Theoretically, the following could test just ATOMIC; however,
@@ -231,6 +231,9 @@ static inline void percpu_ref_get(struct percpu_ref *ref)
  * Returns %true on success; %false on failure.
  *
  * This function is safe to call as long as @ref is between init and exit.
+ *
+ * This function is an ACQUIRE operation, that is, all memory operations
+ * after it will appear to happen after the refcount check.
  */
 static inline bool percpu_ref_tryget_many(struct percpu_ref *ref,
                                           unsigned long nr)
@@ -260,6 +263,9 @@ static inline bool percpu_ref_tryget_many(struct percpu_ref *ref,
  * Returns %true on success; %false on failure.
  *
  * This function is safe to call as long as @ref is between init and exit.
+ *
+ * This function is an ACQUIRE operation, that is, all memory operations
+ * after it will appear to happen after the refcount check.
  */
 static inline bool percpu_ref_tryget(struct percpu_ref *ref)
 {
@@ -280,6 +286,9 @@ static inline bool percpu_ref_tryget(struct percpu_ref *ref)
  * percpu_ref_tryget_live().
  *
  * This function is safe to call as long as @ref is between init and exit.
+ *
+ * This function is an ACQUIRE operation, that is, all memory operations
+ * after it will appear to happen after the refcount check.
  */
 static inline bool percpu_ref_tryget_live(struct percpu_ref *ref)
 {
--
2.30.2