Message-ID: <08423acb-1f8e-407a-80fe-c424c6ee2861@efficios.com>
Date: Fri, 27 Sep 2024 10:43:59 -0400
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Boqun Feng <boqun.feng@...il.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Jonas Oberhauser <jonas.oberhauser@...weicloud.com>,
linux-kernel@...r.kernel.org, rcu@...r.kernel.org, linux-mm@...ck.org,
lkmm@...ts.linux.dev, "Paul E. McKenney" <paulmck@...nel.org>,
Frederic Weisbecker <frederic@...nel.org>,
Neeraj Upadhyay <neeraj.upadhyay@...nel.org>,
Joel Fernandes <joel@...lfernandes.org>,
Josh Triplett <josh@...htriplett.org>,
"Uladzislau Rezki (Sony)" <urezki@...il.com>, rostedt <rostedt@...dmis.org>,
Lai Jiangshan <jiangshanlai@...il.com>, Zqiang <qiang.zhang1211@...il.com>,
Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Will Deacon <will@...nel.org>, Waiman Long <longman@...hat.com>,
Mark Rutland <mark.rutland@....com>, Thomas Gleixner <tglx@...utronix.de>,
Kent Overstreet <kent.overstreet@...il.com>, Vlastimil Babka
<vbabka@...e.cz>, maged.michael@...il.com,
Neeraj Upadhyay <neeraj.upadhyay@....com>
Subject: Re: [RFC PATCH 1/4] hazptr: Add initial implementation of hazard
pointers
On 2024-09-27 12:59, Mathieu Desnoyers wrote:
> On 2024-09-27 06:28, Boqun Feng wrote:
[...]
>> I replaced ADDRESS_EQ(a, b) with ADDRESS_EQ(b, a), and the compile
>> result shows it can prevent the issue:
>
> I see, yes. It prevents the issue by making the compiler create
> a copy of the value "modified" by the asm before doing the equality
> comparison.
>
> This means the compiler cannot derive the value of b from the first
> load when b is used after the equality comparison.
>
> The only downside of OPTIMIZER_HIDE_VAR() is that it adds an extra
> "mov" instruction to move the content across registers. I don't think
> it matters performance wise though, so that solution is appealing
> because it is arch-agnostic.
>
> One small improvement over your proposed solution would be to apply
> OPTIMIZER_HIDE_VAR() on both inputs. Because this is not a volatile
> asm, it is simply optimized away if var1 or var2 is unused following
> the equality comparison. It is more convenient to prevent each of the
> two compared addresses from being replaced by the other than to
> provide that guarantee for only a single parameter:
Actually, your approach is better (only preserving the address
dependency on the first parameter), because it allows the second
parameter to be a constant.
Here is a diff. Please let me know if I need to improve anything wrt
comments or implementation:
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 2df665fa2964..52434eccd715 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -186,6 +186,32 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
__asm__ ("" : "=r" (var) : "0" (var))
#endif
+/*
+ * Compare an address with an expression while preserving the address
+ * dependencies for later use of the address. It should be used when
+ * comparing an address returned by rcu_dereference() with another
+ * address (either constant or in registers).
+ *
+ * This is needed to prevent the compiler SSA GVN optimization pass from
+ * replacing the register holding @addr by @expr (either a constant or a
+ * register) based on their equality, which does not preserve address
+ * dependencies and allows the following misordering speculations:
+ *
+ * - If @expr is a constant, the compiler can issue the loads which depend
+ * on @addr before the load of @addr.
+ * - If @expr is a register populated by a prior load, weakly-ordered
+ * CPUs can speculate loads which depend on @addr before the load of the
+ * address they depend on.
+ */
+#ifndef ADDRESS_EQ
+#define ADDRESS_EQ(addr, expr) \
+ ({ \
+ bool __res = (addr) == (expr); \
+ OPTIMIZER_HIDE_VAR(addr); \
+ __res; \
+ })
+#endif
+
#define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
/**
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com