Message-Id: <DFTKIA3DYRAV.18HDP8UCNC8NM@garyguo.net>
Date: Tue, 20 Jan 2026 16:47:00 +0000
From: "Gary Guo" <gary@...yguo.net>
To: "Marco Elver" <elver@...gle.com>, "Boqun Feng" <boqun.feng@...il.com>
Cc: <linux-kernel@...r.kernel.org>, <rust-for-linux@...r.kernel.org>,
<linux-fsdevel@...r.kernel.org>, <kasan-dev@...glegroups.com>, "Will
Deacon" <will@...nel.org>, "Peter Zijlstra" <peterz@...radead.org>, "Mark
Rutland" <mark.rutland@....com>, "Gary Guo" <gary@...yguo.net>, "Miguel
Ojeda" <ojeda@...nel.org>, Björn Roy Baron
<bjorn3_gh@...tonmail.com>, "Benno Lossin" <lossin@...nel.org>, "Andreas
Hindborg" <a.hindborg@...nel.org>, "Alice Ryhl" <aliceryhl@...gle.com>,
"Trevor Gross" <tmgross@...ch.edu>, "Danilo Krummrich" <dakr@...nel.org>,
"Elle Rhumsaa" <elle@...thered-steel.dev>, "Paul E. McKenney"
<paulmck@...nel.org>, "FUJITA Tomonori" <fujita.tomonori@...il.com>
Subject: Re: [PATCH 2/2] rust: sync: atomic: Add atomic operation helpers
over raw pointers
On Tue Jan 20, 2026 at 4:23 PM GMT, Marco Elver wrote:
> On Tue, Jan 20, 2026 at 07:52PM +0800, Boqun Feng wrote:
>> In order to synchronize with C or other external code, atomic operations
>> over raw pointers are needed. Previously there was always
>> `Atomic::from_ptr()` to provide a `&Atomic<T>`, but it is more convenient
>> to have helpers that directly perform atomic operations on raw pointers.
>> Hence a few are added, which are basically `Atomic::from_ptr().op()`
>> wrappers.
>>
>> Note on naming: since `atomic_xchg()` and `atomic_cmpxchg()` would clash
>> with the names of the 32-bit C atomic xchg/cmpxchg, they are simply named
>> `xchg()` and `cmpxchg()`. For `atomic_load()` and `atomic_store()`, the
>> 32-bit C counterparts are `atomic_read()` and `atomic_set()`, so the
>> `atomic_` prefix is kept.
>>
>> Signed-off-by: Boqun Feng <boqun.feng@...il.com>
>> ---
>> rust/kernel/sync/atomic.rs | 104 +++++++++++++++++++++++++++
>> rust/kernel/sync/atomic/predefine.rs | 46 ++++++++++++
>> 2 files changed, 150 insertions(+)
>>
>> diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
>> index d49ee45c6eb7..6c46335bdb8c 100644
>> --- a/rust/kernel/sync/atomic.rs
>> +++ b/rust/kernel/sync/atomic.rs
>> @@ -611,3 +611,107 @@ pub fn cmpxchg<Ordering: ordering::Ordering>(
>> }
>> }
>> }
>> +
>> +/// Atomic load over raw pointers.
>> +///
>> +/// This function provides a short-cut of `Atomic::from_ptr().load(..)`, and can be used to work
>> +/// with C side on synchronizations:
>> +///
>> +/// - `atomic_load(.., Relaxed)` maps to `READ_ONCE()` when using for inter-thread communication.
>> +/// - `atomic_load(.., Acquire)` maps to `smp_load_acquire()`.
>
> I'm late to the party and may have missed some discussion, but it might
> want restating in the documentation and/or commit log:
>
> READ_ONCE is meant to be a dependency-ordering primitive, i.e. be more
> like memory_order_consume than it is memory_order_relaxed. This has, to
> the best of my knowledge, not changed; otherwise lots of kernel code
> would be broken.
In the Rust-side documentation we mention that `Relaxed` always preserves
dependency ordering, so yes, it is closer to `consume` in the C11 model.
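
To illustrate the pattern this is about, here is a rough userspace sketch
(plain std atomics and made-up names like `SLOT` and `read_published`, not
the API in this series): a writer publishes a pointer with a release store,
and the reader's dereference is address-dependent on a relaxed load.

use std::sync::atomic::{AtomicPtr, Ordering};

static SLOT: AtomicPtr<u64> = AtomicPtr::new(std::ptr::null_mut());

fn publish(data: Box<u64>) {
    // Writer: the data is fully initialized before the pointer is
    // published with a release store (the smp_store_release() side).
    SLOT.store(Box::into_raw(data), Ordering::Release);
}

fn read_published() -> Option<u64> {
    // Reader: the dereference below is address-dependent on the loaded
    // pointer. Under the LKMM, READ_ONCE()/`Relaxed` preserves that
    // dependency (consume-like); under the C11/Rust model, `Relaxed`
    // gives no such guarantee and `Acquire` would be needed here.
    let p = SLOT.load(Ordering::Relaxed);
    if p.is_null() {
        None
    } else {
        // SAFETY (sketch-level): the writer fully initialized `*p` before
        // the release store, and nothing frees the allocation here.
        Some(unsafe { *p })
    }
}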
> It is known to be brittle [1]. So the recommendation
> above is unsound; well, it's as unsound as implementing READ_ONCE with a
> volatile load.
Sorry, which part of this is unsound? Do you mean that the dependency ordering
is actually lost when it's not supposed to be? Even so, wouldn't it only be a
problem for the specific users that use `Relaxed` to carry ordering?
Users that use `Relaxed` for things that don't require any ordering would still
be fine?
>
> While Alice's series tried to expose READ_ONCE as-is to the Rust side
> (via volatile), so that Rust inherits the exact same semantics (including
> its implementation flaw), the recommendation above is doubling down on
> the unsoundness by proposing Relaxed to map to READ_ONCE.
>
> [1] https://lpc.events/event/16/contributions/1174/attachments/1108/2121/Status%20Report%20-%20Broken%20Dependency%20Orderings%20in%20the%20Linux%20Kernel.pdf
>
I think this is a longstanding debate on whether we should actually depend on
dependency ordering or just upgrade everything that needs it to acquire. But
this isn't really specific to Rust, and whatever is decided applies globally
to the whole LKMM.
> Furthermore, LTO arm64 promotes READ_ONCE to an acquire (see
> arch/arm64/include/asm/rwonce.h):
>
> /*
> * When building with LTO, there is an increased risk of the compiler
> * converting an address dependency headed by a READ_ONCE() invocation
> * into a control dependency and consequently allowing for harmful
> * reordering by the CPU.
> *
> * Ensure that such transformations are harmless by overriding the generic
> * READ_ONCE() definition with one that provides RCpc acquire semantics
> * when building with LTO.
> */
>
> So for all intents and purposes, the only sound mapping when pairing
> READ_ONCE() with an atomic load on the Rust side is to use Acquire
> ordering.
LLVM handles address dependencies much more sanely than GCC does. For example,
it won't treat two addresses comparing equal as meaning that the pointers can
be interchanged (as their provenance won't match). Currently only address
comparisons against NULL or statics can have an effect on pointer provenance.
Although, last time I asked whether we can rely on this for address
dependencies, I didn't get an affirmative answer -- but I think in practice the
dependency won't be lost (as currently implemented).
Furthermore, Rust code currently does not participate in LTO.
Best,
Gary