Message-ID: <2f4717b3-268f-8db3-e380-4af0a5479901@huaweicloud.com>
Date: Mon, 30 Jan 2023 13:23:28 +0100
From: Jonas Oberhauser <jonas.oberhauser@...weicloud.com>
To: Boqun Feng <boqun.feng@...il.com>,
Peter Zijlstra <peterz@...radead.org>
Cc: Jules Maselbas <jmaselbas@...ray.eu>,
Will Deacon <will@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Arnd Bergmann <arnd@...db.de>, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org,
Alan Stern <stern@...land.harvard.edu>,
Andrea Parri <parri.andrea@...il.com>,
Nicholas Piggin <npiggin@...il.com>,
David Howells <dhowells@...hat.com>,
Jade Alglave <j.alglave@....ac.uk>,
Luc Maranget <luc.maranget@...ia.fr>,
"Paul E. McKenney" <paulmck@...nel.org>,
Akira Yokosawa <akiyks@...il.com>,
Daniel Lustig <dlustig@...dia.com>,
Joel Fernandes <joel@...lfernandes.org>,
Hernan Ponce de Leon <hernan.poncedeleon@...weicloud.com>,
Paul Heidekrüger <paul.heidekrueger@...tum.de>,
Marco Elver <elver@...gle.com>,
Miguel Ojeda <ojeda@...nel.org>,
Alex Gaynor <alex.gaynor@...il.com>,
Wedson Almeida Filho <wedsonaf@...il.com>,
Gary Guo <gary@...yguo.net>,
Björn Roy Baron <bjorn3_gh@...tonmail.com>
Subject: Re: [PATCH] locking/atomic: atomic: Use arch_atomic_{read,set} in
generic atomic ops
On 1/27/2023 11:09 PM, Boqun Feng wrote:
> On Fri, Jan 27, 2023 at 03:34:33PM +0100, Peter Zijlstra wrote:
>>> I also noticed that GCC has some builtin/extension to do such things,
>>> __atomic_OP_fetch and __atomic_fetch_OP, but I do not know if this
>>> can be used in the kernel.
>> On a per-architecture basis only, the C/C++ memory model does not match
>> the Linux Kernel memory model so using the compiler to generate the
>> atomic ops is somewhat tricky and needs architecture audits.
> Hijack this thread a little bit, but while we are at it, do you think it
> makes sense that we have a config option that allows archs to
> implement LKMM atomics via C11 (volatile) atomics? I know there are gaps
> between two memory models, but the option is only for fallback/generic
> implementation so we can put extra barriers/orderings to make things
> guaranteed to work.
>
> It'll be a code version of this document:
>
> https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p0124r7.html
>
> (although I realise there may be a few mistakes in that doc since I
> wasn't familiar with C11 memory model when I wrote part of the doc, but
> these can be fixed)
>
> Another reason I ask is that since Rust is coming, we need to provide
> our LKMM atomics in Rust so that C code and Rust code can talk via the same
> atomic variables, since both sides need to use the same memory model.
> My choices are:
>
> 1. Using FFI to call Linux atomic APIs: not inline therefore not
> efficient.
>
> 2. Implementing Rust LKMM atomics in asm: much more work although
> I'm OK if we have to do it.
>
> 3. Implementing Rust LKMM atomics with standard atomics (i.e. C/C++
> atomics):
>
> * Requires Rust has "volatile" atomics, which is WIP but
> looks promising
>
> * Less efficient compared to choice #2 but more efficient
> compared to choice #1
>
> Ideally, choice #2 is the best option for all architectures, however, if
> we have the generic implementation based on choice #3, for some archs it
> may be good enough.
>
> Thoughts?
Thanks for adding me to the discussion!
One reason not to rely on C11 is that old compilers don't support it,
and there may be application scenarios in which new compilers haven't
been certified.
I don't know whether this is something that affects Linux, but Linux is so
big and versatile that I'd be surprised if it were irrelevant.
Another is that the C11 model is more about atomic locations than atomic
accesses, and there are several places in the kernel where a location is
accessed both atomically and non-atomically. In my opinion this API mismatch
is more severe than the semantic differences, since you don't get any
guarantees about what the layout of C11 atomics is going to be.
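To make the mismatch concrete (a rough sketch with made-up names, not actual
kernel code): in the kernel, atomicity is a property of the access, so the
same plain int can be read with an atomic op in one place and with a plain
load (during init, or under a lock) somewhere else. In C11, atomicity is a
property of the object's type, so every access has to go through the atomic
API and the object's representation is up to the compiler.

#include <stdatomic.h>

/* Kernel-style: the location is a plain int, atomicity is per access.
 * (my_atomic_t / my_atomic_read are stand-ins for atomic_t etc.) */
typedef struct { int counter; } my_atomic_t;

static inline int my_atomic_read(const my_atomic_t *v)
{
        return __atomic_load_n(&v->counter, __ATOMIC_RELAXED);
}

static void my_init(my_atomic_t *v)
{
        v->counter = 0; /* plain, non-atomic access to the same location */
}

/* C11-style: atomicity is part of the type; there is no sanctioned way to
 * access the object non-atomically, and its layout is not guaranteed to
 * match a plain int. */
_Atomic int c11_counter;

static void c11_init(void)
{
        atomic_store_explicit(&c11_counter, 0, memory_order_relaxed);
}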
Perhaps you could instead rely on the compiler builtins? Note that this
may invalidate some progress properties, e.g., ticket locks become
unfair if the increment (for taking a ticket) is implemented with a CAS
loop (because a thread can fail forever to get a ticket if the ticket
counter is contended, and thus starve). There may be some Linux atomics
that don't map to any compiler builtin and need to be implemented with
such CAS loops, potentially leading to such problems.
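Roughly, the two shapes would look like this with the GCC builtins
(made-up names, just a sketch):

/* Direct mapping: on most architectures this becomes a single
 * fetch-and-add (or an LL/SC sequence the hardware keeps fair enough),
 * which is what a ticket lock relies on. */
static inline unsigned int my_fetch_add(unsigned int *p, unsigned int val)
{
        return __atomic_fetch_add(p, val, __ATOMIC_SEQ_CST);
}

/* CAS-loop fallback, as it would look for an operation that has no
 * matching builtin: under contention a thread can lose the race
 * indefinitely, so per-thread progress is no longer guaranteed. */
static inline unsigned int my_fetch_add_cas(unsigned int *p, unsigned int val)
{
        unsigned int old = __atomic_load_n(p, __ATOMIC_RELAXED);

        /* on failure, 'old' is refreshed with the current value */
        while (!__atomic_compare_exchange_n(p, &old, old + val, 0 /* !weak */,
                                            __ATOMIC_SEQ_CST, __ATOMIC_RELAXED))
                ;
        return old;
}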
I'm also curious whether link-time optimization could resolve the inlining
issue.
I think another big question for me is to what extent it makes sense
at all to have shared-memory concurrency between the Rust code and the
C code. It seems all the bad concurrency stuff from the C world would
flow into the Rust world, right?
If you can live without shared Rust & C concurrency, then perhaps you
can get away without using LKMM in Rust at all, and just rely on its
(C11-like) memory model internally and talk to the C code through
safer, synchronous interfaces.
I'm not against having a fallback builtin-based implementation of LKMM,
and I don't think that it really needs architecture audits. What it
needs is some additional compiler barriers and memory barriers, to
ensure that the arguments about dependencies and non-atomic accesses still
hold. E.g., a release store may not just be a "builtin release store" but
may need an additional compiler barrier to keep the compiler from moving
the release store around in program order. And a "full barrier" exchange
may need an mb() in front of the operation to avoid "roach motel" ordering
(i.e., x=1; "full barrier exchange"; y=1 allows y=1 to execute before x=1
with the compiler builtins, as far as I remember). And there may be some
other cases like this.
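Roughly what I have in mind, as a sketch only (made-up names, and I
haven't checked that exactly these barriers are sufficient):

#define my_barrier() __asm__ __volatile__("" ::: "memory") /* like barrier() */

/* Fallback release store: the builtin gives the C11 release semantics;
 * the extra compiler barrier is there to keep the compiler from moving
 * the store around in program order relative to surrounding plain
 * accesses (whether and where it is needed is exactly what would have
 * to be worked out). */
static inline void my_store_release(int *p, int val)
{
        __atomic_store_n(p, val, __ATOMIC_RELEASE);
        my_barrier();
}

/* Fallback "fully ordered" exchange: a SEQ_CST exchange by itself may
 * not act like smp_mb() on both sides, so put a full fence in front. */
static inline int my_xchg(int *p, int val)
{
        __atomic_thread_fence(__ATOMIC_SEQ_CST); /* stands in for mb() */
        return __atomic_exchange_n(p, val, __ATOMIC_SEQ_CST);
}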
But I currently don't see that this implementation would be noticeably
faster than paying the overhead of the missing inlining.
Best wishes, jonas