Message-ID: <fbwwitktndx6vpkyhp5znkxmdfpforylvcmimyewel6mett2cw@i5yxaracpso2>
Date: Fri, 28 Feb 2025 10:41:12 -0500
From: Kent Overstreet <kent.overstreet@...ux.dev>
To: Ralf Jung <post@...fj.de>
Cc: David Laight <david.laight.linux@...il.com>,
Steven Rostedt <rostedt@...dmis.org>, Linus Torvalds <torvalds@...ux-foundation.org>,
Martin Uecker <uecker@...raz.at>, "Paul E. McKenney" <paulmck@...nel.org>,
Alice Ryhl <aliceryhl@...gle.com>, Ventura Jack <venturajack85@...il.com>,
Gary Guo <gary@...yguo.net>, airlied@...il.com, boqun.feng@...il.com, ej@...i.de,
gregkh@...uxfoundation.org, hch@...radead.org, hpa@...or.com, ksummit@...ts.linux.dev,
linux-kernel@...r.kernel.org, miguel.ojeda.sandonis@...il.com, rust-for-linux@...r.kernel.org
Subject: Re: C aggregate passing (Rust kernel policy)
On Fri, Feb 28, 2025 at 08:44:58AM +0100, Ralf Jung wrote:
> Hi,
>
> > > I guess you can sum this up as:
> > >
> > > The compiler should never assume it's safe to read a global more than the
> > > code specifies, but if the code reads a global more than once, it's fine
> > > to cache the multiple reads.
> > >
> > > Same for writes, but I find WRITE_ONCE() used less often than READ_ONCE().
> > > And when I do use it, it is more to prevent write tearing as you mentioned.
> >
> > Except that (IIRC) it is actually valid for the compiler to write something
> > entirely unrelated to a memory location before writing the expected value.
> > (e.g. use it instead of the stack for a register spill+reload.)
> > Not that gcc does that - but the standard lets it do it.
>
> Whether the compiler is permitted to do that depends heavily on what exactly
> the code looks like, so it's hard to discuss this in the abstract.
> If inside some function, *all* writes to a given location are atomic (I
> think that's what you call WRITE_ONCE?), then the compiler is *not* allowed
> to invent any new writes to that memory. The compiler has to assume that
> there might be concurrent reads from other threads, whose behavior could
> change from the extra compiler-introduced writes. The spec (in C, C++, and
> Rust) already works like that.
>
> OTOH, the moment you do a single non-atomic write (i.e., a regular "*ptr =
> val;" or memcpy or so), that is a signal to the compiler that there cannot
> be any concurrent accesses happening at the moment, and therefore it can
> (and likely will) introduce extra writes to that memory.
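
If I'm reading you right, the distinction is something like the sketch
below (rough sketch only; WRITE_ONCE() here is the kernel's, which boils
down to a volatile store, and the variable/function names are just for
illustration):

	#include <linux/compiler.h>	/* WRITE_ONCE() */

	int shared;	/* potentially read concurrently by other threads */

	void all_marked(int val)
	{
		/*
		 * Every store to 'shared' in this function is a volatile
		 * (WRITE_ONCE) store, so the compiler may not invent extra
		 * stores to it - a concurrent reader could observe them.
		 */
		WRITE_ONCE(shared, val);
	}

	void plain_store(int val)
	{
		/*
		 * A plain store is taken as a promise that nothing else is
		 * accessing 'shared' right now, so the compiler is free to
		 * (e.g.) spill a temporary into it before the final store.
		 */
		shared = val;
	}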
Is that how it really works?
I'd expect the atomic writes to have what we call "compiler barriers"
before and after; IOW, the compiler can do whatever it wants with non-atomic
writes, provided it doesn't cross those barriers.
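
IOW, something like this (again, just a sketch of that mental model, using
the kernel's barrier(), i.e. an empty asm with a "memory" clobber - not
what WRITE_ONCE() actually expands to):

	#include <linux/compiler.h>	/* barrier() */

	void store_once(int *p, int val)
	{
		barrier();	/* no accesses may be moved across this... */
		*p = val;	/* ...so the plain store is pinned here */
		barrier();	/* ...or across this */
	}

Between any two such barriers the compiler would still be free to cache,
merge or invent plain accesses; it just couldn't move them past the
barriers.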