Message-ID: <87pliv726u.fsf@kernel.org>
Date: Wed, 05 Mar 2025 19:38:33 +0100
From: Andreas Hindborg <a.hindborg@...nel.org>
To: "Ralf Jung" <post@...fj.de>
Cc: "Boqun Feng" <boqun.feng@...il.com>, "comex" <comexk@...il.com>,
"Alice Ryhl" <aliceryhl@...gle.com>, "Daniel Almeida"
<daniel.almeida@...labora.com>, "Benno Lossin" <benno.lossin@...ton.me>,
"Abdiel Janulgue" <abdiel.janulgue@...il.com>, <dakr@...nel.org>,
<robin.murphy@....com>, <rust-for-linux@...r.kernel.org>, "Miguel
Ojeda" <ojeda@...nel.org>, "Alex Gaynor" <alex.gaynor@...il.com>, "Gary
Guo" <gary@...yguo.net>, Björn Roy Baron
<bjorn3_gh@...tonmail.com>,
"Trevor Gross" <tmgross@...ch.edu>, "Valentin Obst"
<kernel@...entinobst.de>, <linux-kernel@...r.kernel.org>, "Christoph
Hellwig" <hch@....de>, "Marek Szyprowski" <m.szyprowski@...sung.com>,
<airlied@...hat.com>, <iommu@...ts.linux.dev>, <lkmm@...ts.linux.dev>
Subject: Re: Allow data races on some read/write operations

"Ralf Jung" <post@...fj.de> writes:
> Hi,
>
> On 05.03.25 04:24, Boqun Feng wrote:
>> On Tue, Mar 04, 2025 at 12:18:28PM -0800, comex wrote:
>>>
>>>> On Mar 4, 2025, at 11:03 AM, Ralf Jung <post@...fj.de> wrote:
>>>>
>>>> Those already exist in Rust, albeit only unstably:
>>>> <https://doc.rust-lang.org/nightly/std/intrinsics/fn.volatile_copy_memory.html>.
>>>> However, I am not sure how you'd even generate such a call in C? The
>>>> standard memcpy function is not doing volatile accesses, to my
>>>> knowledge.
>>>
>>> The actual memcpy symbol that exists at runtime is written in
>>> assembly, and should be valid to treat as performing volatile
>>> accesses.
>
> memcpy is often written in C... and AFAIK compilers understand what that
> function does and will, for instance, happily eliminate the call if they can
> prove that the destination memory is not being read from again. So, it doesn't
> behave like a volatile access at all.
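
To make that concrete, here is a minimal user-space C sketch (the function
names are made up for illustration; this is not kernel code). Both calls are
plain memcpy, but because the compiler understands memcpy's semantics, the
copy whose destination is never read again may be eliminated entirely:

```c
#include <string.h>

int copy_and_use(const int *src)
{
	int dst[4];

	memcpy(dst, src, sizeof(dst));	/* kept: dst is read below */
	return dst[0] + dst[3];
}

void copy_and_drop(const int *src)
{
	int dst[4];

	memcpy(dst, src, sizeof(dst));	/* dead store: the call may be
					 * removed at -O2, since dst is
					 * never read afterwards */
}
```

With optimizations enabled, gcc and clang will typically drop the copy in
copy_and_drop() entirely, which is exactly why memcpy gives none of the
guarantees of a volatile access.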
>
>>> But both GCC and Clang special-case the memcpy function. For example,
>>> if you call memcpy with a small constant as the size, the optimizer
>>> will transform the call into one or more regular loads/stores, which
>>> can then be optimized mostly like any other loads/stores (except for
>>> opting out of alignment and type-based aliasing assumptions). Even if
>>> the call isn’t transformed, the optimizer will still make assumptions.
>>> LLVM will automatically mark memcpy `nosync`, which makes it undefined
>>> behavior if the function “communicate[s] (synchronize[s]) with another
>>> thread”, including through “volatile accesses”. [1]
>
> The question is more, what do clang and GCC document / guarantee in a stable
> way regarding memcpy? I have not seen any indication so far that a memcpy call
> would ever be considered volatile, so we have to treat it like a non-volatile
> non-atomic operation.
>
>>> However, these optimizations should rarely trigger misbehavior in
>>> practice, so I wouldn’t be surprised if Linux had some code that
>>> expected memcpy to act volatile…
>>>
>>
>> Also in this particular case we are discussing [1], it's a memcpy (from
>> or to) a DMA buffer, which means the device can also read or write the
>> memory, therefore the content of the memory may be altered outside the
>> program (the kernel), so we cannot use copy_nonoverlapping() I believe.
>>
>> [1]: https://lore.kernel.org/rust-for-linux/87bjuil15w.fsf@kernel.org/
>
> Is there actually a potential for races (with reads by hardware, not other
> threads) on the memcpy'd memory?

There is another use case for this: copying data to/from a page that is
mapped into user space. In this case, a user space process can
potentially modify the data in the mapped page while we are
reading/writing that data. This would be a misbehaved user space
process, but it should not be able to cause UB in the kernel anyway.
The C kernel just calls memcpy directly for this use case.

For this use case, we do not interpret or make control flow decisions
based on the data we read/write. And _if_ user space decides to do
concurrent writes to the page, we don't care if the data becomes
garbage. We just need the UB to be confined to the data moved from that
page, and not leak into the rest of the kernel.
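
A minimal sketch of that intent, assuming a plain volatile byte loop (the
function name is made up; this is illustrative, not what the kernel's memcpy
actually does). Each load is volatile, so the compiler must perform it and
cannot fold or reorder it based on assumptions about the memory's contents;
a concurrently written byte yields garbage in the destination, but nothing
beyond it:

```c
#include <stddef.h>

/* Copy len bytes out of a page that user space may be writing
 * concurrently. The volatile-qualified source forces one real load per
 * byte; a racing write produces a stale or torn byte in dst, confined
 * to the copied data. (Sketch only -- it does not make the race
 * formally data-race-free at the language level.) */
void copy_from_racy(unsigned char *dst,
		    const volatile unsigned char *src, size_t len)
{
	for (size_t i = 0; i < len; i++)
		dst[i] = src[i];	/* volatile read, ordinary write */
}
```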

Best regards,
Andreas Hindborg