Message-ID: <c545705f-ee7e-4442-ebfc-64a3baca2836@marcan.st>
Date: Tue, 16 Aug 2022 23:30:45 +0900
From: Hector Martin <marcan@...can.st>
To: Will Deacon <will@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Arnd Bergmann <arnd@...db.de>, Ingo Molnar <mingo@...nel.org>,
Alan Stern <stern@...land.harvard.edu>,
Andrea Parri <parri.andrea@...il.com>,
Boqun Feng <boqun.feng@...il.com>,
Nicholas Piggin <npiggin@...il.com>,
David Howells <dhowells@...hat.com>,
Jade Alglave <j.alglave@....ac.uk>,
Luc Maranget <luc.maranget@...ia.fr>,
"Paul E. McKenney" <paulmck@...nel.org>,
Akira Yokosawa <akiyks@...il.com>,
Daniel Lustig <dlustig@...dia.com>,
Joel Fernandes <joel@...lfernandes.org>,
Mark Rutland <mark.rutland@....com>,
Jonathan Corbet <corbet@....net>, Tejun Heo <tj@...nel.org>,
jirislaby@...nel.org, Marc Zyngier <maz@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Oliver Neukum <oneukum@...e.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
linux-doc@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
Asahi Linux <asahi@...ts.linux.dev>, stable@...r.kernel.org
Subject: Re: [PATCH] locking/atomic: Make test_and_*_bit() ordered on failure
On 16/08/2022 23.04, Will Deacon wrote:
>> diff --git a/Documentation/atomic_bitops.txt b/Documentation/atomic_bitops.txt
>> index 093cdaefdb37..d8b101c97031 100644
>> --- a/Documentation/atomic_bitops.txt
>> +++ b/Documentation/atomic_bitops.txt
>> @@ -59,7 +59,7 @@ Like with atomic_t, the rule of thumb is:
>> - RMW operations that have a return value are fully ordered.
>>
>> - RMW operations that are conditional are unordered on FAILURE,
>> - otherwise the above rules apply. In the case of test_and_{}_bit() operations,
>> + otherwise the above rules apply. In the case of test_and_set_bit_lock(),
>> if the bit in memory is unchanged by the operation then it is deemed to have
>> failed.
>
> The next sentence is:
>
> | Except for a successful test_and_set_bit_lock() which has ACQUIRE
> | semantics and clear_bit_unlock() which has RELEASE semantics.
>
> so I think it reads a bit strangely now. How about something like:
>
>
> diff --git a/Documentation/atomic_bitops.txt b/Documentation/atomic_bitops.txt
> index 093cdaefdb37..3b516729ec81 100644
> --- a/Documentation/atomic_bitops.txt
> +++ b/Documentation/atomic_bitops.txt
> @@ -59,12 +59,15 @@ Like with atomic_t, the rule of thumb is:
> - RMW operations that have a return value are fully ordered.
>
> - RMW operations that are conditional are unordered on FAILURE,
> - otherwise the above rules apply. In the case of test_and_{}_bit() operations,
> - if the bit in memory is unchanged by the operation then it is deemed to have
> - failed.
> + otherwise the above rules apply. For the purposes of ordering, the
> + test_and_{}_bit() operations are treated as unconditional.
>
> -Except for a successful test_and_set_bit_lock() which has ACQUIRE semantics and
> -clear_bit_unlock() which has RELEASE semantics.
> +Except for:
> +
> + - test_and_set_bit_lock() which has ACQUIRE semantics on success and is
> + unordered on failure;
> +
> + - clear_bit_unlock() which has RELEASE semantics.
>
> Since a platform only has a single means of achieving atomic operations
> the same barriers as for atomic_t are used, see atomic_t.txt.
Makes sense! I'll send a v2 with that in a couple of days if nothing
else comes up.
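
For context, the lock-variant wording in that hunk boils down to the
usual bit-lock pattern. A minimal sketch of what I mean (made-up names,
not lifted from any real caller):

#include <linux/bitops.h>	/* test_and_set_bit_lock(), clear_bit_unlock() */
#include <asm/processor.h>	/* cpu_relax() */

static unsigned long flags;	/* bit 0 used as a hypothetical lock */

static void hypothetical_lock(void)
{
	/*
	 * A successful test_and_set_bit_lock() has ACQUIRE semantics:
	 * accesses in the critical section cannot be hoisted before it.
	 * A failed attempt (bit already set) is unordered, which is fine
	 * here since we just spin and retry.
	 */
	while (test_and_set_bit_lock(0, &flags))
		cpu_relax();
}

static void hypothetical_unlock(void)
{
	/*
	 * RELEASE semantics: everything done inside the critical section
	 * is visible before the next locker sees the bit clear.
	 */
	clear_bit_unlock(0, &flags);
}

That is exactly the case where the ACQUIRE-on-success / unordered-on-failure
distinction in your wording matters.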
>> diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
>> index 3096f086b5a3..71ab4ba9c25d 100644
>> --- a/include/asm-generic/bitops/atomic.h
>> +++ b/include/asm-generic/bitops/atomic.h
>> @@ -39,9 +39,6 @@ arch_test_and_set_bit(unsigned int nr, volatile unsigned long *p)
>> unsigned long mask = BIT_MASK(nr);
>>
>> p += BIT_WORD(nr);
>> - if (READ_ONCE(*p) & mask)
>> - return 1;
>> -
>> old = arch_atomic_long_fetch_or(mask, (atomic_long_t *)p);
>> return !!(old & mask);
>> }
>> @@ -53,9 +50,6 @@ arch_test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
>> unsigned long mask = BIT_MASK(nr);
>>
>> p += BIT_WORD(nr);
>> - if (!(READ_ONCE(*p) & mask))
>> - return 0;
>> -
>> old = arch_atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
>> return !!(old & mask);
>
> I suppose one sad thing about this is that, on arm64, we could reasonably
> keep the READ_ONCE() path with a DMB LD (R->RW) barrier before the return
> but I don't think we can express that in the Linux memory model so we
> end up in RmW territory every time.
You'd need a barrier *before* the READ_ONCE(), since what we're trying
to prevent is a consumer writing to the value without being able to
observe the writes that happened prior, while this side has read the old
value. A barrier after the READ_ONCE() doesn't do anything, as that read
is the last memory operation in this thread (within the problematic
sequence).
At that point, I'm not sure DMB LD / early read / LSE atomic would be
any faster than just always doing the LSE atomic?
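
To make the shape of that sequence concrete (a made-up sketch, not the
actual workqueue code, but the same producer/consumer pattern):

#include <linux/bitops.h>	/* test_and_set_bit(), test_and_clear_bit() */
#include <linux/compiler.h>	/* READ_ONCE(), WRITE_ONCE() */
#include <linux/printk.h>

static unsigned long flags;	/* bit 0: hypothetical PENDING flag */
static int data;

/* CPU 0: publish data, then try to claim the pending bit. */
static void producer(void)
{
	WRITE_ONCE(data, 1);
	if (test_and_set_bit(0, &flags))
		return;	/* failed: rely on the bit's owner to see data == 1 */
	/* ...otherwise we would go on and kick off the consumer... */
}

/* CPU 1: owner of the pending bit, consumes the data. */
static void consumer(void)
{
	if (test_and_clear_bit(0, &flags))
		pr_info("data = %d\n", READ_ONCE(data));	/* expected: 1 */
}

With the old early-exit, the failed test_and_set_bit() in producer() was
just a plain READ_ONCE() of the word, with nothing ordering it after the
earlier store to data; a barrier placed after that read wouldn't change
that, it has to sit between the data store and the flag read.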
- Hector