Message-ID: <Z9Ia3AMqFpNj6fUb@thinkpad>
Date: Wed, 12 Mar 2025 19:38:04 -0400
From: Yury Norov <yury.norov@...il.com>
To: Ignacio Encinas <ignacio@...cinas.com>
Cc: Rasmus Villemoes <linux@...musvillemoes.dk>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
linux-kernel-mentees@...ts.linux.dev, skhan@...uxfoundation.org,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] riscv: fix test_and_{set,clear}_bit ordering documentation
On Tue, Mar 11, 2025 at 06:20:22PM +0100, Ignacio Encinas wrote:
> test_and_{set,clear}_bit are fully ordered as specified in
> Documentation/atomic_bitops.txt. Fix incorrect comment stating otherwise.
>
> Note that the implementation itself has been correct since commit
> 9347ce54cd69 ("RISC-V: __test_and_op_bit_ord should be strongly ordered")
> was introduced; only the comment is wrong.
>
> Signed-off-by: Ignacio Encinas <ignacio@...cinas.com>
Applied in bitmap-for-next.
Thanks,
Yury
> ---
> This seems to be a leftover comment from the initial implementation
> which assumed these operations were relaxed.
>
> Documentation/atomic_bitops.txt states:
>
> [...]
> RMW atomic operations with return value:
>
> test_and_{set,clear,change}_bit()
> test_and_set_bit_lock()
> [...]
>
> - RMW operations that have a return value are fully ordered.
>
> Similar comments can be found in
> include/asm-generic/bitops/instrumented-atomic.h,
> include/linux/atomic/atomic-long.h, etc...
> ---
> arch/riscv/include/asm/bitops.h | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/riscv/include/asm/bitops.h b/arch/riscv/include/asm/bitops.h
> index c6bd3d8354a96b4e7bbef0e98a201da412301b57..49a0f48d93df5be4d38fe25b437378467e4ca433 100644
> --- a/arch/riscv/include/asm/bitops.h
> +++ b/arch/riscv/include/asm/bitops.h
> @@ -226,7 +226,7 @@ static __always_inline int variable_fls(unsigned int x)
> * @nr: Bit to set
> * @addr: Address to count from
> *
> - * This operation may be reordered on other architectures than x86.
> + * This is an atomic fully-ordered operation (implied full memory barrier).
> */
> static __always_inline int arch_test_and_set_bit(int nr, volatile unsigned long *addr)
> {
> @@ -238,7 +238,7 @@ static __always_inline int arch_test_and_set_bit(int nr, volatile unsigned long
> * @nr: Bit to clear
> * @addr: Address to count from
> *
> - * This operation can be reordered on other architectures other than x86.
> + * This is an atomic fully-ordered operation (implied full memory barrier).
> */
> static __always_inline int arch_test_and_clear_bit(int nr, volatile unsigned long *addr)
> {
>
> ---
> base-commit: 2014c95afecee3e76ca4a56956a936e23283f05b
> change-id: 20250311-riscv-fix-test-and-set-bit-comment-aa9081a27d61
>
> Best regards,
> --
> Ignacio Encinas <ignacio@...cinas.com>
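For context, the full ordering that the corrected comments describe comes from
issuing the atomic memory operation with both the acquire and release bits set
(.aqrl), which is what commit 9347ce54cd69 switched these helpers to. Below is
a minimal, RV64-only sketch for illustration; the helper name
sketch_test_and_set_bit() and the open-coded word/mask arithmetic are not from
the patch or the kernel, which generates its helpers via the
__test_and_op_bit_ord() macro in arch/riscv/include/asm/bitops.h.

/* Illustrative sketch only, not the kernel implementation (RV64, GCC/Clang). */
static inline int sketch_test_and_set_bit(int nr, volatile unsigned long *addr)
{
	unsigned long mask = 1UL << (nr % (8 * sizeof(unsigned long)));
	volatile unsigned long *word = addr + nr / (8 * sizeof(unsigned long));
	unsigned long old;

	/*
	 * amoor.d with the .aq and .rl bits set performs the atomic
	 * read-modify-write with acquire and release semantics, giving the
	 * "fully ordered" behaviour the fixed comments now document.
	 */
	__asm__ __volatile__ (
		"amoor.d.aqrl %0, %2, %1"
		: "=r" (old), "+A" (*word)
		: "r" (mask)
		: "memory");

	/* Return the previous value of the bit, as test_and_set_bit() does. */
	return (old & mask) != 0;
}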