Message-ID: <YhddqvNzc5Hz7Ogj@lakrids>
Date: Thu, 24 Feb 2022 10:27:54 +0000
From: Mark Rutland <mark.rutland@....com>
To: Junru Shen <hhusjrsjr@...il.com>
Cc: Will Deacon <will@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Boqun Feng <boqun.feng@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H . Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] atomic: Put the fetching of the old value into the loop
when doing atomic CAS
On Thu, Feb 24, 2022 at 04:24:38PM +0800, Junru Shen wrote:
> Put the acquisition of the expected value inside the loop to prevent
> an infinite loop when it does not match.
I suspect you've found this by inspection, as I don't believe this can
happen. See below.
> Signed-off-by: Junru Shen <hhusjrsjr@...il.com>
> ---
> arch/x86/include/asm/atomic64_64.h | 9 ++++++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
> index 7886d0578..3df04c44c 100644
> --- a/arch/x86/include/asm/atomic64_64.h
> +++ b/arch/x86/include/asm/atomic64_64.h
> @@ -207,9 +207,10 @@ static inline void arch_atomic64_and(s64 i, atomic64_t *v)
>
> static inline s64 arch_atomic64_fetch_and(s64 i, atomic64_t *v)
> {
> - s64 val = arch_atomic64_read(v);
> + s64 val;
>
> do {
> + val = arch_atomic64_read(v);
> } while (!arch_atomic64_try_cmpxchg(v, &val, val & i));
^^^^
See this bit above? ----------------------------'
If arch_atomic64_try_cmpxchg() fails, it writes the value in memory back to
this address, so it has already done the equivalent of arch_atomic64_read(v).
If you're seeing this go wrong, it implies that arch_atomic64_try_cmpxchg() is
being mis-compiled, so please provide an example and the disassembly.
Likewise for the other instances below.
Thanks,
Mark.
> return val;
> }
> @@ -225,9 +226,10 @@ static inline void arch_atomic64_or(s64 i, atomic64_t *v)
>
> static inline s64 arch_atomic64_fetch_or(s64 i, atomic64_t *v)
> {
> - s64 val = arch_atomic64_read(v);
> + s64 val;
>
> do {
> + val = arch_atomic64_read(v);
> } while (!arch_atomic64_try_cmpxchg(v, &val, val | i));
> return val;
> }
> @@ -243,9 +245,10 @@ static inline void arch_atomic64_xor(s64 i, atomic64_t *v)
>
> static inline s64 arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
> {
> - s64 val = arch_atomic64_read(v);
> + s64 val;
>
> do {
> + val = arch_atomic64_read(v);
> } while (!arch_atomic64_try_cmpxchg(v, &val, val ^ i));
> return val;
> }
> --
> 2.30.2
>