Message-ID: <CANpmjNMMueMWBKXEyu8Bj69Mdj3CAgnu8DiSW2PY8MWmUap=Vg@mail.gmail.com>
Date: Fri, 30 Jan 2026 13:04:54 +0100
From: Marco Elver <elver@...gle.com>
To: David Laight <david.laight.linux@...il.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Will Deacon <will@...nel.org>,
Ingo Molnar <mingo@...nel.org>, Thomas Gleixner <tglx@...utronix.de>, Boqun Feng <boqun.feng@...il.com>,
Waiman Long <longman@...hat.com>, Bart Van Assche <bvanassche@....org>, llvm@...ts.linux.dev,
Catalin Marinas <catalin.marinas@....com>, Arnd Bergmann <arnd@...db.de>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
kernel test robot <lkp@...el.com>
Subject: Re: [PATCH v2 3/3] arm64, compiler-context-analysis: Permit alias
analysis through __READ_ONCE() with CONFIG_LTO=y
On Thu, 29 Jan 2026 at 12:31, David Laight <david.laight.linux@...il.com> wrote:
>
> On Thu, 29 Jan 2026 01:52:34 +0100
> Marco Elver <elver@...gle.com> wrote:
>
> > When enabling Clang's Context Analysis (aka. Thread Safety Analysis) on
> > kernel/futex/core.o (see Peter's changes at [1]), in arm64 LTO builds we
> > could see:
> >
> > | kernel/futex/core.c:982:1: warning: spinlock 'atomic ? __u.__val : q->lock_ptr' is still held at the end of function [-Wthread-safety-analysis]
> > | 982 | }
> > | | ^
> > | kernel/futex/core.c:976:2: note: spinlock acquired here
> > | 976 | spin_lock(lock_ptr);
> > | | ^
> > | kernel/futex/core.c:982:1: warning: expecting spinlock 'q->lock_ptr' to be held at the end of function [-Wthread-safety-analysis]
> > | 982 | }
> > | | ^
> > | kernel/futex/core.c:966:6: note: spinlock acquired here
> > | 966 | void futex_q_lockptr_lock(struct futex_q *q)
> > | | ^
> > | 2 warnings generated.
> >
> > Where we have:
> >
> > extern void futex_q_lockptr_lock(struct futex_q *q) __acquires(q->lock_ptr);
> > ..
> > void futex_q_lockptr_lock(struct futex_q *q)
> > {
> > spinlock_t *lock_ptr;
> >
> > /*
> > * See futex_unqueue() why lock_ptr can change.
> > */
> > guard(rcu)();
> > retry:
> > >> lock_ptr = READ_ONCE(q->lock_ptr);
> > spin_lock(lock_ptr);
> > ...
> > }
> >
> > The READ_ONCE() above is expanded to arm64's LTO __READ_ONCE(). Here,
> > Clang Thread Safety Analysis's alias analysis resolves 'lock_ptr' to
> > 'atomic ? __u.__val : q->lock_ptr',
>
> Doesn't the previous patch remove that conditional?
> This description should really refer to the code before this patch.
Will word-smith this a bit. But this refers to the state of the code
where the original issue was found, which spawned all this work.
> > and considers this the identity of
> > the context lock given it can't see through the inline assembly;
> > however, we simply want 'q->lock_ptr' as the canonical context lock.
> > While for code generation the compiler simplified to __u.__val for
> > pointers (8 byte case -> atomic), TSA's analysis (a) happens much
> > earlier on the AST, and (b) would be the wrong deduction.
> >
> > Now that we've gotten rid of the 'atomic' ternary comparison, we can
> > return '__u.__val' through a pointer that we initialize with '&x', but
> > then change with a pointer-to-pointer. When READ_ONCE()'ing a context
> > lock pointer, TSA's alias analysis does not invalidate the initial alias
> > when updated through the pointer-to-pointer, and we make it effectively
> > "see through" the __READ_ONCE().
>
> Some of that needs to be a comment in the code.
> I also suspect you've just found a bug in the TSA logic.
Adding a comment. From a soundness POV, yes, it's a bug, but I think
reassigning a pointer via a pointer-to-pointer within the same scope
is pointless in practice, so I'm willing to keep this as a deliberate
escape hatch (we might need to add a test to Clang to capture this
behavior and discuss it if someone wants to change it).