Message-ID: <aXDPkGmsSKuhFyOS@elver.google.com>
Date: Wed, 21 Jan 2026 14:07:28 +0100
From: Marco Elver <elver@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, bigeasy@...utronix.de, mingo@...nel.org,
tglx@...utronix.de, will@...nel.org, boqun.feng@...il.com,
longman@...hat.com, hch@....de, rostedt@...dmis.org,
bvanassche@....org, llvm@...ts.linux.dev
Subject: Re: [RFC][PATCH 0/4] locking: Add/convert context analysis bits
On Wed, Jan 21, 2026 at 12:07PM +0100, Peter Zijlstra wrote:
> Hai
>
> This is on top of tip/locking/core with these patches on:
>
> https://lkml.kernel.org/r/20260119094029.1344361-1-elver@google.com
>
> and converts mutex, rtmutex, ww_mutex and futex to use the new context analysis
> bits.
>
> There is one snafu:
>
> ww_mutex_set_context_fastpath()'s data_race() usage doesn't stop the compiler
> from complaining when building a defconfig+PREEMPT_RT+LOCKDEP build:
>
> ../kernel/locking/ww_mutex.h:439:24: error: calling function '__ww_mutex_has_waiters' requires holding raw_spinlock 'lock->base.rtmutex.wait_lock' exclusively [-Werror,-Wthread-safety-analysis]
> 439 | if (likely(!data_race(__ww_mutex_has_waiters(&lock->base))))
> | ^
> 1 error generated.
This works:
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 73eed6b7f24e..561e2475954d 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -436,7 +436,8 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
* __ww_mutex_add_waiter() and makes sure we either observe ww->ctx
* and/or !empty list.
*/
- if (likely(!data_race(__ww_mutex_has_waiters(&lock->base))))
+ bool has_waiters = data_race(__ww_mutex_has_waiters(&lock->base));
+ if (likely(!has_waiters))
return;
/*
It appears that the _Pragma directives are ignored when the expression
appears inside __builtin_expect(...). That's a bit inconvenient.
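The hoisting workaround generalizes: move the read that carries the
suppression pragmas into a local variable so that nothing pragma-bearing
is expanded inside the __builtin_expect() argument. A minimal standalone
sketch of the pattern (names and the atomic stand-in are made up here,
not the kernel's):

```c
#include <stdatomic.h>
#include <stdbool.h>

#define likely(x) __builtin_expect(!!(x), 1)
#define MUTEX_FLAG_WAITERS 0x02UL

/* Hypothetical stand-in for lock->owner. */
static atomic_ulong owner;

static bool fastpath_taken(void)
{
	/*
	 * Hoisting the read into a local keeps any _Pragma emitted by
	 * an annotation-suppressing macro (data_race()-style) outside
	 * the __builtin_expect() argument, where it would otherwise be
	 * ignored.
	 */
	bool has_waiters = atomic_load_explicit(&owner, memory_order_relaxed)
			   & MUTEX_FLAG_WAITERS;
	if (likely(!has_waiters))
		return true;	/* fastpath: no waiters observed */
	return false;
}
```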
Another option is this, given it's exclusively used without holding this
lock:
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 73eed6b7f24e..45a9c394fe91 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -71,7 +71,7 @@ __ww_mutex_owner(struct mutex *lock)
}
static inline bool
-__ww_mutex_has_waiters(struct mutex *lock)
+__ww_mutex_has_waiters_lockless(struct mutex *lock)
{
return atomic_long_read(&lock->owner) & MUTEX_FLAG_WAITERS;
}
@@ -151,10 +151,9 @@ __ww_mutex_owner(struct rt_mutex *lock)
}
static inline bool
-__ww_mutex_has_waiters(struct rt_mutex *lock)
- __must_hold(&lock->rtmutex.wait_lock)
+__ww_mutex_has_waiters_lockless(struct rt_mutex *lock)
{
- return rt_mutex_has_waiters(&lock->rtmutex);
+ return data_race(rt_mutex_has_waiters(&lock->rtmutex));
}
static inline void lock_wait_lock(struct rt_mutex *lock, unsigned long *flags)
@@ -436,7 +435,7 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
* __ww_mutex_add_waiter() and makes sure we either observe ww->ctx
* and/or !empty list.
*/
- if (likely(!data_race(__ww_mutex_has_waiters(&lock->base))))
+ if (likely(!__ww_mutex_has_waiters_lockless(&lock->base)))
return;
/*