Message-ID: <20250708210233.GG477119@noisy.programming.kicks-ass.net>
Date: Tue, 8 Jul 2025 23:02:33 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: mingo@...hat.com, juri.lelli@...hat.com, vincent.guittot@...aro.org,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
vschneid@...hat.com, clm@...a.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 12/12] sched: Add ttwu_queue support for delayed tasks
On Tue, Jul 08, 2025 at 02:44:56PM +0200, Dietmar Eggemann wrote:
> > +	/*
> > +	 * NOTE: unlike the regular try_to_wake_up() path, this runs both
> > +	 * select_task_rq() and ttwu_do_migrate() while holding rq->lock
> > +	 * rather than p->pi_lock.
> > +	 */
> > +	cpu = select_task_rq(p, p->wake_cpu, &wake_flags);
>
> There are 'lockdep_assert_held(&p->pi_lock)'s in select_task_rq() and
> select_task_rq_fair() which should trigger, IMHO? Can they be changed
> the same way as __task_rq_lock()?
It needs a slightly different fix; notably, the reason for these asserts
is the stability of the cpumasks. For that, holding either p->pi_lock or
rq->lock is sufficient.
Something a little like so...
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3557,13 +3557,13 @@ static int select_fallback_rq(int cpu, s
 	return dest_cpu;
 }
 
-/*
- * The caller (fork, wakeup) owns p->pi_lock, ->cpus_ptr is stable.
- */
 static inline
 int select_task_rq(struct task_struct *p, int cpu, int *wake_flags)
 {
-	lockdep_assert_held(&p->pi_lock);
+	/*
+	 * Ensure the sched_setaffinity() state is stable.
+	 */
+	lockdep_assert_sched_held(p);
 
 	if (p->nr_cpus_allowed > 1 && !is_migration_disabled(p)) {
 		cpu = p->sched_class->select_task_rq(p, cpu, *wake_flags);
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8499,7 +8499,7 @@ select_task_rq_fair(struct task_struct *
 	/*
 	 * required for stable ->cpus_allowed
 	 */
-	lockdep_assert_held(&p->pi_lock);
+	lockdep_assert_sched_held(p);
 
 	if (wake_flags & WF_TTWU) {
 		record_wakee(p);
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1500,6 +1500,12 @@ static inline void lockdep_assert_rq_hel
 	lockdep_assert_held(__rq_lockp(rq));
 }
 
+static inline void lockdep_assert_sched_held(struct task_struct *p)
+{
+	lockdep_assert(lockdep_is_held(&p->pi_lock) ||
+		       lockdep_is_held(__rq_lockp(task_rq(p))));
+}
+
 extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
 extern bool raw_spin_rq_trylock(struct rq *rq);
 extern void raw_spin_rq_unlock(struct rq *rq);
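
For illustration only (not part of the patch; rq, rf and flags are the
usual locals and the call sites are sketched from memory), the two
locking contexts the relaxed assert has to accept look roughly like:

	/* 1) regular wakeup: try_to_wake_up() runs under p->pi_lock */
	raw_spin_lock_irqsave(&p->pi_lock, flags);
	cpu = select_task_rq(p, p->wake_cpu, &wake_flags);
	raw_spin_unlock_irqrestore(&p->pi_lock, flags);

	/* 2) delayed wakeup: the new path runs under rq->lock instead,
	 *    per the NOTE quoted above; p->pi_lock is not held */
	rq_lock(rq, &rf);
	cpu = select_task_rq(p, p->wake_cpu, &wake_flags);
	rq_unlock(rq, &rf);

The helper wants lockdep_assert() with an explicit disjunction rather
than two lockdep_assert_held()s because either lock on its own is
enough: affinity changes (set_cpus_allowed_ptr() and friends) take both
locks via task_rq_lock(), so a holder of either one sees a stable
->cpus_ptr.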