Message-ID: <20250417111841.GL38216@noisy.programming.kicks-ass.net>
Date: Thu, 17 Apr 2025 13:18:41 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: John Stultz <jstultz@...gle.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Joel Fernandes <joelagnelf@...dia.com>,
Qais Yousef <qyousef@...alina.io>, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Valentin Schneider <vschneid@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Zimuzo Ezeozue <zezeozue@...gle.com>, Mel Gorman <mgorman@...e.de>,
Will Deacon <will@...nel.org>, Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Metin Kaya <Metin.Kaya@....com>,
Xuewen Yan <xuewen.yan94@...il.com>,
K Prateek Nayak <kprateek.nayak@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Suleiman Souhlal <suleiman@...gle.com>, kernel-team@...roid.com
Subject: Re: [PATCH v16 5/7] sched: Add an initial sketch of the
find_proxy_task() function
On Fri, Apr 11, 2025 at 11:02:39PM -0700, John Stultz wrote:
> +#ifdef CONFIG_SCHED_PROXY_EXEC
> +static inline struct task_struct *proxy_resched_idle(struct rq *rq)
> +{
> +	put_prev_set_next_task(rq, rq->donor, rq->idle);
> +	rq_set_donor(rq, rq->idle);
> +	set_tsk_need_resched(rq->idle);
> +	return rq->idle;
> +}
> +
> +static bool __proxy_deactivate(struct rq *rq, struct task_struct *donor)
> +{
> +	unsigned long state = READ_ONCE(donor->__state);
> +
> +	/* Don't deactivate if the state has been changed to TASK_RUNNING */
> +	if (state == TASK_RUNNING)
> +		return false;
> +	/*
> +	 * Because we got donor from pick_next_task, it is *crucial*
pick_next_task()
> +	 * that we call proxy_resched_idle before we deactivate it.
proxy_resched_idle()
> +	 * As once we deactivate donor, donor->on_rq is set to zero,
> +	 * which allows ttwu to immediately try to wake the task on
ttwu()
> +	 * another rq. So we cannot use *any* references to donor
> +	 * after that point. So things like cfs_rq->curr or rq->donor
> +	 * need to be changed from next *before* we deactivate.
> +	 */
> +	proxy_resched_idle(rq);
> +	return try_to_block_task(rq, donor, state, true);
> +}
> +
> +static struct task_struct *proxy_deactivate(struct rq *rq, struct task_struct *donor)
> +{
> +	if (!__proxy_deactivate(rq, donor)) {
> +		/*
> +		 * XXX: For now, if deactivation failed, set donor
> +		 * as unblocked, as we aren't doing proxy-migrations
> +		 * yet (more logic will be needed then).
> +		 */
> +		donor->blocked_on = NULL;
> +	}
> +	return NULL;
> +}
> +
> +/*
> + * Initial simple sketch that just deactivates the blocked task
> + * chosen by pick_next_task() so we can then pick something that
> + * isn't blocked.
> + */
> +static struct task_struct *
> +find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
> +{
> +	struct task_struct *p = donor;
> +	struct mutex *mutex;
> +
> +	mutex = p->blocked_on;
> +	/* Something changed in the chain, so pick again */
> +	if (!mutex)
> +		return NULL;
> +	/*
> +	 * By taking mutex->wait_lock we hold off concurrent mutex_unlock()
> +	 * and ensure @owner sticks around.
> +	 */
> +	guard(raw_spinlock)(&mutex->wait_lock);
> +
> +	/* Check again that p is blocked with blocked_lock held */
> +	if (!task_is_blocked(p) || mutex != __get_task_blocked_on(p)) {
> +		/*
> +		 * Something changed in the blocked_on chain and
> +		 * we don't know if only at this level. So, let's
> +		 * just bail out completely and let __schedule
__schedule()
> +		 * figure things out (pick_again loop).
> +		 */
> +		return NULL; /* do pick_next_task again */
pick_next_task()
> +	}
> +	return proxy_deactivate(rq, donor);
I was expecting a for() loop here; this only follows blocked_on once,
right?
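
Something like the completely untested sketch below is what I had in
mind; __mutex_owner() here stands in for whatever helper this series
uses to look up the lock owner, and the owner's lifetime across the
wait_lock drop needs more care than shown:

	struct task_struct *owner, *p;
	struct mutex *mutex;

	/* Walk the blocked_on chain until we find something runnable. */
	for (p = donor; task_is_blocked(p); p = owner) {
		mutex = p->blocked_on;
		/* Something changed in the chain, so pick again */
		if (!mutex)
			return NULL;

		guard(raw_spinlock)(&mutex->wait_lock);

		/* Re-check under wait_lock; bail out if the chain changed */
		if (mutex != __get_task_blocked_on(p))
			return NULL;

		/* Assumed helper; not part of the quoted patch */
		owner = __mutex_owner(mutex);
		if (!owner)
			return p;	/* lock got released; p can run again */

		/* Blocked or remote owner: punt like the current sketch does */
		if (!READ_ONCE(owner->on_rq) || task_cpu(owner) != cpu_of(rq))
			return proxy_deactivate(rq, donor);
	}

	return p;	/* runnable owner to run on behalf of the donor */

That way a whole chain of blocked tasks hands the CPU to the first
runnable owner, instead of only ever deactivating a single level.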
> +}