Message-ID: <CANDhNCoQZaW3g7wyMnjE9gdC64tSYAnyTyP9zRoSAAJmM31+HQ@mail.gmail.com>
Date: Fri, 19 Sep 2025 11:34:21 -0700
From: John Stultz <jstultz@...gle.com>
To: K Prateek Nayak <kprateek.nayak@....com>
Cc: LKML <linux-kernel@...r.kernel.org>, Joel Fernandes <joelagnelf@...dia.com>,
Qais Yousef <qyousef@...alina.io>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>, Dietmar Eggemann <dietmar.eggemann@....com>,
Valentin Schneider <vschneid@...hat.com>, Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Zimuzo Ezeozue <zezeozue@...gle.com>, Mel Gorman <mgorman@...e.de>,
Will Deacon <will@...nel.org>, Waiman Long <longman@...hat.com>, Boqun Feng <boqun.feng@...il.com>,
"Paul E. McKenney" <paulmck@...nel.org>, Metin Kaya <Metin.Kaya@....com>,
Xuewen Yan <xuewen.yan94@...il.com>, Thomas Gleixner <tglx@...utronix.de>,
Daniel Lezcano <daniel.lezcano@...aro.org>, Suleiman Souhlal <suleiman@...gle.com>,
kuyo chang <kuyo.chang@...iatek.com>, hupu <hupu.gm@...il.com>, kernel-team@...roid.com
Subject: Re: [RESEND][PATCH v21 3/6] sched: Add logic to zap balance callbacks if we pick again

On Mon, Sep 15, 2025 at 1:32 AM K Prateek Nayak <kprateek.nayak@....com> wrote:
> On 9/4/2025 5:51 AM, John Stultz wrote:
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index e0007660161fa..01bf5ef8d9fcc 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -5001,6 +5001,40 @@ static inline void finish_task(struct task_struct *prev)
> > smp_store_release(&prev->on_cpu, 0);
> > }
> >
> > +#if defined(CONFIG_SCHED_PROXY_EXEC)
>
> nit. This can be an "#ifdef CONFIG_SCHED_PROXY_EXEC" now.
Ah, yes. This is left over from when it checked for both PROXY_EXEC
&& CONFIG_SMP. I'll be sure to clean that up.
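So with the SMP-only bits gone, it's just the one-line preprocessor
change:

-#if defined(CONFIG_SCHED_PROXY_EXEC)
+#ifdef CONFIG_SCHED_PROXY_EXEC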
> > +#else
> > +static inline void zap_balance_callbacks(struct rq *rq)
> > +{
> > +}
>
> nit.
>
> This can perhaps be reduced to a single line in light of Thomas' recent
> work to condense the stubs elsewhere:
> https://lore.kernel.org/lkml/20250908212925.389031537@linutronix.de/
Ah, if folks are ok with that, I'd prefer it as well! Thanks for the
suggestion! I'll try to work that in throughout the series.
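If I'm reading Thomas' series right, the !PROXY_EXEC stub would then
collapse to a single line, something like:

static inline void zap_balance_callbacks(struct rq *rq) { }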
> > +#endif
> > +
> > static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
> > {
> > void (*func)(struct rq *rq);
> > @@ -6941,8 +6975,11 @@ static void __sched notrace __schedule(int sched_mode)
> > rq_set_donor(rq, next);
> > if (unlikely(task_is_blocked(next))) {
> > next = find_proxy_task(rq, next, &rf);
> > - if (!next)
> > + if (!next) {
> > + /* zap the balance_callbacks before picking again */
> > + zap_balance_callbacks(rq);
> > goto pick_again;
> > + }
> > if (next == rq->idle)
> > goto keep_resched;
>
> Should we zap the callbacks if we are planning to go through schedule()
> again via rq->idle since it essentially voids the last pick too?
Hrm. I don't think it's strictly necessary, because the queued
callback will still run as part of finish_task_switch() when we switch
briefly to idle, so we don't end up with stale callbacks in the next
pick_next_task().
But zapping them there would avoid running them spuriously. I'll give
that a shot and see how it affects things.
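Something like this (untested sketch, just to show where the zap
would go):

		if (next == rq->idle) {
			/* zap callbacks before briefly switching to idle */
			zap_balance_callbacks(rq);
			goto keep_resched;
		}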
Thanks again for all the suggestions!
-john