Message-ID: <20251114094901.GH3245006@noisy.programming.kicks-ass.net>
Date: Fri, 14 Nov 2025 10:49:01 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Shrikanth Hegde <sshegde@...ux.ibm.com>
Cc: vincent.guittot@...aro.org, mingo@...hat.com, juri.lelli@...hat.com,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, vschneid@...hat.com, linux-kernel@...r.kernel.org,
Chris Mason <clm@...a.com>,
Joseph Salisbury <joseph.salisbury@...cle.com>,
Adam Li <adamli@...amperecomputing.com>,
Hazem Mohamed Abuelfotoh <abuehaze@...zon.com>,
Josh Don <joshdon@...gle.com>
Subject: Re: [PATCH 2/4] sched/fair: Small cleanup to sched_balance_newidle()
On Wed, Nov 12, 2025 at 08:58:23PM +0530, Shrikanth Hegde wrote:
> > @@ -12865,6 +12869,8 @@ static int sched_balance_newidle(struct
> > if (!cpu_active(this_cpu))
> > return 0;
> > + __sched_balance_update_blocked_averages(this_rq);
> > +
>
> is this done only when sd == null ?
It's always done; after this patch the call sits right after the
cpu_active() check, before we even look at the domain.
> > /*
> > * This is OK, because current is on_cpu, which avoids it being picked
> > * for load-balance and preemption/IRQs are still disabled avoiding
> > @@ -12891,7 +12897,6 @@ static int sched_balance_newidle(struct
> > raw_spin_rq_unlock(this_rq);
> > t0 = sched_clock_cpu(this_cpu);
> > - sched_balance_update_blocked_averages(this_cpu);
> > rcu_read_lock();
> > for_each_domain(this_cpu, sd) {
>
> Referring to commit,
> 9d783c8dd112a (sched/fair: Skip update_blocked_averages if we are defering load balance)
> I think vincent added the max_newidle_lb_cost check since sched_balance_update_blocked_averages is costly.
That seems to suggest we should only do
sched_balance_update_blocked_averages() when we're actually going to do
balancing, and so skipping it when !sd is fine.
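
Something like the below, perhaps (a rough, untested sketch against the
code as quoted; the __sched_balance_update_blocked_averages() helper
comes from the patch above and the exact placement of the !sd check and
the out: label are my assumption):

	rcu_read_lock();
	sd = rcu_dereference_check_sched_domain(this_rq->sd);
	if (!sd) {
		/* No domains to balance over; skip the costly update. */
		rcu_read_unlock();
		goto out;
	}
	rcu_read_unlock();

	/*
	 * Only pay for the blocked averages update once we know
	 * we're actually going to attempt balancing.
	 */
	__sched_balance_update_blocked_averages(this_rq);

That would keep the spirit of 9d783c8dd112a by keeping the update out
of the no-domain path.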