Message-ID: <01dcb63e-9ebd-42ca-9418-a822bf081bfc@linux.ibm.com>
Date: Wed, 12 Nov 2025 20:58:23 +0530
From: Shrikanth Hegde <sshegde@...ux.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>, vincent.guittot@...aro.org
Cc: mingo@...hat.com, juri.lelli@...hat.com, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
vschneid@...hat.com, linux-kernel@...r.kernel.org,
Chris Mason <clm@...a.com>,
Joseph Salisbury <joseph.salisbury@...cle.com>,
Adam Li <adamli@...amperecomputing.com>,
Hazem Mohamed Abuelfotoh <abuehaze@...zon.com>,
Josh Don <joshdon@...gle.com>
Subject: Re: [PATCH 2/4] sched/fair: Small cleanup to sched_balance_newidle()
On 11/12/25 8:38 PM, Peter Zijlstra wrote:
> On Wed, Nov 12, 2025 at 03:42:41PM +0100, Peter Zijlstra wrote:
>
>>> If sd is NULL, I think we are skipping these compared to earlier.
>>>
>>> t0 = sched_clock_cpu(this_cpu);
>>> sched_balance_update_blocked_averages(this_cpu);
>>
>> let me pull that sched_balance_update_blocked_averages() thing up a few
>> lines.
>
> Something like so..
>
> ---
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9946,15 +9946,11 @@ static unsigned long task_h_load(struct
> }
> #endif /* !CONFIG_FAIR_GROUP_SCHED */
>
> -static void sched_balance_update_blocked_averages(int cpu)
> +static void __sched_balance_update_blocked_averages(struct rq *rq)
> {
> bool decayed = false, done = true;
> - struct rq *rq = cpu_rq(cpu);
> - struct rq_flags rf;
>
> - rq_lock_irqsave(rq, &rf);
> update_blocked_load_tick(rq);
> - update_rq_clock(rq);
>
> decayed |= __update_blocked_others(rq, &done);
> decayed |= __update_blocked_fair(rq, &done);
> @@ -9962,7 +9958,15 @@ static void sched_balance_update_blocked
> update_blocked_load_status(rq, !done);
> if (decayed)
> cpufreq_update_util(rq, 0);
> - rq_unlock_irqrestore(rq, &rf);
> +}
> +
> +static void sched_balance_update_blocked_averages(int cpu)
> +{
> + struct rq *rq = cpu_rq(cpu);
> +
> + guard(rq_lock_irqsave)(rq);
> + update_rq_clock(rq);
> + __sched_balance_update_blocked_averages(rq);
> }
>
> /********** Helpers for sched_balance_find_src_group ************************/
> @@ -12865,6 +12869,8 @@ static int sched_balance_newidle(struct
> if (!cpu_active(this_cpu))
> return 0;
>
> + __sched_balance_update_blocked_averages(this_rq);
> +
Is this done only when sd == NULL? (I have tried to sketch the resulting flow below the diff.)
> /*
> * This is OK, because current is on_cpu, which avoids it being picked
> * for load-balance and preemption/IRQs are still disabled avoiding
> @@ -12891,7 +12897,6 @@ static int sched_balance_newidle(struct
> raw_spin_rq_unlock(this_rq);
>
> t0 = sched_clock_cpu(this_cpu);
> - sched_balance_update_blocked_averages(this_cpu);
>
> rcu_read_lock();
> for_each_domain(this_cpu, sd) {
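
To check my reading of the hunk above: with it applied, the entry of
sched_balance_newidle() would look roughly like the sketch below (paraphrased,
not the exact resulting code):

	static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf)
	{
		int this_cpu = this_rq->cpu;
		...
		if (!cpu_active(this_cpu))
			return 0;

		/* hoisted by the hunk above, before the sd lookup */
		__sched_balance_update_blocked_averages(this_rq);
		...
		/* the max_newidle_lb_cost bail-out only happens later */
	}
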
Referring to commit 9d783c8dd112a ("sched/fair: Skip update_blocked_averages
if we are defering load balance"), I think Vincent added the
max_newidle_lb_cost check because sched_balance_update_blocked_averages() is
costly; hoisting the update above that check would make us pay that cost on
every newidle entry.
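
For reference, the bail-out that check lives in looks roughly like this in
sched_balance_newidle() (paraphrased from kernel/sched/fair.c, helper names
approximate, so treat it as a sketch rather than the exact upstream code):

	rcu_read_lock();
	sd = rcu_dereference_check_sched_domain(this_rq->sd);

	/*
	 * If a newidle balance is likely to cost more than the time the
	 * CPU is expected to stay idle, defer it. Before this change,
	 * taking this path also skipped
	 * sched_balance_update_blocked_averages(), called further down.
	 */
	if (!READ_ONCE(this_rq->rd->overload) ||
	    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {
		if (sd)
			update_next_balance(sd, &next_balance);
		rcu_read_unlock();
		goto out;
	}
	rcu_read_unlock();
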