Message-ID: <dcc6e306-6095-4bbf-a911-d448d6b495d2@linux.ibm.com>
Date: Tue, 14 Oct 2025 15:21:01 +0530
From: Shrikanth Hegde <sshegde@...ux.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Tim Chen <tim.c.chen@...ux.intel.com>, Ingo Molnar <mingo@...nel.org>,
Chen Yu <yu.c.chen@...el.com>, Doug Nelson <doug.nelson@...el.com>,
Mohini Narkhede <mohini.narkhede@...el.com>,
linux-kernel@...r.kernel.org,
Vincent Guittot <vincent.guittot@...aro.org>,
K Prateek Nayak <kprateek.nayak@....com>
Subject: Re: [RESEND PATCH] sched/fair: Skip sched_balance_running cmpxchg
when balance is not due
On 10/14/25 3:12 PM, Peter Zijlstra wrote:
> On Tue, Oct 14, 2025 at 03:03:41PM +0530, Shrikanth Hegde wrote:
>
>>> @@ -11758,6 +11775,12 @@ static int sched_balance_rq(int this_cpu, struct rq *this_rq,
>>>  		goto out_balanced;
>>>  	}
>>> +	if (idle != CPU_NEWLY_IDLE && (sd->flags & SD_SERIALIZE)) {
>>> +		if (atomic_cmpxchg_acquire(&sched_balance_running, 0, 1))
>>> +			goto out_balanced;
>>
>> Maybe goto out instead of out_balanced?
>
> That would be inconsistent with the !should_we_balance() goto
> out_balanced right above this, no?
Yes. But what's the reason for going to out_balanced on !should_we_balance?
Load balancing wasn't even attempted there, right? Could this be updating the
stats wrongly? At least the comments around out_all_pinned don't make sense if
we came here via !swb:

	schedstat_inc(sd->lb_balanced[idle]);
	sd->nr_balance_failed = 0;
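To spell out the concern, here is roughly the label chain in
sched_balance_rq() as I read it (a sketch, abbreviated and untested):

	if (!should_we_balance(&env)) {
		*continue_balancing = 0;
		goto out_balanced;		/* balance not attempted */
	}

	if (idle != CPU_NEWLY_IDLE && (sd->flags & SD_SERIALIZE)) {
		if (atomic_cmpxchg_acquire(&sched_balance_running, 0, 1))
			goto out_balanced;	/* trylock lost, not attempted */
		need_unlock = true;
	}
	...
out_balanced:
	...
out_all_pinned:
	schedstat_inc(sd->lb_balanced[idle]);	/* counted as "balanced" */
	sd->nr_balance_failed = 0;		/* failure streak cleared */
	...
out:
	if (need_unlock)
		atomic_set_release(&sched_balance_running, 0);
	return ld_moved;

IOW a CPU that never attempted the balance still bumps lb_balanced[idle]
and resets nr_balance_failed.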
>
>>> +		need_unlock = true;
>>> +	}
>>> +
>>>  	group = sched_balance_find_src_group(&env);
>>>  	if (!group) {
>>>  		schedstat_inc(sd->lb_nobusyg[idle]);
>>> @@ -11998,6 +12021,9 @@ static int sched_balance_rq(int this_cpu, struct rq *this_rq,
>>>  	    sd->balance_interval < sd->max_interval)
>>>  		sd->balance_interval *= 2;
>>>  out:
>>> +	if (need_unlock)
>>> +		atomic_set_release(&sched_balance_running, 0);
>>> +
>>>  	return ld_moved;
>>>  }
>>> @@ -12122,21 +12148,6 @@ static int active_load_balance_cpu_stop(void *data)
>>>  	return 0;
>>>  }
>>> -/*
>>> - * This flag serializes load-balancing passes over large domains
>>> - * (above the NODE topology level) - only one load-balancing instance
>>> - * may run at a time, to reduce overhead on very large systems with
>>> - * lots of CPUs and large NUMA distances.
>>> - *
>>> - * - Note that load-balancing passes triggered while another one
>>> - *   is executing are skipped and not re-tried.
>>> - *
>>> - * - Also note that this does not serialize rebalance_domains()
>>> - *   execution, as non-SD_SERIALIZE domains will still be
>>> - *   load-balanced in parallel.
>>> - */
>>> -static atomic_t sched_balance_running = ATOMIC_INIT(0);
>>> -
>>>  /*
>>>   * Scale the max sched_balance_rq interval with the number of CPUs in the system.
>>>   * This trades load-balance latency on larger machines for less cross talk.
>>> @@ -12192,7 +12203,7 @@ static void sched_balance_domains(struct rq *rq, enum cpu_idle_type idle)
>>>  	/* Earliest time when we have to do rebalance again */
>>>  	unsigned long next_balance = jiffies + 60*HZ;
>>>  	int update_next_balance = 0;
>>> -	int need_serialize, need_decay = 0;
>>> +	int need_decay = 0;
>>>  	u64 max_cost = 0;
>>>  	rcu_read_lock();
>>> @@ -12216,13 +12227,6 @@ static void sched_balance_domains(struct rq *rq, enum cpu_idle_type idle)
>>>  		}
>>>  		interval = get_sd_balance_interval(sd, busy);
>>> -
>>> -		need_serialize = sd->flags & SD_SERIALIZE;
>>> -		if (need_serialize) {
>>> -			if (atomic_cmpxchg_acquire(&sched_balance_running, 0, 1))
>>> -				goto out;
>>> -		}
>>> -
>>>  		if (time_after_eq(jiffies, sd->last_balance + interval)) {
>>>  			if (sched_balance_rq(cpu, rq, sd, idle, &continue_balancing)) {
>>>  				/*
>>> @@ -12236,9 +12240,7 @@ static void sched_balance_domains(struct rq *rq, enum cpu_idle_type idle)
>>>  			sd->last_balance = jiffies;
>>>  			interval = get_sd_balance_interval(sd, busy);
>>>  		}
>>> -		if (need_serialize)
>>> -			atomic_set_release(&sched_balance_running, 0);
>>> -out:
>>> +
>>>  		if (time_after(next_balance, sd->last_balance + interval)) {
>>>  			next_balance = sd->last_balance + interval;
>>>  			update_next_balance = 1;
>>
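As an aside, for anyone skimming: the serialization itself is just a
trylock. A minimal userspace analogue of the pattern (C11 atomics standing
in for atomic_cmpxchg_acquire()/atomic_set_release(); illustrative only,
the names are made up):

	#include <stdatomic.h>
	#include <stdbool.h>

	static atomic_int balance_running;	/* 0 == free, 1 == held */

	static bool balance_trylock(void)
	{
		int expected = 0;

		/*
		 * Acquire on success, mirroring atomic_cmpxchg_acquire()
		 * returning 0 (the old value) to the winner.
		 */
		return atomic_compare_exchange_strong_explicit(&balance_running,
				&expected, 1, memory_order_acquire,
				memory_order_relaxed);
	}

	static void balance_unlock(void)
	{
		/* Mirrors atomic_set_release() */
		atomic_store_explicit(&balance_running, 0, memory_order_release);
	}

A CPU that loses the trylock just skips the pass and does not retry, so the
question above is only about which exit path the loser takes and which stats
it touches on the way out.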