Message-ID: <4ecdbee5-9eb5-a18c-80c4-3473d3f1124c@arm.com>
Date: Tue, 1 Oct 2019 12:48:17 +0100
From: Valentin Schneider <valentin.schneider@....com>
To: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
peterz@...radead.org, vincent.guittot@...aro.org,
tglx@...utronix.de, qais.yousef@....com
Subject: Re: [PATCH v2 2/4] sched/fair: Move active balance logic to its own
function
On 01/10/2019 12:36, Srikar Dronamraju wrote:
>> +unlock:
>> + raw_spin_unlock_irqrestore(&busiest->lock, flags);
>> +
>> + if (status == started)
>> + stop_one_cpu_nowait(cpu_of(busiest),
>> + active_load_balance_cpu_stop, busiest,
>> + &busiest->active_balance_work);
>> +
>> + /* We've kicked active balancing, force task migration. */
>> + if (status != cancelled_affinity)
>> + sd->nr_balance_failed = sd->cache_nice_tries + 1;
>
> Should we really update nr_balance_failed if status is cancelled?
> I do understand this behaviour was present even before this change. But
> still dont understand why we need to update if the current operation didn't
> kick active_load_balance.
>
Agreed, I kept it as is to keep this as pure a code movement as possible,
but I don't see why the following wouldn't be valid
(from the PoV of the current code):
---
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1fac444a4831..59f9e3583482 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9023,10 +9023,10 @@ static int load_balance(int this_cpu, struct rq *this_rq,
stop_one_cpu_nowait(cpu_of(busiest),
active_load_balance_cpu_stop, busiest,
&busiest->active_balance_work);
- }
- /* We've kicked active balancing, force task migration. */
- sd->nr_balance_failed = sd->cache_nice_tries+1;
+ /* We've kicked active balancing, force task migration. */
+ sd->nr_balance_failed = sd->cache_nice_tries+1;
+ }
}
} else
sd->nr_balance_failed = 0;
---
Or even better, fold it in active_load_balance_cpu_stop(). I could add that
after the move.