Message-ID: <13c4ca53-61d6-b90d-7344-94f724c0cf15@arm.com>
Date: Mon, 29 Nov 2021 17:56:11 +0100
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Valentin Schneider <valentin.schneider@....com>,
Vincent Donnefort <vincent.donnefort@....com>,
peterz@...radead.org, mingo@...hat.com, vincent.guittot@...aro.org
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] sched/fair: Fix per-CPU kthread and wakee stacking for
asym CPU capacity
On 29.11.21 11:54, Valentin Schneider wrote:
> On 25/11/21 10:12, Vincent Donnefort wrote:
>> select_idle_sibling() has a special case for tasks woken up by a per-CPU
>> kthread. For this case, the chosen CPU is the previous one. This is an
>> issue for asymmetric CPU capacity systems where the wakee might not fit
>> that CPU anymore. Evaluate asym_fits_capacity() for prev_cpu before using
>> the exit path described above.
>>
>> Fixes: b4c9c9f15649 ("sched/fair: Prefer prev cpu in asymmetric wakeup path")
>> Signed-off-by: Vincent Donnefort <vincent.donnefort@....com>
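For readers without the tree in front of them: select_idle_sibling()
already computes the task utilization for the asymmetric case at the top
of the function, roughly like this (a sketch from memory, not verbatim
source):

	/* select_idle_sibling() prologue, asym CPU capacity systems only */
	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
		/* Sync p's PELT signal before looking at its util. */
		sync_entity_load_avg(&p->se);
		task_util = uclamp_task_util(p);
	}

so the hunk below only has to feed that task_util into
asym_fits_capacity() before taking the per-CPU kthread early return.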
>
> Per our discussion on v1, the asym check was intentionally omitted, the
> assumption being: we'd be putting p back on its prev CPU, its utilization
> cannot be bigger now than it was then, so it should still pass the
> capacity fitness criterion (unless we dequeued it right before the next
> PELT window boundary, whose crossing would have made it cross the
> tipping point...)
... and assuming ~0 sleep time for p, so sync_entity_load_avg() can't
decay p's util_avg.
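To spell that out, sync_entity_load_avg() only decays p's blocked
load/util for the time p has been sleeping, roughly (again a sketch, not
verbatim):

static void sync_entity_load_avg(struct sched_entity *se)
{
	struct cfs_rq *cfs_rq = cfs_rq_of(se);
	u64 last_update_time;

	/* Decay se's blocked load/util up to the cfs_rq's last PELT update. */
	last_update_time = cfs_rq_last_update_time(cfs_rq);
	__update_load_avg_blocked_se(last_update_time, se);
}

With ~0 sleep time there is essentially nothing to decay, so task_util at
wakeup is the same as at dequeue.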
>
> Uclamp goes against this, p's uclamp.min can completely change between its
> dequeue and wakeup, which warrants adding the check.
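Right, and since task_util in this path is uclamp_task_util(p), a raised
uclamp.min goes straight into the fitness check. Roughly (sketch, exact
signatures might differ):

static inline unsigned long uclamp_task_util(struct task_struct *p)
{
	/* Clamp the estimated utilization into [uclamp.min, uclamp.max]. */
	return clamp(task_util_est(p),
		     uclamp_eff_value(p, UCLAMP_MIN),
		     uclamp_eff_value(p, UCLAMP_MAX));
}

static inline bool asym_fits_capacity(unsigned long task_util, int cpu)
{
	if (static_branch_unlikely(&sched_asym_cpucapacity))
		/* fits_capacity() leaves ~20% headroom: util * 1280 < cap * 1024 */
		return fits_capacity(task_util, capacity_of(cpu));

	return true;
}

So bumping p's uclamp.min above ~80% of prev's capacity while p sleeps is
enough to make prev not fit anymore, even though the raw util hasn't
grown.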
>
> I'd like to see (at least some of) the above in the changelog, but
+1 (since otherwise it's hard to grasp WHY the wakee could potentially
not fit on prev anymore).
> pedantism aside:
>> Reviewed-by: Valentin Schneider <Valentin.Schneider@....com>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@....com>
>
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 6291876a9d32..b90dc6fd86ca 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -6410,7 +6410,8 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>> */
>> if (is_per_cpu_kthread(current) &&
>> prev == smp_processor_id() &&
>> - this_rq()->nr_running <= 1) {
>> + this_rq()->nr_running <= 1 &&
>> + asym_fits_capacity(task_util, prev)) {
>> return prev;
>> }
>>
>> --
>> 2.25.1