Message-ID: <738ad52b-d94e-c31a-3d40-56a0aba64453@arm.com>
Date: Wed, 4 Dec 2019 23:22:01 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: Vincent Guittot <vincent.guittot@...aro.org>, mingo@...hat.com,
peterz@...radead.org, juri.lelli@...hat.com,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, linux-kernel@...r.kernel.org
Cc: john.stultz@...aro.org, qais.yousef@....com
Subject: Re: [PATCH] sched/fair: fix find_idlest_group() to handle CPU
affinity

On 04/12/2019 18:21, Vincent Guittot wrote:
> Because of CPU affinity, the local group can be skipped, which breaks the
> assumption that statistics are always collected for the local group. With
> an uninitialized local_sgs, the comparison is meaningless and the behavior
> unpredictable. This can even end up using the local pointer, which is
> NULL in this case.
>
> If the local group has been skipped because of CPU affinity, we return
> the idlest group.
>
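Just to spell out the fix for readers following along: as described above,
the shape of the change is an early bail-out when the local group was never
visited. A sketch (variable names follow the reworked find_idlest_group();
the actual hunk may differ):

	/* There is no candidate group to push tasks to */
	if (!idlest)
		return NULL;

	/*
	 * The local group was skipped because of CPU affinity, so local_sgs
	 * was never filled in; fall back to the idlest group.
	 */
	if (!local)
		return idlest;
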
Out of curiosity I stared at find_idlest_group() before the rework, and
AFAICT the "never visit the local group" case was already there. However, we
would only use the load and spare capacity of that group, and the relevant
variables were initialized to ULONG_MAX and 0 respectively. This would lead
us to return 'idlest' (or 'most_spare_sg', but it's the same as 'idlest' now).

So IMO this is just restoring the previous behaviour, which is what we want
methinks.
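To make that concrete, here is a toy userspace model of the pre-rework
comparison (made-up names, not the actual kernel code): with load seeded to
ULONG_MAX and spare capacity seeded to 0, a local group that was never
visited can never win the comparison, so the candidate group is returned.

#include <limits.h>
#include <stdio.h>

/* Stripped-down stand-in for the per-group statistics. */
struct group_stats {
	unsigned long load;
	unsigned long spare_cap;
};

/* Pick the better group: prefer spare capacity, then lower load. */
static const struct group_stats *pick_group(const struct group_stats *local,
					    const struct group_stats *cand)
{
	if (cand->spare_cap > local->spare_cap)
		return cand;
	if (cand->load < local->load)
		return cand;
	return local;
}

int main(void)
{
	/* Sentinels standing in for a local group that was never visited. */
	struct group_stats local = { .load = ULONG_MAX, .spare_cap = 0 };
	struct group_stats idlest = { .load = 123, .spare_cap = 45 };

	/* The candidate always beats the unvisited local group. */
	printf("picked %s\n",
	       pick_group(&local, &idlest) == &idlest ? "idlest" : "local");
	return 0;
}
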
Reviewed-by: Valentin Schneider <valentin.schneider@....com>