Message-ID: <d0c556ee-0758-ee57-7264-f1e4c158ae54@arm.com>
Date: Wed, 8 Jan 2020 15:49:55 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: Mel Gorman <mgorman@...hsingularity.net>,
Vincent Guittot <vincent.guittot@...aro.org>
Cc: Hillf Danton <hdanton@...a.com>, Rik van Riel <riel@...riel.com>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Phil Auld <pauld@...hat.com>,
Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
Quentin Perret <quentin.perret@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <Morten.Rasmussen@....com>,
Parth Shah <parth@...ux.ibm.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched, fair: Allow a small load imbalance between low
utilisation SD_NUMA domains v3
On 07/01/2020 10:16, Mel Gorman wrote:
> I think running tasks is the least bad metric. Idle CPUs get
> caught up in corner cases with bindings, and util_avg can be skewed by
> outliers. Running tasks is a sensible starting point until there is a
> concrete use case that shows it is unworkable.
I'd tend to agree with you here. Also, since this sits in the
group_has_spare imbalance type, we have some guarantee that the group is
not overutilized. If we keep a threshold of 'sum_nr_running <
group_weight / 2', that "naturally" puts a hard cap of 50% on total
group utilization (in the corner case where all tasks are 100% util).
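
To make that concrete, here is a minimal standalone sketch of the
threshold, not the actual patch; the struct and helper names are made
up for illustration, though the fields mirror sg_lb_stats in
kernel/sched/fair.c:

#include <stdbool.h>

/*
 * Hypothetical model of the threshold discussed above. The fields
 * correspond to sg_lb_stats members of the same names.
 */
struct sg_stats {
	unsigned int sum_nr_running;	/* tasks currently running in the group */
	unsigned int group_weight;	/* number of CPUs in the group */
};

/*
 * Tolerate a small NUMA imbalance only while the group is less than
 * half occupied. Even if every running task is 100% util, total group
 * utilization then stays below 50%.
 */
static bool imbalance_allowed(const struct sg_stats *sgs)
{
	return sgs->sum_nr_running < sgs->group_weight / 2;
}

So with, say, group_weight = 8, the imbalance would only be tolerated
while fewer than 4 tasks are running in the group.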
> Let's see what you think of
> the other untested patch I posted that takes the group weight and child
> domain weight into account.
>