Message-ID: <20180220163352.GD4589@e105550-lin.cambridge.arm.com>
Date: Tue, 20 Feb 2018 16:33:52 +0000
From: Morten Rasmussen <morten.rasmussen@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...hat.com, valentin.schneider@....com,
dietmar.eggemann@....com, vincent.guittot@...aro.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/7] sched/fair: Avoid unnecessary balancing of
asymmetric capacity groups
On Mon, Feb 19, 2018 at 04:10:11PM +0100, Peter Zijlstra wrote:
> On Thu, Feb 15, 2018 at 04:20:51PM +0000, Morten Rasmussen wrote:
> > +/*
> > + * group_similar_cpu_capacity: Returns true if the minimum capacity of the
> > + * compared groups differ by less than 12.5%.
> > + */
> > +static inline bool
> > +group_similar_cpu_capacity(struct sched_group *sg, struct sched_group *ref)
> > +{
> > + long diff = sg->sgc->min_capacity - ref->sgc->min_capacity;
> > + long max = max(sg->sgc->min_capacity, ref->sgc->min_capacity);
> > +
> > + return abs(diff) < max >> 3;
> > +}
>
> This seems a fairly random and dodgy heuristic.
I can't deny that :-)

We need to somehow figure out if we are doing asymmetric cpu capacity
balancing or normal SMP balancing. We probably don't care about
migrating tasks if the capacities are nearly identical. But how much is
'nearly'?
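For concreteness, the quoted check treats two capacities as similar when
they differ by less than 1/8 (12.5%) of the larger one. A standalone
sketch of that comparison (plain C, with hypothetical capacity values on
the kernel's usual 0..1024 scale; the function name here is just for
illustration):

```c
#include <stdlib.h>

/*
 * Standalone version of the quoted heuristic: capacities are "similar"
 * when the absolute difference is less than max/8, i.e. 12.5% of the
 * larger capacity.
 */
static int similar_cpu_capacity(long cap, long ref)
{
	long diff = cap - ref;
	long max = cap > ref ? cap : ref;

	return labs(diff) < max >> 3;
}
```

With these numbers a big.LITTLE pair (1024 vs 512) is rejected, while
two near-identical clusters (1024 vs 920, diff 104 < 128) are accepted,
so the asymmetric path is skipped for them.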
We could make it strictly equal as long as sgc->min_capacity is based on
capacity_orig. If we let things like rt-pressure influence
sgc->min_capacity, it might become a mess.

We could tie it to sd->imbalance_pct to make it slightly less arbitrary,
or we can try to drop the margin.
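For illustration, a margin derived from sd->imbalance_pct (which uses
the scheduler convention that 125 means a 25% margin) might look like
the sketch below. This is a hypothetical variant, not part of the patch;
the function name and the way imbalance_pct is passed in are invented
for the example:

```c
/*
 * Hypothetical imbalance_pct-based variant: capacities are "similar"
 * when the larger one is within imbalance_pct percent of the smaller,
 * i.e. hi/lo < imbalance_pct/100.
 */
static int similar_cpu_capacity_pct(long cap, long ref,
				    unsigned int imbalance_pct)
{
	long lo = cap < ref ? cap : ref;
	long hi = cap > ref ? cap : ref;

	return hi * 100 < lo * imbalance_pct;
}
```

With the default imbalance_pct of 125 this accepts 1024 vs 920 but
rejects 1024 vs 512, so it behaves much like the 12.5% shift-based check
while reusing an existing per-domain tunable instead of a new constant.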
Alternative solutions and preferences are welcome...