Message-ID: <CAKfTPtBh5tFgSLfTtKafUMjtHMrFLKj1Ep9YwNn5hZS=p756aQ@mail.gmail.com>
Date: Fri, 21 Dec 2018 15:49:22 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Valentin Schneider <valentin.schneider@....com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Morten Rasmussen <Morten.Rasmussen@....com>
Subject: Re: [PATCH 3/3] sched/fair: fix unnecessary increase of balance interval
On Thu, 20 Dec 2018 at 18:24, Valentin Schneider
<valentin.schneider@....com> wrote:
>
> On 20/12/2018 14:50, Vincent Guittot wrote:
> [...]
> >> So now we reset the interval for all active balances (except the last
> >> active balance case), even when it is done as a last resort because
> >> all other tasks were pinned.
> >>
> >> Arguably the current code isn't much better (it always increases the
> >> interval when active balancing), but at least it covers this case. It
> >> would be a waste not to take this into account when we can detect this
> >> scenario.
> >
> > So I'd like to recall the $subject of this patchset: fix some known
> > issues for asym_packing.
> > While looking at this, we added a few other voluntary active balances
> > because it seemed "obvious" that these active migrations were voluntary
> > ones. But in fact, we don't have any use case (UC) showing a problem
> > for those additional cases so far.
> >
> > The default behavior for active migration is to increase the interval.
> > Now you want to extend the exception to other active migration UCs,
> > whereas it's not clear that we don't fall into the default active
> > migration UC, for which the interval should be increased.
>
> Well if we stick to the rule of only increasing balance_interval when
> pinned tasks get in the way, it's clear to me that this use case shouldn't
> be segregated from the others.
>
> I do agree however that it's not entirely clear if that balance_interval
> increase was also intended to slow down the nr_balance_failed migrations.
>
> I had a look at the history and stumbled on:
>
> 8102679447da ("[PATCH] sched: improve load balancing pinned tasks")
>
> Which explains the reasoning behind the active_balance balance_interval
> increase:
>
> """
> this one attempts to work out whether the balancing failure has
> been due to too many tasks pinned on the runqueue. This allows it
> to be basically invisible to the regular balancing paths (ie. when
> there are no pinned tasks).
> """
>
> So AFAICT that is indeed the rule we should be following, and I would only
> increase the balance_interval when there are pinned tasks, not because
> of active_balance categories. So here it's a matter of fixing that
> condition so that it does what it was meant to do.

After looking at sched.c at this sha1, (sd->nr_balance_failed >
sd->cache_nice_tries+2) was the only condition for doing an active
migration, and as a result it was the only reason for doubling
sd->balance_interval.
My patch keeps exactly the same behavior for the (sd->nr_balance_failed >
sd->cache_nice_tries+2) condition. And I'm even more convinced to exclude
the (sd->nr_balance_failed > sd->cache_nice_tries+2) condition, because
it's the very condition that introduced the doubling of the interval.
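
To make this concrete, the shape of the load_balance() tail I'm talking
about is roughly the following (a hand-written sketch from memory, not a
copy of any particular tree; the migration-thread wakeup and the pinned
retry logic are left out):

	if (!ld_moved) {
		sd->nr_balance_failed++;

		/* at that sha1, the only trigger for an active migration */
		if (unlikely(sd->nr_balance_failed > sd->cache_nice_tries + 2))
			active_balance = 1;
	} else {
		sd->nr_balance_failed = 0;
	}

	if (likely(!active_balance)) {
		/* We were unbalanced, so reset the balancing interval */
		sd->balance_interval = sd->min_interval;
	} else {
		/* an active migration kicked in, so start to back off */
		if (sd->balance_interval < sd->max_interval)
			sd->balance_interval *= 2;
	}
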
As said previously, you can argue that this behavior is not optimal and
discuss its validity, but the sha1 that you mention above is what
introduced the current policy for the (sd->nr_balance_failed >
sd->cache_nice_tries+2) condition.
Reverting such behavior would need more study, testing and care, which is
out of the scope of this patchset and more related to a whole refactoring
of load_balance() and calculate_imbalance(); FYI, I have submitted a topic
on the subject for the next OSPM.
>
> > What you want is to change the behavior of the scheduler for UCs that
> > haven't raised any problem, whereas asym_packing has known problems.
> >
> > Changing the default behavior for active migration is not the subject
> > of this patchset and should be treated in another one, like the one
> > discussed with Peter a few days ago.
> > >> (I'll reiterate my LBF_ALL_PINNED suggestion).
> >>
> >>> /* We were unbalanced, so reset the balancing interval */
> >>> sd->balance_interval = sd->min_interval;
> >>> } else {
> >>>
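
For the record, the hunk quoted just above sits in the tail of
load_balance(). One possible reading of the LBF_ALL_PINNED suggestion
(purely an illustrative sketch on my side, glossing over how the pinned
information would actually reach this point) would be to gate the
back-off on the pinned case rather than on active_balance itself:

	if (likely(!active_balance)) {
		/* We were unbalanced, so reset the balancing interval */
		sd->balance_interval = sd->min_interval;
	} else if (env.flags & LBF_ALL_PINNED) {
		/* pinned tasks forced the active balance: back off */
		if (sd->balance_interval < sd->max_interval)
			sd->balance_interval *= 2;
	} else {
		/* other active balance reasons: treat as a normal balance */
		sd->balance_interval = sd->min_interval;
	}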