Message-ID: <20201007052935.GK4352@localhost.localdomain>
Date: Wed, 7 Oct 2020 07:29:35 +0200
From: Juri Lelli <juri.lelli@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Valentin Schneider <valentin.schneider@....com>,
tglx@...utronix.de, mingo@...nel.org, linux-kernel@...r.kernel.org,
bigeasy@...utronix.de, qais.yousef@....com, swood@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, vincent.donnefort@....com, tj@...nel.org
Subject: Re: [PATCH -v2 15/17] sched: Fix migrate_disable() vs rt/dl balancing
On 06/10/20 16:48, Peter Zijlstra wrote:
> On Tue, Oct 06, 2020 at 04:37:04PM +0200, Juri Lelli wrote:
> > On 06/10/20 15:48, Peter Zijlstra wrote:
> > > On Tue, Oct 06, 2020 at 12:20:43PM +0100, Valentin Schneider wrote:
> > > >
> > > > On 05/10/20 15:57, Peter Zijlstra wrote:
> > > > > In order to minimize the interference of migrate_disable() on lower
> > > > > priority tasks, which can be deprived of runtime due to being stuck
> > > > > below a higher priority task, teach the RT/DL balancers to push away
> > > > > these higher priority tasks when a lower priority task gets selected
> > > > > to run on a freshly demoted CPU (pull).
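
For my own understanding, here is a toy userspace model of the mechanism
described above -- this is my reading of the changelog, not the patch's
actual code, and find_lowest_rq() below is just a stand-in for the
kernel's find_lock_rq():

#include <stdio.h>

#define NR_CPUS 4

/* Top priority on each runqueue; higher number == higher prio here
 * (the kernel's internal convention is inverted, but that doesn't
 * matter for the illustration). */
static int rq_top_prio[NR_CPUS] = { 90, 50, 10, 30 };

/* Stand-in for find_lock_rq(): pick the lowest priority runqueue. */
static int find_lowest_rq(void)
{
	int best = 0;

	for (int cpu = 1; cpu < NR_CPUS; cpu++)
		if (rq_top_prio[cpu] < rq_top_prio[best])
			best = cpu;
	return best;
}

int main(void)
{
	/* A prio-20 task is migrate_disable()'d and pinned to CPU 0, but
	 * a prio-90 task runs there.  Since the pinned task cannot move,
	 * push the high prio task to the lowest priority runqueue. */
	int dst = find_lowest_rq();

	printf("push the prio-90 task from CPU 0 to CPU %d\n", dst);
	rq_top_prio[dst] = 90;	/* high prio task now runs on 'dst' */
	rq_top_prio[0] = 20;	/* CPU 0 is free for the pinned task */
	return 0;
}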
> >
> > Still digesting the whole lot, but can't we "simply" force push the
> > highest prio task (the one we preempt to make space for the
> > migrate_disable()'d lower prio task) directly to the CPU that would
> > accept the lower prio task that cannot move?
> >
> > Asking because AFAIU we are calling find_lock_rq() from push_cpu_stop()
> > and that selects the best CPU for the high prio task. I'm basically
> > wondering if we could avoid moving (potentially multiple) high prio
> > tasks around to make space for a lower prio task.
>
> The intention was to do as you describe in the first paragraph, and
> isn't pull also using find_lock_rq() to select the 'lowest' priority
> runqueue to move the task to?
>
> That is, both actions should end up at the same 'lowest' prio CPU.
>
OK, right. I think there might still be differences, since successive
calls to find_lock_rq() could select different candidates (after the
introduction of cpumask_..._distribute), but that shouldn't really
matter much (also, things might have changed in the meantime, so we
really have to call find_lock_rq() again anyway, I guess).
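
For reference, a toy sketch (mine, not the actual kernel implementation)
of the rotation behaviour I'm referring to: a "distribute" flavoured
search remembers where the previous one stopped and starts just after
it, so back-to-back calls with the same masks can return different CPUs:

#include <stdio.h>

#define NR_CPUS 8

/* Where the previous search stopped; the kernel keeps similar state
 * in a static variable, too. */
static int prev_cpu = -1;

/* Toy model of a cpumask_..._distribute() style helper: find the next
 * set bit in (mask1 & mask2) strictly after prev_cpu, wrapping around. */
static int any_and_distribute(unsigned long mask1, unsigned long mask2)
{
	unsigned long andmask = mask1 & mask2;

	for (int i = 1; i <= NR_CPUS; i++) {
		int cpu = (prev_cpu + i) % NR_CPUS;

		if (andmask & (1UL << cpu)) {
			prev_cpu = cpu;
			return cpu;
		}
	}
	return -1;	/* empty intersection */
}

int main(void)
{
	unsigned long lowest_mask = 0x66;	/* CPUs 1,2,5,6 eligible */
	unsigned long allowed = 0xff;		/* all CPUs allowed */

	/* Successive calls rotate: 1, 2, 5, 6, 1, ... so two back-to-back
	 * find_lock_rq()-style searches may well pick different CPUs. */
	for (int i = 0; i < 5; i++)
		printf("pick: CPU %d\n",
		       any_and_distribute(lowest_mask, allowed));
	return 0;
}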