Message-ID: <46c7f966-6679-bb9e-dabe-bb385298d19b@arm.com>
Date: Thu, 16 Jul 2020 18:48:37 +0200
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Vincent Guittot <vincent.guittot@...aro.org>,
Patrick Bellasi <patrick.bellasi@...bug.net>
Cc: LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Paul Turner <pjt@...gle.com>, Ben Segall <bsegall@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
Jonathan Corbet <corbet@....net>,
Dhaval Giani <dhaval.giani@...cle.com>,
Josef Bacik <jbacik@...com>,
Chris Hyser <chris.hyser@...cle.com>,
Parth Shah <parth@...ux.ibm.com>
Subject: Re: [SchedulerWakeupLatency] Per-task vruntime wakeup bonus
On 13/07/2020 14:59, Vincent Guittot wrote:
> On Fri, 10 Jul 2020 at 21:59, Patrick Bellasi
> <patrick.bellasi@...bug.net> wrote:
>>
>>
>> On Fri, Jul 10, 2020 at 15:21:48 +0200, Vincent Guittot <vincent.guittot@...aro.org> wrote...
[...]
>>> Instead, it should weight the decision in wakeup_preempt_entity() and
>>> wakeup_gran()
>>
>> In those functions we already take the task prio into consideration
>> (ref details at the end of this message).
>>
>> Lower nice value tasks have more chances to preempt current since they
>> will have a smaller wakeup_gran, indeed:
>
> yes, and this is there to ensure a fair distribution of running time
> and prevent a task from increasing its vruntime significantly compared
> to others
>
> -1 means that se already got more runtime than current
> 0 means that se's vruntime will go above current's vruntime after a runtime
> duration of sched_min_granularity
> and 1 means that se got less runtime than current and its vruntime
> will still be lower than current's even after a runtime duration of
> sched_min_granularity
>
> IMHO, latency_nice should impact the decision only for case 0, but not
> for cases -1 and 1.
> That being said, we can extend case 0 a bit to include the
> situation where current's vruntime will become greater than se's
> vruntime after a runtime duration of sched_min_granularity, like
> below:
>
> curr->vruntime
> |<-- wakeup_gran(se) -->|<--
> wakeupgran(curr) -->|
> current range: se->vruntime +1 | 0 | -1
> new range: se->vruntime +1 | 0
> | -1
>
I assume this got messed up by line break somehow:

                                                   curr->vruntime
                                  |<-- wakeup_gran(se) -->|<-- wakeup_gran(curr) -->|
current range: se->vruntime  +1   |           0           |  -1
new range:     se->vruntime  +1   |                      0                          |  -1

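For reference, this is roughly the function those ranges describe,
paraphrased from kernel/sched/fair.c (~v5.8), so details may differ
between versions:

static int
wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
{
	s64 gran, vdiff = curr->vruntime - se->vruntime;

	if (vdiff <= 0)		/* se->vruntime >= curr->vruntime           -> -1 */
		return -1;

	/*
	 * sysctl_sched_wakeup_granularity scaled inversely with se's load
	 * weight, so a lower nice value gives a smaller wakeup_gran.
	 */
	gran = wakeup_gran(se);
	if (vdiff > gran)	/* se more than wakeup_gran(se) behind curr ->  1 */
		return 1;

	return 0;		/* se within wakeup_gran(se) of curr        ->  0 */
}

The 'new range' above would additionally keep returning 0 while
-vdiff < wakeup_gran(curr), i.e. until se->vruntime passes
curr->vruntime + wakeup_gran(curr).
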
IMHO, with the current use of wakeup_preempt_entity() I don't see what
will change with that.
We check 'wakeup_preempt_entity() == 1' in check_preempt_wakeup() and
'wakeup_preempt_entity() < 1' in pick_next_entity().
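Condensed, the two call sites look roughly like this (again paraphrased
from ~v5.8 fair.c, not verbatim):

	/*
	 * check_preempt_wakeup(): the woken entity pse preempts current
	 * only when it is more than wakeup_gran(pse) behind, i.e. 1.
	 */
	if (wakeup_preempt_entity(se, pse) == 1)
		goto preempt;

	/*
	 * pick_next_entity(): a next/last buddy may run instead of the
	 * leftmost entity as long as that is not "too unfair", i.e. < 1.
	 */
	if (cfs_rq->next && wakeup_preempt_entity(cfs_rq->next, left) < 1)
		se = cfs_rq->next;
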
What should the mapping between se's latency_nice value and the consideration
of wakeup_gran(curr) look like?
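Just to make the question concrete (purely illustrative, all names below
are made up and this is not something proposed in this thread), one option
would be to scale the extra wakeup_gran(curr) window from the 'new range'
above by the waking task's latency_nice:

	/*
	 * Hypothetical: shrink/grow the bonus window with latency_nice,
	 * assuming latency_nice in [-20, 19] like the existing nice range.
	 */
	static u64 latency_gran(struct sched_entity *curr, struct sched_entity *se)
	{
		u64 gran = wakeup_gran(curr);

		/* -20 doubles the window, +19 nearly removes it */
		return div_u64(gran * (20 - se_latency_nice(se)), 20);
	}

But it's not obvious to me that a linear scaling like this is the right
shape for the mapping either.
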
[...]