Message-ID: <273b9b52-8c00-0414-ea11-214d81cd57c7@arm.com>
Date: Wed, 29 Aug 2018 11:54:58 +0100
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Peter Zijlstra <peterz@...radead.org>,
Steve Muckle <smuckle@...gle.com>
Cc: Miguel de Dios <migueldedios@...gle.com>,
Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
kernel-team@...roid.com, Todd Kjos <tkjos@...gle.com>,
Paul Turner <pjt@...gle.com>,
Quentin Perret <quentin.perret@....com>,
Patrick Bellasi <Patrick.Bellasi@....com>,
Chris Redpath <Chris.Redpath@....com>,
Morten Rasmussen <Morten.Rasmussen@....com>,
John Dias <joaodias@...gle.com>
Subject: Re: [PATCH] sched/fair: vruntime should normalize when switching from
fair
On 08/28/2018 03:53 PM, Dietmar Eggemann wrote:
> On 08/27/2018 12:14 PM, Peter Zijlstra wrote:
>> On Fri, Aug 24, 2018 at 02:24:48PM -0700, Steve Muckle wrote:
>>> On 08/24/2018 02:47 AM, Peter Zijlstra wrote:
>>>>>> On 08/17/2018 11:27 AM, Steve Muckle wrote:
>>>>
>>>>>>> When rt_mutex_setprio changes a task's scheduling class to RT,
>>>>>>> we're seeing cases where the task's vruntime is not updated
>>>>>>> correctly upon return to the fair class.
>>>>
>>>>>>> Specifically, the following is being observed:
>>>>>>> - task is deactivated while still in the fair class
>>>>>>> - task is boosted to RT via rt_mutex_setprio, which changes
>>>>>>> the task to RT and calls check_class_changed.
>>>>>>> - check_class_changed leads to detach_task_cfs_rq, at which point
>>>>>>> the vruntime_normalized check sees that the task's state is TASK_WAKING,
>>>>>>> which results in skipping the subtraction of the rq's min_vruntime
>>>>>>> from the task's vruntime
>>>>>>> - later, when the prio is deboosted and the task is moved back
>>>>>>> to the fair class, the fair rq's min_vruntime is added to
>>>>>>> the task's vruntime, even though it wasn't subtracted earlier.
>>>>
>>>> I'm thinking that is an incomplete scenario; where do we get to
>>>> TASK_WAKING?
>>>
>>> Yes, there's a missing bit of context here at the beginning: the task to
>>> be boosted had already been put into TASK_WAKING.
>>
>> See, I'm confused...
>>
>> The only time TASK_WAKING is visible is if we've done a remote wakeup
>> and it's 'stuck' on the remote wake_list. And in that case we've done
>> migrate_task_rq_fair() on it.
>>
>> So by the time either rt_mutex_setprio() or __sched_setscheduler() get
>> to calling check_class_changed(), under both pi_lock and rq->lock, the
>> vruntime_normalized() thing should be right.
>>
>> So please detail the exact scenario. Because I'm not seeing it.
>
> Using Steve's test program (https://lkml.org/lkml/2018/8/24/686) I see the
> issue, but only if the two tasks (rt_task, fair_task) run on 2 cpus which
> don't share LLC (e.g. CPU0 and CPU4 on hikey960).
>
> So the wakeup goes down the TTWU_QUEUE && !share_cache (ttwu_queue_remote)
> path.

I forgot to mention that, since fair_task's cpu affinity is restricted to
CPU4, there is no call to set_task_cpu()->migrate_task_rq_fair(): the
if (task_cpu(p) != cpu) check in try_to_wake_up() is false.
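
To make this concrete, here is roughly the path I mean, trimmed down from
try_to_wake_up()/ttwu_queue() (paraphrased from memory, so please check it
against your tree rather than taking it verbatim):

	/* try_to_wake_up(), heavily trimmed */
	p->state = TASK_WAKING;
	...
	cpu = select_task_rq(p, p->wake_cpu, SD_BALANCE_WAKE, wake_flags);
	if (task_cpu(p) != cpu) {	/* false in this case: affinity keeps fair_task on CPU4 */
		wake_flags |= WF_MIGRATED;
		set_task_cpu(p, cpu);	/* would have called migrate_task_rq_fair() */
	}
	ttwu_queue(p, cpu, wake_flags);

	/* ttwu_queue(), heavily trimmed */
	if (sched_feat(TTWU_QUEUE) && !cpus_share_cache(smp_processor_id(), cpu)) {
		ttwu_queue_remote(p, cpu, wake_flags);	/* p stays TASK_WAKING on the wake_list */
		return;
	}

So, as far as I can see, fair_task ends up on CPU4's wake_list in TASK_WAKING
without migrate_task_rq_fair() ever having normalized its vruntime.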

I think the combination of fair_task's cpu affinity to CPU4, the wakeup being
issued from CPU1 (so the two cpus don't share LLC), and TTWU_QUEUE being
enabled is the situation in which this vruntime issue can happen.
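
The asymmetry from the original report then comes from the detach/attach pair
in kernel/sched/fair.c, roughly (again trimmed and from memory, not a verbatim
quote of any particular tree):

	static inline bool vruntime_normalized(struct task_struct *p)
	{
		...
		/* assumes a TASK_WAKING task has already been normalized */
		if (!se->sum_exec_runtime || p->state == TASK_WAKING)
			return true;
		...
	}

	static void detach_task_cfs_rq(struct task_struct *p)	/* boost to RT */
	{
		...
		if (!vruntime_normalized(p))	/* skipped: vruntime_normalized() sees TASK_WAKING */
			se->vruntime -= cfs_rq->min_vruntime;
		...
	}

	static void attach_task_cfs_rq(struct task_struct *p)	/* back to fair */
	{
		...
		if (!vruntime_normalized(p))
			se->vruntime += cfs_rq->min_vruntime;	/* in this scenario, added even though nothing was subtracted */
		...
	}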
[...]