Message-ID: <CAKfTPtCh8COzVy5-f_vKOBn7CrkARXLxxLofX9X+C8r0GKZMLA@mail.gmail.com>
Date: Mon, 30 May 2016 17:54:47 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Yuyang Du <yuyang.du@...el.com>
Subject: Re: [PATCH v2] sched: fix first task of a task group is attached twice
On 27 May 2016 at 22:38, Dietmar Eggemann <dietmar.eggemann@....com> wrote:
> On 27/05/16 18:16, Vincent Guittot wrote:
>> On 27 May 2016 at 17:48, Dietmar Eggemann <dietmar.eggemann@....com> wrote:
>>> On 25/05/16 16:01, Vincent Guittot wrote:
>>>> The cfs_rq->avg.last_update_time is initialized to 0, with the main
>>>> effect that the 1st sched_entity to be attached will keep its
>>>> last_update_time set to 0 and will be attached once again during the
>>>> enqueue.
>>>> Initialize cfs_rq->avg.last_update_time to 1 instead.
>>>>
>>>> Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
>>>> ---
>>>>
>>>> v2:
>>>> - rq_clock_task(rq_of(cfs_rq)) can't be used because the lock is not held
>>>>
>>>> kernel/sched/fair.c | 8 ++++++++
>>>> 1 file changed, 8 insertions(+)
>>>>
>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>> index 218f8e8..3724656 100644
>>>> --- a/kernel/sched/fair.c
>>>> +++ b/kernel/sched/fair.c
>>>> @@ -8586,6 +8586,14 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
>>>> se->depth = parent->depth + 1;
>>>> }
>>>>
>>>> + /*
>>>> + * Set last_update_time to something different from 0 to make
>>>> + * sure the 1st sched_entity will not be attached twice: once
>>>> + * when attaching the task to the group and one more time when
>>>> + * enqueueing the task.
>>>> + */
>>>> + tg->cfs_rq[cpu]->avg.last_update_time = 1;
>>>> +
>
> Couldn't you just set the value in init_cfs_rq():
>
> @@ -8482,6 +8482,7 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
> cfs_rq->min_vruntime_copy = cfs_rq->min_vruntime;
> #endif
> #ifdef CONFIG_SMP
> + cfs_rq->avg.last_update_time = 1;
> atomic_long_set(&cfs_rq->removed_load_avg, 0);
> atomic_long_set(&cfs_rq->removed_util_avg, 0);
> #endif
>
>>>> se->my_q = cfs_rq;
>>>> /* guarantee group entities always have weight */
>>>> update_load_set(&se->load, NICE_0_LOAD);
>>>
>>> So why not set the last_update_time value for those cfs_rqs when
>>> we have the lock? E.g. in task_move_group_fair() or attach_task_cfs_rq().
>>
>> I'm not sure it's worth adding this init to functions that are called
>> often, just to handle a one-time initialization.
>
> Yeah, there will be this if condition overhead.
>
>> If you are concerned about the load update of the 1st task to be
>> attached, a long time can still elapse between the creation of the
>> group and the 1st enqueue of a task. This was the case for the test I
>> did when I found this issue.
>
> Understood, but for me, creation of the task group is
> cpu_cgroup_css_alloc -> sched_create_group() -> ... -> init_cfs_rq(),
> init_tg_cfs_entry(), ...
>
> and the functions which are called when the first task is put into the
> task group are cpu_cgroup_attach() and cpu_cgroup_fork(), and they should
> trigger the initial setup of cfs_rq->avg.last_update_time.
Adding a test and the init of cfs_rq->avg.last_update_time in
cpu_cgroup_attach() and cpu_cgroup_fork(), in order to have an almost
up-to-date cfs_rq->avg.last_update_time at creation, will only solve
part of a wider issue that happens when moving a task to a cfs_rq that
has not been updated for a while (not only since its creation, in the
1st-enqueue case, but also since the last update of a blocked cfs_rq).
I have another pending patch for this kind of issue that I haven't sent yet.
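
To make the double attach from the changelog concrete, here is a
standalone userspace model (made-up names, not the kernel code):
last_update_time == 0 doubles as the "needs attach" flag, so a group
cfs_rq whose clock starts at 0 leaves that flag set on the first
entity it attaches:

#include <stdio.h>

struct avg {
	unsigned long long last_update_time;
	unsigned long load;
};

static void attach(struct avg *cfs, struct avg *se)
{
	/* Inherits 0 when the group cfs_rq was never stamped. */
	se->last_update_time = cfs->last_update_time;
	cfs->load += se->load;
}

static void enqueue(struct avg *cfs, struct avg *se)
{
	/* 0 is read as "freshly migrated, not attached yet". */
	if (!se->last_update_time)
		attach(cfs, se);
}

int main(void)
{
	struct avg cfs = { .last_update_time = 0 }; /* the buggy init */
	struct avg se  = { .load = 1024 };

	attach(&cfs, &se);  /* attach the 1st task to the new group */
	enqueue(&cfs, &se); /* enqueue attaches it a second time */
	printf("load = %lu\n", cfs.load); /* prints 2048: counted twice */
	return 0;
}

With cfs.last_update_time initialized to 1, as in the patch, the
second attach is skipped and the load is counted once.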
>
>>
>> Besides this point, I have to send a new version to set
>> load_last_update_time_copy for non-64-bit systems. Fengguang pointed
>> out the issue to me.
>
> OK.
>
> [...]
>>>
>>> + if (!cfs_rq->avg.last_update_time)
>>> + cfs_rq->avg.last_update_time = rq_clock_task(rq_of(cfs_rq));
>>> +
>>> /* Synchronize task with its cfs_rq */
>>> attach_entity_load_avg(cfs_rq, se);
>>
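
For reference, the check above would sit at the top of
attach_task_cfs_rq() in kernel/sched/fair.c, where the rq lock is held
and rq_clock_task() is therefore safe to read. A sketch (the real
function carries more state, e.g. the vruntime handling):

static void attach_task_cfs_rq(struct task_struct *p)
{
	struct sched_entity *se = &p->se;
	struct cfs_rq *cfs_rq = cfs_rq_of(se);

	/* First attach to this cfs_rq: stamp it with the current clock. */
	if (!cfs_rq->avg.last_update_time)
		cfs_rq->avg.last_update_time = rq_clock_task(rq_of(cfs_rq));

	/* Synchronize task with its cfs_rq */
	attach_entity_load_avg(cfs_rq, se);
}

The trade-off against the v2 patch is a conditional on every attach
versus a one-time sentinel at group creation; as the v2 changelog
notes, rq_clock_task() cannot be used in init_tg_cfs_entry() because
the lock is not held there.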