Message-ID: <20160606193848.GF8105@intel.com>
Date: Tue, 7 Jun 2016 03:38:48 +0800
From: Yuyang Du <yuyang.du@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...nel.org, linux-kernel@...r.kernel.org, bsegall@...gle.com,
pjt@...gle.com, morten.rasmussen@....com,
vincent.guittot@...aro.org, dietmar.eggemann@....com
Subject: Re: [PATCH v4 2/5] sched/fair: Fix attaching task sched avgs twice
when switching to fair or changing task group
On Mon, Jun 06, 2016 at 11:54:50AM +0200, Peter Zijlstra wrote:
> > The task groups are changed like this:
> >
> >         if (queued)
> >                 dequeue_task()
> >         task_move_group()
> >         if (queued)
> >                 enqueue_task()
> >
> > Unlike the switch to fair class case, if the task is on_rq, it will be
> > enqueued after we move task groups, so the simplest solution is to reset
> > the task's last_update_time when we do task_move_group(), but not to
> > attach sched avgs in task_move_group(), and then let enqueue_task() do
> > the sched avgs attachment.
>
> So this patch completely removes the detach->attach aging you moved
> around in the previous patch -- leading me to wonder what the purpose of
> the previous patch was.
Basically, they address different issues, and should not be conflated.
> Also, this Changelog completely fails to mention this fact, nor does it
> explain why this is 'right'.
I should have explained this in the changelog. It is "right" because, when a
task switches to fair, most likely "we could have just aged the entire load
away", as the XXX comment says. That aside, there is a stretch of time with
no record at all; whether we age the avgs across that gap or leave them as
they were, neither choice strikes me as clearly better than the other. I will
add a comment in the code explaining this. What do you think?
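(To put rough numbers on the "aged away" point -- an illustrative userspace
calculation only, using PELT's ~32ms half-life, not kernel code:)

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
                unsigned long load = 1024;      /* load_avg when the task left fair */
                int ms;

                for (ms = 32; ms <= 320; ms += 96)
                        printf("after %3d ms: ~%.0f\n",
                               ms, load * pow(0.5, ms / 32.0));
                return 0;
        }

That prints roughly 512, 64, 8, 1: after a few hundred ms there is next to
nothing left, so aging or not aging across the gap makes little difference.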
> > +/* Virtually synchronize task with its cfs_rq */
>
> I don't feel this comment actually enlightens the function much.
It synchronizes the task with its cfs_rq without updating the cfs_rq itself,
hence "virtually" :)
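In other words, something along these lines (a simplified sketch; the helper
name and exact callees here are illustrative, not necessarily what the patch
uses):

        /*
         * Decay the task's sched avgs up to the cfs_rq's last update time,
         * without updating the cfs_rq's own avgs -- hence "virtually".
         */
        static void sync_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
        {
                u64 last_update_time = cfs_rq_last_update_time(cfs_rq);

                __update_load_avg(last_update_time, cpu_of(rq_of(cfs_rq)),
                                  &se->avg, 0, 0, NULL);
        }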
> > @@ -8372,9 +8363,6 @@ static void attach_task_cfs_rq(struct task_struct *p)
> > se->depth = se->parent ? se->parent->depth + 1 : 0;
> > #endif
> >
> > - /* Synchronize task with its cfs_rq */
> > - attach_entity_load_avg(cfs_rq, se);
> > -
> > if (!vruntime_normalized(p))
> > se->vruntime += cfs_rq->min_vruntime;
> > }
>
> You leave attach/detach asymmetric and not a comment in sight explaining
> why.
It is asymmetric because the attach is now done uniformly at enqueue time,
while the detach stays in detach_task_cfs_rq(). I will add a comment
explaining this; it also bears on the code comments below and on your
remarks about them.
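Concretely, the enqueue side ends up doing roughly this (a sketch of the
idea, not the literal patch; the helper on the already-attached path may
differ):

        /*
         * A last_update_time of 0 (set by the reset) means "not attached
         * yet", so the attach runs here exactly once; otherwise just
         * update the avgs as usual.
         */
        if (!se->avg.last_update_time)
                attach_entity_load_avg(cfs_rq, se);
        else
                update_load_avg(se, 0);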
> > @@ -8382,16 +8370,18 @@ static void attach_task_cfs_rq(struct task_struct *p)
> > static void switched_from_fair(struct rq *rq, struct task_struct *p)
> > {
> > detach_task_cfs_rq(p);
> > + reset_task_last_update_time(p);
> > + /*
> > + * If we change back to fair class, we will attach the sched
> > + * avgs when we are enqueued, which will be done only once. We
> > + * won't have the chance to consistently age the avgs before
> > + * attaching them, so we have to continue with the last updated
> > + * sched avgs when we were detached.
> > + */
>
> This comment needs improvement; it confuses.
>
> > @@ -8444,6 +8434,11 @@ static void task_move_group_fair(struct task_struct *p)
> > detach_task_cfs_rq(p);
> > set_task_rq(p, task_cpu(p));
> > attach_task_cfs_rq(p);
> > + /*
> > + * This assures we will attach the sched avgs when we are enqueued,
>
> "ensures" ? Also, more confusion.
>
> > + * which will be done only once.
> > + */
> > + reset_task_last_update_time(p);
> > }
>