Message-ID: <CAKfTPtDbrt1phLaiVd71_gk=RYKJmyYwbNi4_UW9f9WFEzGWCw@mail.gmail.com>
Date: Wed, 13 May 2020 18:02:47 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Tao Zhou <ouwen210@...mail.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
linux-kernel <linux-kernel@...r.kernel.org>,
Phil Auld <pauld@...hat.com>,
Pavan Kondeti <pkondeti@...eaurora.org>
Subject: Re: [PATCH v3] sched/fair: enqueue_task_fair optimization
On Wed, 13 May 2020 at 17:51, Tao Zhou <ouwen210@...mail.com> wrote:
>
> Hi Vincent,
>
> Sorry for the duplicate.
>
> On Wed, May 13, 2020 at 03:55:02PM +0200, Vincent Guittot wrote:
> > enqueue_task_fair jumps to the enqueue_throttle label when cfs_rq_of(se)
> > is throttled, which means that se can't be NULL in that case, so we can
> > move the label after the if (!se) statement. Furthermore, the latter can
> > be removed because se is always NULL when reaching this point.
> >
> > Reviewed-by: Phil Auld <pauld@...hat.com>
> > Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> > ---
> >
> > v3 changes:
> > - updated commit message
> > - removed an extra }
> >
> > kernel/sched/fair.c | 39 +++++++++++++++++++--------------------
> > 1 file changed, 19 insertions(+), 20 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 9a58874ef104..4e586863827b 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -5512,28 +5512,27 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> > list_add_leaf_cfs_rq(cfs_rq);
> > }
> >
> > -enqueue_throttle:
> > - if (!se) {
> > - add_nr_running(rq, 1);
> > - /*
> > - * Since new tasks are assigned an initial util_avg equal to
> > - * half of the spare capacity of their CPU, tiny tasks have the
> > - * ability to cross the overutilized threshold, which will
> > - * result in the load balancer ruining all the task placement
> > - * done by EAS. As a way to mitigate that effect, do not account
> > - * for the first enqueue operation of new tasks during the
> > - * overutilized flag detection.
> > - *
> > - * A better way of solving this problem would be to wait for
> > - * the PELT signals of tasks to converge before taking them
> > - * into account, but that is not straightforward to implement,
> > - * and the following generally works well enough in practice.
> > - */
> > - if (flags & ENQUEUE_WAKEUP)
> > - update_overutilized_status(rq);
> > + /* At this point se is NULL and we are at root level*/
>
> Same as in the other patch: the comment above is missing a blank before
> the closing */.
>
> When applying this on top of v3 of 'sched/fair: Fix enqueue_task_fair
> warning some more', I needed to edit it by hand to make it apply, though
> I don't know why. I also tried to pull some of this out into a function,
> but that is not done yet.
I forgot to mention that this patch was done on top of Phil's patch
(20200512135222.GC2201@...ien.usersys.redhat.com) and must be applied on
top of it.
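
To illustrate the invariant the commit message relies on, here is a toy
standalone model (not the kernel code) of the control flow: we walk up
the sched_entity hierarchy and jump to the label on throttle. se is
non-NULL at the goto and NULL only once the walk completes, so an
if (!se) check after the moved label would be redundant:

    #include <stdio.h>
    #include <stddef.h>

    /* Toy stand-in for the kernel's sched_entity hierarchy. */
    struct sched_entity {
            struct sched_entity *parent;
            int throttled;
    };

    static void enqueue(struct sched_entity *se)
    {
            for (; se; se = se->parent) {
                    if (se->throttled)
                            goto enqueue_throttle;  /* se != NULL here */
            }

            /* Walk completed: se == NULL, no if (!se) check needed. */
            printf("root level reached, se == NULL\n");

    enqueue_throttle:
            /* Common path falls through the label, as in the patch. */
            if (se)
                    printf("throttled mid-walk, se != NULL\n");
    }

    int main(void)
    {
            struct sched_entity root = { NULL, 0 };
            struct sched_entity child = { &root, 0 };

            enqueue(&child);        /* completes the walk */
            root.throttled = 1;
            enqueue(&child);        /* hits the goto with se != NULL */
            return 0;
    }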
>
> Thanks,
> Tao
>
> > + add_nr_running(rq, 1);
> >
> > - }
> > + /*
> > + * Since new tasks are assigned an initial util_avg equal to
> > + * half of the spare capacity of their CPU, tiny tasks have the
> > + * ability to cross the overutilized threshold, which will
> > + * result in the load balancer ruining all the task placement
> > + * done by EAS. As a way to mitigate that effect, do not account
> > + * for the first enqueue operation of new tasks during the
> > + * overutilized flag detection.
> > + *
> > + * A better way of solving this problem would be to wait for
> > + * the PELT signals of tasks to converge before taking them
> > + * into account, but that is not straightforward to implement,
> > + * and the following generally works well enough in practice.
> > + */
> > + if (flags & ENQUEUE_WAKEUP)
> > + update_overutilized_status(rq);
> >
> > +enqueue_throttle:
> > if (cfs_bandwidth_used()) {
> > /*
> > * When bandwidth control is enabled; the cfs_rq_throttled()
> > --
> > 2.17.1