Message-ID: <CAKfTPtCFop-mxb206HkFva4vOEWpc4r-DVfEkVMKPZf8C1V-eA@mail.gmail.com>
Date: Tue, 30 Oct 2018 11:50:05 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: pkondeti@...eaurora.org
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <Morten.Rasmussen@....com>,
Patrick Bellasi <patrick.bellasi@....com>,
Paul Turner <pjt@...gle.com>, Ben Segall <bsegall@...gle.com>,
Thara Gopinath <thara.gopinath@...aro.org>
Subject: Re: [PATCH v5 2/2] sched/fair: update scale invariance of PELT
Hi Pavan,
On Tue, 30 Oct 2018 at 10:19, Pavan Kondeti <pkondeti@...eaurora.org> wrote:
>
> Hi Vincent,
>
> On Fri, Oct 26, 2018 at 06:11:43PM +0200, Vincent Guittot wrote:
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 6806c27..7a69673 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -674,9 +674,8 @@ static u64 sched_vslice(struct cfs_rq *cfs_rq, struct sched_entity *se)
> > return calc_delta_fair(sched_slice(cfs_rq, se), se);
> > }
> >
> > -#ifdef CONFIG_SMP
> > #include "pelt.h"
> > -#include "sched-pelt.h"
> > +#ifdef CONFIG_SMP
> >
> > static int select_idle_sibling(struct task_struct *p, int prev_cpu, int cpu);
> > static unsigned long task_h_load(struct task_struct *p);
> > @@ -764,7 +763,7 @@ void post_init_entity_util_avg(struct sched_entity *se)
> > * such that the next switched_to_fair() has the
> > * expected state.
> > */
> > - se->avg.last_update_time = cfs_rq_clock_task(cfs_rq);
> > + se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
> > return;
> > }
> > }
> > @@ -3466,7 +3465,7 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
> > /* Update task and its cfs_rq load average */
> > static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> > {
> > - u64 now = cfs_rq_clock_task(cfs_rq);
> > + u64 now = cfs_rq_clock_pelt(cfs_rq);
> > struct rq *rq = rq_of(cfs_rq);
> > int cpu = cpu_of(rq);
> > int decayed;
> > @@ -6694,6 +6693,12 @@ done: __maybe_unused;
> > if (new_tasks > 0)
> > goto again;
> >
> > + /*
> > + * rq is about to be idle, check if we need to update the
> > + * lost_idle_time of clock_pelt
> > + */
> > + update_idle_rq_clock_pelt(rq);
> > +
> > return NULL;
> > }
>
> Do you think it is better to call this from pick_next_task_idle()? I don't see
> any functional difference, but it may be easier to follow.
Yes, there is no functional difference. I put it there for simplicity:
there is no PELT-related code in idle.c, so this keeps things
contained.
>
> Thanks,
> Pavan
> --
> Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
>