Message-ID: <CAKfTPtD6TpaJoz37Xv2_1Cc8ij_XGFjDTwA+TvN3ddiASkYc4g@mail.gmail.com>
Date:   Tue, 7 Jun 2022 08:57:42 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Vincent Donnefort <vdonnefort@...gle.com>
Cc:     Dietmar Eggemann <dietmar.eggemann@....com>, peterz@...radead.org,
        mingo@...hat.com, linux-kernel@...r.kernel.org,
        morten.rasmussen@....com, chris.redpath@....com,
        qperret@...gle.com, tao.zhou@...ux.dev, kernel-team@...roid.com
Subject: Re: [PATCH v9 2/7] sched/fair: Decay task PELT values during wakeup migration

On Mon, 6 Jun 2022 at 11:31, Vincent Donnefort <vdonnefort@...gle.com> wrote:
>
> [...]
> > > @@ -8114,6 +8212,10 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
> > >             if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq)) {
> > >                     update_tg_load_avg(cfs_rq);
> > >
> > > +                   /* sync clock_pelt_idle with last update */
> >
> > update_idle_cfs_rq_clock_pelt() syncs cfs_rq->throttled_pelt_idle with
> > cfs_rq->throttled_clock_pelt_time. Not sure what `clock_pelt_idle` and
> > `last update` here mean?
>
>
> Indeed, this comment is not helpful at all. What matters here is that the cfs_rq
> is idle and we need to update the throttled_pelt_idle accordingly.
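
(For reference, a minimal sketch of such a sync, assuming the
u64_u32_store() helper introduced earlier in the series; illustrative
only, not the exact hunk:

	static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
	{
		/* cfs_rq is idle: snapshot its throttled time for later readers */
		u64_u32_store(cfs_rq->throttled_pelt_idle,
			      cfs_rq->throttled_clock_pelt_time);
	}
)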
>
> >
> > [...]
> >
> > > +/* The rq is idle, we can sync to clock_task */
> > > +static inline void _update_idle_rq_clock_pelt(struct rq *rq)
> > > +{
> > > +   rq->clock_pelt  = rq_clock_task(rq);
> > > +
> > > +   u64_u32_store(rq->enter_idle, rq_clock(rq));
> > > +   /* Paired with smp_rmb in migrate_se_pelt_lag */
> >
> > minor:
> >
> > s/migrate_se_pelt_lag/migrate_se_pelt_lag()
> >
> > [...]
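
(As an aside on that pairing: it is the usual smp_wmb()/smp_rmb()
publish/consume pattern. A generic illustration with made-up names (s,
now, now_pelt), not the patch code:

	/* Writer: publish the two related snapshots, in order */
	WRITE_ONCE(s->clock_snap, now);
	smp_wmb();			/* order the two stores ... */
	WRITE_ONCE(s->clock_pelt_snap, now_pelt);

	/* Reader: load them in the reverse order */
	pelt = READ_ONCE(s->clock_pelt_snap);
	smp_rmb();			/* ... paired with the smp_wmb() above */
	clk  = READ_ONCE(s->clock_snap);

A reader that observes the new clock_pelt_snap is then guaranteed to
also observe the new clock_snap.)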
> >
> > > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > > index bf4a0ec98678..97bc26e5c8af 100644
> > > --- a/kernel/sched/sched.h
> > > +++ b/kernel/sched/sched.h
> > > @@ -648,6 +648,10 @@ struct cfs_rq {
> > >     int                     runtime_enabled;
> > >     s64                     runtime_remaining;
> > >
> > > +   u64                     throttled_pelt_idle;
> > > +#ifndef CONFIG_64BIT
> > > +   u64                     throttled_pelt_idle_copy;
> > > +#endif
> > >     u64                     throttled_clock;
> > >     u64                     throttled_clock_pelt;
> > >     u64                     throttled_clock_pelt_time;
> > > @@ -1020,6 +1024,12 @@ struct rq {
> > >     u64                     clock_task ____cacheline_aligned;
> > >     u64                     clock_pelt;
> > >     unsigned long           lost_idle_time;
> > > +   u64                     clock_pelt_idle;
> > > +   u64                     enter_idle;
> > > +#ifndef CONFIG_64BIT
> > > +   u64                     clock_pelt_idle_copy;
> > > +   u64                     enter_idle_copy;
> > > +#endif
> > >
> > >     atomic_t                nr_iowait;
> >
> > `throttled_pelt_idle`, `clock_pelt_idle` and `enter_idle` are clock
> > snapshots taken when the cfs_rq (resp. the rq) goes idle. But the naming
> > does not really show this relation, which makes reading those equations
> > rather difficult.
> >
> > What about something like `throttled_clock_pelt_time_enter_idle`,
> > `clock_pelt_enter_idle`, `clock_enter_idle`? The first one especially is
> > too long, but names which show that those are clock snapshots taken when
> > entering idle would IMHO improve readability in migrate_se_pelt_lag().
>
> What if I drop the "enter"?
>
>  clock_idle;
>  clock_pelt_idle;
>  throttled_clock_pelt_time_idle;

and you can even remove the _time, i.e. throttled_clock_pelt_idle
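
Applied to the hunks above, that naming would read (just as an
illustration of the names, not a new diff):

	/* struct cfs_rq */
	u64			throttled_clock_pelt_idle;
#ifndef CONFIG_64BIT
	u64			throttled_clock_pelt_idle_copy;
#endif

	/* struct rq */
	u64			clock_pelt_idle;
	u64			clock_idle;
#ifndef CONFIG_64BIT
	u64			clock_pelt_idle_copy;
	u64			clock_idle_copy;
#endif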

>
> >
> > Besides these small issues:
> >
> > Reviewed-by: Dietmar Eggemann <dietmar.eggemann@....com>
>
> Thanks!
