Message-ID: <CAKfTPtAVODuDsvFsGpdnxQjGffOa=WKBEcnqo-vGkmaMs=UcAQ@mail.gmail.com>
Date: Tue, 23 Oct 2018 14:15:46 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...nel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <Morten.Rasmussen@....com>,
Patrick Bellasi <patrick.bellasi@....com>,
Paul Turner <pjt@...gle.com>, Ben Segall <bsegall@...gle.com>,
Thara Gopinath <thara.gopinath@...aro.org>
Subject: Re: [PATCH v4 2/2] sched/fair: update scale invariance of PELT
On Tue, 23 Oct 2018 at 12:01, Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Fri, Oct 19, 2018 at 06:17:51PM +0200, Vincent Guittot wrote:
> > In order to achieve this time scaling, a new clock_pelt is created per rq.
>
>
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 3990818..d987f50 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -848,6 +848,8 @@ struct rq {
> > unsigned int clock_update_flags;
> > u64 clock;
> > u64 clock_task;
> > + u64 clock_pelt;
> > + unsigned long lost_idle_time;
>
> Very clever that. Seems to work out nicely.

Thanks

> We should maybe look at ensuring all these clock fields are indeed on
> the same cacheline.

Yes, good point.
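For readers following along, the time-scaling idea behind the new clock_pelt field can be sketched roughly as follows. This is a simplified, illustrative model, not the patch's actual code: the struct, the update function, and the capacity arguments are hypothetical stand-ins, though the 1024 capacity scale and the cap_scale-style shift mirror kernel conventions. The point is that PELT time advances more slowly when the CPU runs below its maximum frequency or capacity, so utilization signals stay invariant across operating points.

```c
#include <assert.h>
#include <stdint.h>

#define SCHED_CAPACITY_SCALE 1024UL

/* Hypothetical, simplified run-queue clock state. */
struct rq_sketch {
	uint64_t clock_task; /* unscaled task clock, ns */
	uint64_t clock_pelt; /* scaled clock consumed by PELT, ns */
};

/* Scale a time delta by a capacity value in [0, SCHED_CAPACITY_SCALE]. */
static inline uint64_t cap_scale(uint64_t delta, unsigned long cap)
{
	return (delta * cap) >> 10; /* divide by SCHED_CAPACITY_SCALE */
}

/*
 * Advance clock_pelt to a new clock_task value, scaling the elapsed
 * delta by the current frequency capacity and the CPU's max capacity
 * (both expressed against SCHED_CAPACITY_SCALE). Running at half
 * frequency therefore accrues only half the "PELT time".
 */
static void update_clock_pelt(struct rq_sketch *rq, uint64_t now,
			      unsigned long freq_cap, unsigned long cpu_cap)
{
	uint64_t delta = now - rq->clock_task;

	rq->clock_task = now;
	delta = cap_scale(delta, freq_cap); /* frequency invariance */
	delta = cap_scale(delta, cpu_cap);  /* CPU capacity invariance */
	rq->clock_pelt += delta;
}
```

With both capacities at 1024, clock_pelt tracks clock_task exactly; at half frequency, 1000 ns of wall time adds only 500 ns of PELT time.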