Message-ID: <20170501090026.kht6tsttn6dirbrw@hirez.programming.kicks-ass.net>
Date: Mon, 1 May 2017 11:00:26 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: mingo@...nel.org, linux-kernel@...r.kernel.org,
dietmar.eggemann@....com, Morten.Rasmussen@....com,
yuyang.du@...el.com, pjt@...gle.com, bsegall@...gle.com
Subject: Re: [PATCH v2] sched/fair: update scale invariance of PELT
On Sat, Apr 29, 2017 at 12:09:24AM +0200, Peter Zijlstra wrote:
> On Mon, Apr 10, 2017 at 11:18:29AM +0200, Vincent Guittot wrote:
> > +++ b/include/linux/sched.h
> > @@ -313,6 +313,7 @@ struct load_weight {
> > */
> > struct sched_avg {
> > u64 last_update_time;
> > + u64 stolen_idle_time;
> > u64 load_sum;
> > u32 util_sum;
> > u32 period_contrib;
>
> > + if (sa->util_sum < (LOAD_AVG_MAX * 1000)) {
> > + /*
> > + * Add the idle time stolen by running at lower compute
> > + * capacity
> > + */
> > + delta += sa->stolen_idle_time;
> > + }
> > + sa->stolen_idle_time = 0;
>
>
> So I was wondering if stolen_idle_time really needs to be a u64. Afaict
> we'll be at LOAD_AVG_MAX after LOAD_AVG_MAX_N periods, or LOAD_AVG_MAX_N
> * LOAD_AVG_PERIOD time, which ends up being 11040.
* 1024 of course, but still easily fits in u32.
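
For reference, a minimal userspace sketch of that arithmetic (constants
copied from kernel/sched/fair.c of this era; the 1024 factor is taken to
be the PELT period length in us; purely illustrative, not kernel code):

	#include <stdio.h>
	#include <stdint.h>

	#define LOAD_AVG_PERIOD	32	/* PELT half-life, in periods */
	#define LOAD_AVG_MAX_N	345	/* full periods to reach LOAD_AVG_MAX */

	int main(void)
	{
		/* 345 * 32 = 11040, times the 1024us period length */
		uint64_t worst = (uint64_t)LOAD_AVG_MAX_N * LOAD_AVG_PERIOD * 1024;

		printf("worst case: %llu us, u32 max: %llu\n",
		       (unsigned long long)worst,
		       (unsigned long long)UINT32_MAX);
		/* prints 11304960 < 4294967295, so u32 suffices */
		return 0;
	}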