Message-ID: <20180622141159.GN2494@hirez.programming.kicks-ass.net>
Date: Fri, 22 Jun 2018 16:11:59 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Quentin Perret <quentin.perret@....com>,
Ingo Molnar <mingo@...nel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <Morten.Rasmussen@....com>,
viresh kumar <viresh.kumar@...aro.org>,
Valentin Schneider <valentin.schneider@....com>,
Patrick Bellasi <patrick.bellasi@....com>,
Joel Fernandes <joel@...lfernandes.org>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Ingo Molnar <mingo@...hat.com>
Subject: Re: [PATCH v6 04/11] cpufreq/schedutil: use rt utilization tracking
On Fri, Jun 22, 2018 at 03:54:24PM +0200, Vincent Guittot wrote:
> On Fri, 22 Jun 2018 at 15:26, Peter Zijlstra <peterz@...radead.org> wrote:
> > $ bc -l
> > define f (u,r,n) { return u + ((u/(1-r)) - u) * (u/(1-r))^n; }
> > f(.2,.7,0)
> > .66666666666666666666
> > f(.2,.7,2)
> > .40740740740740740739
> > f(.2,.7,4)
> > .29218106995884773661
> >
> > So at 10% idle time, we've only inflated what should be 20% to 40%, that
> > is entirely reasonable I think. The linear case gave us 66%. But feel
> > free to increase @n if you feel that helps, 4 is only one mult more than
> > 2 and gets us down to 29%.
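(For illustration, sweeping the same f() over n shows how quickly the
estimate decays from the linear u/(1-r) towards the raw u; a quick
sketch with the same u=.2, r=.7 as above, approximate values noted in
the comment:)

$ bc -l
define f (u,r,n) { return u + ((u/(1-r)) - u) * (u/(1-r))^n; }
/* n=0 is the linear u/(1-r); larger n converges towards u=.2 */
for (n = 0; n <= 8; n = n + 2) f(.2,.7,n)
/* prints roughly .6667, .4074, .2922, .2410, .2182 */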
>
> I'm a bit lost with your example.
> u = 0.2 (for cfs) and r = 0.7 (let's say for rt) in your example, so idle is 0.1.
>
> For the rt task, we run 0.7 of the time at f=1, and then we select
> f=0.4 to run the cfs task with u=0.2. But u is the utilization at
> f=1, which means it takes 250% of the normal time to execute at
> f=0.4, i.e. 0.5 of wall time instead of 0.2 at f=1, so we run out of
> time. In order to have enough time to run both r and u, we must run
> cfs at least at f = 0.2/(1-0.7) = 0.666.
Argh.. that is n=0. So clearly I went off the rails somewhere.
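
(Indeed, algebraically f(u,r,0) = u + (u/(1-r) - u) * 1 = u/(1-r),
which is exactly the linear .666 above; and plugging the n=2 frequency
into the time budget shows it comes up short, since rt needs .7 of
wall time at f=1 and cfs needs .2/.407 of wall time. A quick check in
the same bc session:)

$ bc -l
define f (u,r,n) { return u + ((u/(1-r)) - u) * (u/(1-r))^n; }
/* n=0 collapses to the linear estimate: the difference is exactly 0 */
f(.2,.7,0) - .2/(1-.7)
/* time budget at the n=2 frequency: .7 for rt plus .2/.407 for cfs,
   roughly 1.19 > 1 */
.7 + .2/f(.2,.7,2)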