Message-ID: <CAJWu+ookm9iEupdS+ZLU_1KwcqFMyqV5HAC4mpyCgdkcBy-_Mg@mail.gmail.com>
Date: Mon, 10 Apr 2017 18:57:17 -0700
From: Joel Fernandes <joelaf@...gle.com>
To: "Rafael J. Wysocki" <rafael@...nel.org>
Cc: "Rafael J. Wysocki" <rjw@...ysocki.net>,
Linux PM <linux-pm@...r.kernel.org>,
Juri Lelli <juri.lelli@....com>,
LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
Viresh Kumar <viresh.kumar@...aro.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Patrick Bellasi <patrick.bellasi@....com>,
Morten Rasmussen <morten.rasmussen@....com>
Subject: Re: [RFC/RFT][PATCH 2/2] cpufreq: schedutil: Utilization aggregation
On Mon, Apr 10, 2017 at 1:59 PM, Rafael J. Wysocki <rafael@...nel.org> wrote:
[..]
>>> + sg_cpu->util = cfs_util;
>>> + sg_cpu->max = cfs_max;
>>> + }
>>> }
>
>
> Well, that's the idea. :-)
>
> During the discussion at the OSPM-summit we concluded that discarding
> all of the utilization changes between the points at which frequency
> updates actually happened was not a good idea, so they needed to be
> aggregated somehow.
>
> There are a few ways to aggregate them, but the most straightforward
> one (and one which actually makes sense) is to take the maximum as the
> aggregate value.
>
> Of course, this means that we skew things towards performance here,
> but I'm not worried that much. :-)
Does this increase the chance of going idle at a higher frequency?
Say in the last rate-limit window we have a high request followed by
a low request. After the window closes, this algorithm ignores the
low request, takes the higher-valued one, and we then enter idle.
Wouldn't we then be idling at a higher frequency? I guess if you
enter "cluster-idle" this probably isn't a big deal (as on the
ARM64 platforms I am working on), but I wasn't sure how expensive
entering C-states at a higher frequency is on Intel platforms, or
whether it is even a concern. :-D
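
To make that scenario concrete, here is a minimal standalone sketch
(not the actual schedutil code; the struct, the function and the
numbers are made up for illustration) of max-aggregation keeping the
higher of the two requests within one window:

/*
 * Illustrative only: aggregate utilization updates within a rate-limit
 * window by keeping the largest scaled request seen so far.
 */
#include <stdio.h>

struct sg_cpu_sketch {
	unsigned long util;	/* aggregated (max) utilization so far */
	unsigned long max;	/* capacity that 'util' is scaled against */
};

/* Called for every utilization update within the rate-limit window. */
static void aggregate_util(struct sg_cpu_sketch *sg, unsigned long util,
			   unsigned long max)
{
	/* Cross-multiply so requests with different capacities compare. */
	if (sg->max == 0 || sg->util * max < util * sg->max) {
		sg->util = util;
		sg->max = max;
	}
}

int main(void)
{
	struct sg_cpu_sketch sg = { 0, 0 };

	/* High request followed by a low one inside the same window... */
	aggregate_util(&sg, 800, 1024);
	aggregate_util(&sg, 100, 1024);

	/* ...the low request is effectively ignored: 800/1024 wins. */
	printf("aggregated util/max = %lu/%lu\n", sg.util, sg.max);
	return 0;
}

So whatever frequency 800/1024 maps to is what the CPU would still be
running at when it goes idle right after the window closes.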
Thanks,
Joel