Message-ID: <878rwdse9o.fsf@riseup.net>
Date: Tue, 21 Dec 2021 15:56:51 -0800
From: Francisco Jerez <currojerez@...eup.net>
To: "Rafael J. Wysocki" <rafael@...nel.org>
Cc: Julia Lawall <julia.lawall@...ia.fr>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
Len Brown <lenb@...nel.org>,
Viresh Kumar <viresh.kumar@...aro.org>,
Linux PM <linux-pm@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>
Subject: Re: cpufreq: intel_pstate: map utilization into the pstate range

"Rafael J. Wysocki" <rafael@...nel.org> writes:
> On Sun, Dec 19, 2021 at 11:10 PM Francisco Jerez <currojerez@...eup.net> wrote:
>>
>> Julia Lawall <julia.lawall@...ia.fr> writes:
>>
>> > On Sat, 18 Dec 2021, Francisco Jerez wrote:
>
> [cut]
>
>> > I did some experiments with forcing different frequencies. I haven't
>> > finished processing the results, but I notice that as the frequency goes
>> > up, the utilization (specifically the value of
>> > map_util_perf(sg_cpu->util) at the point of the call to
>> > cpufreq_driver_adjust_perf in sugov_update_single_perf) goes up as well.
>> > Is this expected?
>> >
>>
>> Actually, it *is* expected based on our previous hypothesis that these
>> workloads are largely latency-bound: In cases where a given burst of CPU
>> work is not parallelizable with any other tasks the thread needs to
>> complete subsequently, its overall runtime will decrease monotonically
>> with increasing frequency, therefore the number of instructions executed
>> per unit of time will increase monotonically with increasing frequency,
>> and with it its frequency-invariant utilization.
>
> But shouldn't these two effects cancel each other if the
> frequency-invariance mechanism works well?

No, they won't cancel each other out under our hypothesis that these
workloads are largely latency-bound: the performance of the application
increases steadily with increasing frequency, and with it the amount of
computational work it performs per unit of time on average, and
therefore its frequency-invariant utilization as well.

If you're not convinced by my argument, consider a simple latency-bound
application that repeatedly blocks for a time t0 on some external agent
and then requires the execution of n1 CPU clock cycles which cannot be
parallelized with any of the operations occurring during that t0 idle
time.  Assuming that the CPU frequency is f, the runtime of a single
cycle of that application will be:

  T = t0 + n1/f

Its frequency-invariant utilization will approach on the average:

  u = (T - t0)/T * f/f1
    = (n1/f) / (t0 + n1/f) * f/f1
    = (n1/f1) / (t0 + n1/f)
with f1 a constant with units of frequency.  As you can see, the
denominator of the last expression above decreases with increasing
frequency, so the frequency-invariant utilization increases, as
expected for an application whose performance is improving.
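
To make the trend concrete, here is a minimal numerical sketch of the
model above.  The values of t0, n1 and f1 are made up for illustration
and don't correspond to any measured workload:

```python
# Sketch of the latency-bound model: u(f) = (n1/f1) / (t0 + n1/f).
# t0, n1 and f1 below are illustrative values, not measurements.

def freq_invariant_util(f, t0=1e-3, n1=2e6, f1=4e9):
    """Average frequency-invariant utilization of a task that blocks
    for t0 seconds, then executes n1 CPU cycles at frequency f (Hz)."""
    period = t0 + n1 / f   # T = t0 + n1/f, one full cycle of the task
    busy = n1 / f          # time spent actually executing
    return (busy / period) * (f / f1)

# Utilization grows monotonically with frequency for this model, even
# though the invariance scaling factor f/f1 is applied.
utils = [freq_invariant_util(f) for f in (1e9, 2e9, 3e9, 4e9)]
assert all(a < b for a, b in zip(utils, utils[1:]))
```

Running the task faster shrinks the n1/f term in the denominator while
the n1/f1 numerator stays fixed, which is exactly why the reported
utilization climbs as the frequency is forced up.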