Date:   Tue, 21 Jul 2020 09:25:36 -0700
From:   Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>
To:     Francisco Jerez <currojerez@...eup.net>,
        "Rafael J. Wysocki" <rafael@...nel.org>
Cc:     "Rafael J. Wysocki" <rjw@...ysocki.net>,
        Linux PM <linux-pm@...r.kernel.org>,
        Linux Documentation <linux-doc@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Giovanni Gherdovich <ggherdovich@...e.cz>,
        Doug Smythies <dsmythies@...us.net>
Subject: Re: [PATCH] cpufreq: intel_pstate: Implement passive mode with HWP enabled

On Mon, 2020-07-20 at 16:20 -0700, Francisco Jerez wrote:
> "Rafael J. Wysocki" <rafael@...nel.org> writes:
> 
> > On Fri, Jul 17, 2020 at 2:21 AM Francisco Jerez <
> > currojerez@...eup.net> wrote:
> > > "Rafael J. Wysocki" <rafael@...nel.org> writes:
> > > 
[...]

> > Overall, so far, I'm seeing a claim that the CPU subsystem can be
> > made to use less energy and do as much work as before (which is
> > what improving the energy-efficiency means in general) if the
> > maximum frequency of CPUs is limited in a clever way.
> > 
> > I'm failing to see what that clever way is, though.
> Hopefully the clarifications above help some.

To simplify:

Suppose I call numpy.multiply() to multiply two big arrays, and the
thread is pinned to a CPU. Say the CPU finishes the job in 10 ms while
running at a P-state of 0x20, but the same job could have been done in
10 ms at a P-state of 0x16. Then we are not energy efficient. To really
know where the bottleneck is, there are a number of perf counters to
look at; maybe the cache was the issue, and we would do better to raise
the uncore frequency a little instead. The simple APERF/MPERF counters
are not enough. Alternatively, we could characterize the workload at
different P-states and set limits accordingly.
I think this is not what you mean by energy efficiency with your
changes.

The way you are trying to improve "performance" is by having the caller
(a device driver) say how important its job at hand is. Suppose the
device driver offloads this calculation to some GPU and can wait up to
10 ms; then you want to tell the CPU to run slow. But if the P-state
driver at some moment observes that there is a risk of overshooting
that latency, it will immediately ask for a higher P-state. So you want
P-state limits based on the latency requirements of the caller. Since
the caller has more knowledge of the latency requirement, this allows
other devices sharing the power budget to get more or less power, and
it improves overall energy efficiency because the combined performance
of the system is improved.
Is this correct?
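If that is roughly right, one existing vehicle for such a limit is the
frequency QoS interface cpufreq already provides. A rough sketch of
what a driver along those lines might do (the structure, the function
names, and the 800000 kHz cap are all made up for illustration, not
taken from your series):

#include <linux/cpufreq.h>
#include <linux/errno.h>
#include <linux/pm_qos.h>

/* Hypothetical per-device state; only the QoS request matters here. */
struct my_offload_dev {
	struct freq_qos_request cap;
	bool cap_active;
};

/*
 * Before handing the work to the GPU: cap the maximum frequency of the
 * CPU the caller runs on, so it does not race to a high P-state just
 * to wait on the offload.  800000 kHz is an arbitrary example value.
 */
static int my_dev_begin_offload(struct my_offload_dev *dev, unsigned int cpu)
{
	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
	int ret;

	if (!policy)
		return -ENODEV;

	ret = freq_qos_add_request(&policy->constraints, &dev->cap,
				   FREQ_QOS_MAX, 800000);
	cpufreq_cpu_put(policy);
	if (ret < 0)
		return ret;

	dev->cap_active = true;
	return 0;
}

/* When the GPU completes (or the 10 ms budget nears), drop the cap. */
static void my_dev_end_offload(struct my_offload_dev *dev)
{
	if (dev->cap_active) {
		freq_qos_remove_request(&dev->cap);
		dev->cap_active = false;
	}
}

As far as I can tell this is the same constraint mechanism
scaling_max_freq goes through, so it would compose with user-space
limits rather than fight them.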