Date:	Wed, 10 Feb 2016 22:02:49 -0800
From:	Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>
To:	Doug Smythies <dsmythies@...us.net>,
	"'Rafael J. Wysocki'" <rjw@...ysocki.net>,
	'Linux PM list' <linux-pm@...r.kernel.org>,
	'Ingo Molnar' <mingo@...nel.org>
Cc:	'Linux Kernel Mailing List' <linux-kernel@...r.kernel.org>,
	'Peter Zijlstra' <peterz@...radead.org>,
	'Viresh Kumar' <viresh.kumar@...aro.org>,
	'Juri Lelli' <juri.lelli@....com>,
	'Steve Muckle' <steve.muckle@...aro.org>,
	'Thomas Gleixner' <tglx@...utronix.de>
Subject: Re: [PATCH v6 0/3] cpufreq: Replace timers with utilization update
 callbacks



On 02/10/2016 03:11 PM, Doug Smythies wrote:
> On 2016.02.10 07:17 Rafael J. Wysocki wrote:
>> On Friday, January 29, 2016 11:52:15 PM Rafael J. Wysocki wrote:
>>> The following patch series introduces a mechanism allowing the cpufreq core
>>> and "setpolicy" drivers to provide utilization update callbacks to be invoked
>>> by the scheduler on utilization changes.  Those callbacks can be used to run
>>> the sampling and frequency adjustments code (intel_pstate) or to schedule the
>>> execution of that code in process context (cpufreq core) instead of per-CPU
>>> deferrable timers used in cpufreq today (which Thomas complained about during
>>> the last Kernel Summit).
> This patch set solves a long-standing issue with the intel_pstate driver.
> The issue began with the introduction of the "duration" method (sketched
> below) for deciding whether the CPU had been idle for a long time,
> resulting in the target P-state being forced downwards. Often this was the
> correct action, but sometimes it was the wrong thing to do, because the CPU
> was actually very busy but just happened to be idle on jiffy boundaries
> (perhaps similar to what Steve Muckle was referring to on another branch of
> this thread).
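The "duration" heuristic in question works, as I understand it, roughly
like the simplified sketch below (illustrative only, not the actual
intel_pstate code; the names and the factor of 3 are my reading of the
driver and may differ):

#include <stdint.h>

/*
 * If far more time than the sample period has elapsed since the last
 * sample, assume the CPU was idle in the meantime and scale the
 * measured busyness down, which pulls the target P-state down with it.
 * This is the step that misfires when a busy CPU merely happens to
 * look idle at jiffy boundaries.
 */
static int32_t scale_busy_for_idle(int32_t core_busy,
                                   uint64_t duration_ns,
                                   uint64_t sample_rate_ns)
{
	if (duration_ns > 3 * sample_rate_ns)
		core_busy = (int32_t)((int64_t)core_busy *
		                      sample_rate_ns / duration_ns);
	return core_busy;
}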
>
> For an idle system, this patch set seems to change the maximum duration from
> 4 seconds to 0.5 seconds for most CPUs. However, when using v1 of patches 1
> and 2 of 3 together with v5 of patch 3 of 3, durations (the time between
> passes of the intel_pstate driver for a given CPU) of upwards of 120 seconds
> were sometimes observed. When v6 of patches 1, 2, and 3 of 3 was used, the
> maximum durations observed on an idle system were on the order of 500
> milliseconds for most CPUs, but CPU 6 sometimes went to 3.5 seconds and
> CPU 7 sometimes went to 4 seconds (small sample space; I'll consider running
> an overnight test for a much larger sample space). Note that 4 seconds is
> O.K., and is what it was before; I'm just noting it, is all.
>
> I have a bunch of graphs, if anyone wants to see the supporting data.
>
> My test computer has an older-model i7 (Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz).
Thanks, Doug. If you have specific workloads, please compare performance.

- Srinivas
