Message-ID: <20160204082538.GB3469@vireshk>
Date:	Thu, 4 Feb 2016 13:55:38 +0530
From:	Viresh Kumar <viresh.kumar@...aro.org>
To:	ego@...ux.vnet.ibm.com
Cc:	"Rafael J. Wysocki" <rjw@...ysocki.net>,
	Linux PM list <linux-pm@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
	Juri Lelli <juri.lelli@....com>,
	Steve Muckle <steve.muckle@...aro.org>,
	Saravana Kannan <skannan@...eaurora.org>
Subject: Re: [PATCH 3/11] cpufreq: governor: Use common global_dbs_data
 pointer

On 04-02-16, 13:44, Gautham R Shenoy wrote:
> In a two-policy system, to run ondemand on one and conservative on the other,
> won't the driver have CPUFREQ_HAVE_GOVERNOR_PER_POLICY set?

No.

CPUFREQ_HAVE_GOVERNOR_PER_POLICY is not about the ability to use a
separate governor type for each policy; that is always available to
the user.

CPUFREQ_HAVE_GOVERNOR_PER_POLICY was initially added for platforms
with different types of CPUs on the same chip, though other platforms
can benefit from it as well.

For example, on a 4-core ARM big.LITTLE platform, we will have:
- 2 A7s (low performance / low power)
- 2 A15s (high performance / high power)

The A7s share one policy and the A15s share another.

Without CPUFREQ_HAVE_GOVERNOR_PER_POLICY, if ondemand is selected for
both policies, then we used to get a single directory (and a single
set of tunables) at /sys/devices/system/cpu/cpufreq/ondemand/.

That forced us to use the same tunables, like the sampling rate, for
both policies.

But because the CPUs were so different, we really wanted independent
control.

So we added CPUFREQ_HAVE_GOVERNOR_PER_POLICY, so that in such cases
each policy gets its own set of tunables for the same governor type.
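
The placement of the tunables directory follows from how the core picks
the parent kobject for the governor's attribute group. Roughly (I am
paraphrasing drivers/cpufreq/cpufreq.c from memory, so double-check
against your tree; cpufreq_driver and cpufreq_global_kobject are the
core's own globals):

bool have_governor_per_policy(void)
{
	return !!(cpufreq_driver->flags & CPUFREQ_HAVE_GOVERNOR_PER_POLICY);
}

struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy)
{
	if (have_governor_per_policy())
		return &policy->kobj;		/* tunables live under each policy */
	else
		return cpufreq_global_kobject;	/* single global tunables dir */
}

So with the flag set, ondemand's tunables sit under each policy's
directory instead of the single global one.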

Hope that makes it clear.
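
And on the driver side, opting in is just a flag in struct
cpufreq_driver. A purely illustrative sketch of a hypothetical
big.LITTLE-style driver (everything prefixed "example_" is made up,
it is not from this series):

#include <linux/cpufreq.h>
#include <linux/module.h>

static int example_bl_init(struct cpufreq_policy *policy)
{
	/* A real driver fills in policy->cpus, the frequency table, etc. */
	return -ENODEV;
}

static int example_bl_target_index(struct cpufreq_policy *policy,
				   unsigned int index)
{
	/* A real driver programs the cluster's clock here. */
	return 0;
}

static struct cpufreq_driver example_bl_driver = {
	.name		= "example-bl",
	/*
	 * The opt-in: ondemand on the A7 policy and ondemand on the
	 * A15 policy each get their own tunables directory, instead of
	 * sharing /sys/devices/system/cpu/cpufreq/ondemand/.
	 */
	.flags		= CPUFREQ_HAVE_GOVERNOR_PER_POLICY,
	.init		= example_bl_init,
	.verify		= cpufreq_generic_frequency_table_verify,
	.target_index	= example_bl_target_index,
};

static int __init example_bl_register(void)
{
	return cpufreq_register_driver(&example_bl_driver);
}
module_init(example_bl_register);
MODULE_LICENSE("GPL");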

If the questions below are still valid, please let me know :)

> If yes, then the changes in this patch won't come into play.
> 
> Also in cpufreq_governor.c, we set cdata->gdbs_data only when
> !have_governor_per_policy(). cdata->gdbs_data is NULL otherwise.
> A cursory inspection doesn't show any other place in the cpufreq codebase
> where cdata->gdbs_data is set. Unless I have missed one such initialization,
> based on my reading of the patch, instead of doing an extra hop to get the
> governor data via cdata->gdbs_data, we can simply record it in a global
> variable.
> 
> Also, if your concern is regarding the use of show_##file_name##_gov_sys in
> the case of governor per policy, then the existing code is also broken, since
> we would be accessing _gov##_dbs_cdata.gdbs_data->tuners, where
> _gov##_dbs_cdata.gdbs_data will be NULL!
> 
> What am I missing?

-- 
viresh
