Message-ID: <fe9b5d8d-5cbb-4d0c-bd12-b15a8d44062d@kernel.org>
Date: Wed, 19 Feb 2025 12:05:57 -0600
From: Mario Limonciello <superm1@...nel.org>
To: "Gautham R. Shenoy" <gautham.shenoy@....com>
Cc: Perry Yuan <perry.yuan@....com>,
 Dhananjay Ugwekar <Dhananjay.Ugwekar@....com>,
 "open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)"
 <linux-kernel@...r.kernel.org>,
 "open list:CPU FREQUENCY SCALING FRAMEWORK" <linux-pm@...r.kernel.org>,
 Mario Limonciello <mario.limonciello@....com>
Subject: Re: [PATCH v3 17/18] cpufreq/amd-pstate: Rework CPPC enabling

On 2/19/2025 09:25, Gautham R. Shenoy wrote:
> On Mon, Feb 17, 2025 at 04:07:06PM -0600, Mario Limonciello wrote:
>> From: Mario Limonciello <mario.limonciello@....com>
>>
>> The CPPC enable register is configured as "write once".  That is,
>> any future writes don't actually do anything.
>>
>> Because of this, all the cleanup paths that currently exist for
>> CPPC disable are non-effective.
>>
>> Rework CPPC enable to only enable after all the CAP registers have
>> been read to avoid enabling CPPC on CPUs with invalid _CPC or
>> unpopulated MSRs.
>>
>> As the register is write once, remove all cleanup paths as well.
>>
>> Signed-off-by: Mario Limonciello <mario.limonciello@....com>
>> ---
>> v3:
>>   * Fixup for suspend/resume issue
>> ---
> [..snip..]
> 
>>   
>> -static int shmem_cppc_enable(bool enable)
>> +static int shmem_cppc_enable(struct cpufreq_policy *policy)
>>   {
>> -	int cpu, ret = 0;
>> +	struct amd_cpudata *cpudata = policy->driver_data;
>>   	struct cppc_perf_ctrls perf_ctrls;
>> +	int ret;
>>   
>> -	if (enable == cppc_enabled)
>> -		return 0;
>> +	ret = cppc_set_enable(cpudata->cpu, 1);
>> +	if (ret)
>> +		return ret;
>>   
>> -	for_each_present_cpu(cpu) {
>> -		ret = cppc_set_enable(cpu, enable);
>> +	/* Enable autonomous mode for EPP */
>> +	if (cppc_state == AMD_PSTATE_ACTIVE) {
>> +		/* Set desired perf as zero to allow EPP firmware control */
>> +		perf_ctrls.desired_perf = 0;
>> +		ret = cppc_set_perf(cpudata->cpu, &perf_ctrls);
>>   		if (ret)
>>   			return ret;
> 
> We don't need the if condition here. There is nothing following this
> inside the if block and the function return "ret" soon after coming
> out of this if block.
> 
> 
>> -
>> -		/* Enable autonomous mode for EPP */
>> -		if (cppc_state == AMD_PSTATE_ACTIVE) {
>> -			/* Set desired perf as zero to allow EPP firmware control */
>> -			perf_ctrls.desired_perf = 0;
>> -			ret = cppc_set_perf(cpu, &perf_ctrls);
>> -			if (ret)
>> -				return ret;
>> -		}
>>   	}
>>   
>> -	cppc_enabled = enable;
>>   	return ret;
>>   }
>>   
>>   DEFINE_STATIC_CALL(amd_pstate_cppc_enable, msr_cppc_enable);
>>   
>> -static inline int amd_pstate_cppc_enable(bool enable)
> [..snip..]
> 
>>   
>> -static int amd_pstate_epp_reenable(struct cpufreq_policy *policy)
>> +static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
>>   {
>>   	struct amd_cpudata *cpudata = policy->driver_data;
>>   	union perf_cached perf = READ_ONCE(cpudata->perf);
>>   	int ret;
>>   
>> -	ret = amd_pstate_cppc_enable(true);
>> -	if (ret)
>> -		pr_err("failed to enable amd pstate during resume, return %d\n", ret);
>> -
>> -
>> -	return amd_pstate_epp_update_limit(policy);
>> -}
>> +	pr_debug("AMD CPU Core %d going online\n", cpudata->cpu);
>>   
>> -static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
>> -{
>> -	struct amd_cpudata *cpudata = policy->driver_data;
>> -	int ret;
>> +	ret = amd_pstate_cppc_enable(policy);
>> +	if (ret)
>> +		return ret;
>>   
>> -	pr_debug("AMD CPU Core %d going online\n", cpudata->cpu);
>>   
>> -	ret = amd_pstate_epp_reenable(policy);
>> +	ret = amd_pstate_update_perf(policy, 0, 0, perf.highest_perf, cpudata->epp_cached, false);
> 
> Previously, when a CPU came online, the callpath would be
> amd_pstate_epp_cpu_online(policy)
> --> amd_pstate_epp_reenable(policy)
>       --> amd_pstate_epp_update_limit(policy)
> 
> which reevaluates the min_perf_limit and max_perf_limit based on
> policy->min and policy->max and then calls
> 
>        amd_pstate_update_perf(policy, min_limit_perf, 0, max_limit_perf, epp, false)
> 
> With this patch, we call
> 
>        amd_pstate_update_perf(policy, 0, 0, perf.highest_perf, cpudata->epp_cached, false);
> 
> which would set CPPC.min_perf to 0.
> 
> I guess this should be ok since cpufreq_online() would eventually call
> amd_pstate_verify() and amd_pstate_epp_set_policy() which should
> re-initialize the min_limit_perf and max_limit_perf. Though I
> haven't verified if the behaviour changes with this patch when the CPU
> is offlined and brought back online.

I'll double-check by removing the amd_pstate_update_perf() call from 
amd_pstate_epp_cpu_online().  I think it will be clearer to let it get 
set from the amd_pstate_epp_set_policy() call.

