Message-ID: <f62e5ac5-313b-4668-a6ab-a683e5ddfcda@amd.com>
Date: Thu, 10 Apr 2025 10:11:53 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Mario Limonciello <mario.limonciello@....com>, "Gautham R. Shenoy"
	<gautham.shenoy@....com>, "Rafael J. Wysocki" <rafael@...nel.org>, "Viresh
 Kumar" <viresh.kumar@...aro.org>, <linux-pm@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>
CC: Dhananjay Ugwekar <Dhananjay.Ugwekar@....com>, Huang Rui
	<ray.huang@....com>, Perry Yuan <perry.yuan@....com>, Meng Li
	<li.meng@....com>
Subject: Re: [PATCH] cpufreq/amd-pstate: Enable ITMT support after
 initializing core rankings

Hello Mario,

On 4/10/2025 2:28 AM, Mario Limonciello wrote:

[..snip..]

>>   #define CPPC_MAX_PERF    U8_MAX
>> -static void amd_pstate_init_prefcore(struct amd_cpudata *cpudata)
>> +static void amd_pstate_init_asym_prio(struct amd_cpudata *cpudata)
> 
> I think the previous function name was fine.
> 
> 1) It still does set cpudata->hw_prefcore after all, and
> 2) We still have an amd_detect_prefcore() that is used to determine whether amd_pstate_prefcore is set.

Ack. I'll change it back in v2.

> 
>>   {
>>       /* user disabled or not detected */
>>       if (!amd_pstate_prefcore)
>> @@ -814,14 +804,8 @@ static void amd_pstate_init_prefcore(struct amd_cpudata *cpudata)
>>       cpudata->hw_prefcore = true;
>> -    /*
>> -     * The priorities can be set regardless of whether or not
>> -     * sched_set_itmt_support(true) has been called and it is valid to
>> -     * update them at any time after it has been called.
>> -     */
>> +    /* The priorities must be initialized before ITMT support can be toggled on. */
>>       sched_set_itmt_core_prio((int)READ_ONCE(cpudata->prefcore_ranking), cpudata->cpu);
>> -
>> -    schedule_work(&sched_prefcore_work);
>>   }
>>   static void amd_pstate_update_limits(unsigned int cpu)
>> @@ -974,7 +958,7 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
>>       if (ret)
>>           goto free_cpudata1;
>> -    amd_pstate_init_prefcore(cpudata);
>> +    amd_pstate_init_asym_prio(cpudata);
>>       ret = amd_pstate_init_freq(cpudata);
>>       if (ret)
>> @@ -1450,7 +1434,7 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
>>       if (ret)
>>           goto free_cpudata1;
>> -    amd_pstate_init_prefcore(cpudata);
>> +    amd_pstate_init_asym_prio(cpudata);
>>       ret = amd_pstate_init_freq(cpudata);
>>       if (ret)
>> @@ -1780,6 +1764,10 @@ static int __init amd_pstate_init(void)
>>           }
>>       }
>> +    /* Enable ITMT support once all CPUs have initialized their asym priorities. */
>> +    if (amd_pstate_prefcore)
>> +        sched_set_itmt_support();
>> +
> 
> Hmm, by moving it after the first registration that has the side effect that if you changed driver modes from active to passive (for example) ITMT priorities stay identical and aren't updated.
> I guess that makes sense since the rankings /shouldn't/ change.

Currently, when amd-pstate unregisters during a mode switch, ITMT remains
enabled, and if the rankings change before the new driver is registered,
update_limits() is never called, which can also cause issues.

Based on a discussion with Dhananjay, and the fact that one can mode
switch to disable amd-pstate completely, what are your thoughts on this
additional diff:

diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 40d908188b78..320b9551947e 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -1177,6 +1177,9 @@ static ssize_t show_energy_performance_preference(
  
  static void amd_pstate_driver_cleanup(void)
  {
+	if (amd_pstate_prefcore)
+		sched_clear_itmt_support();
+
  	cppc_state = AMD_PSTATE_DISABLE;
  	current_pstate_driver = NULL;
  }
@@ -1219,6 +1222,10 @@ static int amd_pstate_register_driver(int mode)
  		return ret;
  	}
  
+	/* Enable ITMT support once all CPUs have initialized their asym priorities. */
+	if (amd_pstate_prefcore)
+		sched_set_itmt_support();
+
  	return 0;
  }
  
@@ -1761,10 +1768,6 @@ static int __init amd_pstate_init(void)
  		}
  	}
  
-	/* Enable ITMT support once all CPUs have initialized their asym priorities. */
-	if (amd_pstate_prefcore)
-		sched_set_itmt_support();
-
  	return ret;
  
  global_attr_free:
--

This way, when the new driver registers, it can repopulate the rankings,
and the subsequent sched domain rebuild will get everything right. The
only concern is that disabling ITMT support during a mode switch will
cause the sched domains to be rebuilt twice, but I'm assuming mode
switch is a rare operation.

If disabling and re-enabling ITMT support during a mode switch is a
concern, I can work something into my dynamic asym priority support
series to detect any change to the rankings across the mode switch and
use sched_update_asym_prefer_cpu() to update the "asym_prefer_cpu"
instead.

Let me know your thoughts.

> 
> I feel this should be OK, thanks.
> 
>>       return ret;
>>   global_attr_free:
>>
>> base-commit: 56a49e19e1aea1374e9ba58cfd40260587bb7355
> 

-- 
Thanks and Regards,
Prateek

