Message-ID: <7785f9c3-93f2-7e7b-9c8d-8f86a0c9a76c@amd.com>
Date: Fri, 13 Jan 2023 12:01:01 +0530
From: Wyes Karny <wyes.karny@....com>
To: Mario Limonciello <mario.limonciello@....com>,
Rafael J Wysocki <rafael@...nel.org>,
Huang Rui <ray.huang@....com>,
Jonathan Corbet <corbet@....net>,
Viresh Kumar <viresh.kumar@...aro.org>, Perry.Yuan@....com,
Ananth Narayan <ananth.narayan@....com>, gautham.shenoy@....com
Cc: linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-pm@...r.kernel.org, Bagas Sanjaya <bagasdotme@...il.com>,
santosh.shukla@....com
Subject: Re: [PATCH v2 1/6] acpi: cppc: Add min and max perf reg writing
support

Hi Mario,
On 1/13/2023 11:07 AM, Mario Limonciello wrote:
> On 1/12/23 23:21, Wyes Karny wrote:
>> Currently, writes to the min and max perf registers are deferred in
>> the cppc_set_perf function. In CPPC guided mode, these registers need
>> to be written to guide PMFW about the min and max perf levels. Add this support
>
> This is generic code, so I think rather than PMFW you should just say "the platform".
>
>> to make guided mode work properly on shared memory systems.
>
> on AMD shared memory systems.
>
>>
>> Signed-off-by: Wyes Karny <wyes.karny@....com>
>
> With the commit message cleaned up:
>
> Reviewed-by: Mario Limonciello <mario.limonciello@....com>
Will clean up the commit message as suggested. Thanks for reviewing!
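
To spell out the intent for the rest of the series: in guided mode the
platform picks the operating point within [min, max], so a shared memory
driver ends up filling in min_perf/max_perf rather than only desired_perf.
A rough caller sketch, illustrative only and not part of this patch (the
caps values come from cppc_get_perf_caps()):

        struct cppc_perf_caps caps;
        struct cppc_perf_ctrls perf_ctrls = {0};
        int ret;

        ret = cppc_get_perf_caps(cpu, &caps);
        if (ret)
                return ret;

        perf_ctrls.min_perf = caps.lowest_perf;   /* floor for the platform */
        perf_ctrls.max_perf = caps.highest_perf;  /* ceiling for the platform */
        perf_ctrls.desired_perf = 0;              /* guided: platform decides */

        ret = cppc_set_perf(cpu, &perf_ctrls);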
>> ---
>> drivers/acpi/cppc_acpi.c | 16 ++++++++--------
>> 1 file changed, 8 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
>> index 02d83c807271..c936ff503965 100644
>> --- a/drivers/acpi/cppc_acpi.c
>> +++ b/drivers/acpi/cppc_acpi.c
>> @@ -1487,7 +1487,7 @@ EXPORT_SYMBOL_GPL(cppc_set_enable);
>> int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
>> {
>> struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);
>> - struct cpc_register_resource *desired_reg;
>> + struct cpc_register_resource *desired_reg, *min_perf_reg, *max_perf_reg;
>> int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu);
>> struct cppc_pcc_data *pcc_ss_data = NULL;
>> int ret = 0;
>> @@ -1498,6 +1498,8 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
>> }
>> desired_reg = &cpc_desc->cpc_regs[DESIRED_PERF];
>> + min_perf_reg = &cpc_desc->cpc_regs[MIN_PERF];
>> + max_perf_reg = &cpc_desc->cpc_regs[MAX_PERF];
>> /*
>> * This is Phase-I where we want to write to CPC registers
>> @@ -1506,7 +1508,7 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
>> * Since read_lock can be acquired by multiple CPUs simultaneously we
>> * achieve that goal here
>> */
>> - if (CPC_IN_PCC(desired_reg)) {
>> + if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) || CPC_IN_PCC(max_perf_reg)) {
>> if (pcc_ss_id < 0) {
>> pr_debug("Invalid pcc_ss_id\n");
>> return -ENODEV;
>> @@ -1529,13 +1531,11 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
>> cpc_desc->write_cmd_status = 0;
>> }
>> - /*
>> - * Skip writing MIN/MAX until Linux knows how to come up with
>> - * useful values.
>> - */
>> cpc_write(cpu, desired_reg, perf_ctrls->desired_perf);
>> + cpc_write(cpu, min_perf_reg, perf_ctrls->min_perf);
>> + cpc_write(cpu, max_perf_reg, perf_ctrls->max_perf);
>> - if (CPC_IN_PCC(desired_reg))
>> + if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) || CPC_IN_PCC(max_perf_reg))
>> up_read(&pcc_ss_data->pcc_lock); /* END Phase-I */
>> /*
>> * This is Phase-II where we transfer the ownership of PCC to Platform
>> @@ -1583,7 +1583,7 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
>> * case during a CMD_READ and if there are pending writes it delivers
>> * the write command before servicing the read command
>> */
>> - if (CPC_IN_PCC(desired_reg)) {
>> + if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) || CPC_IN_PCC(max_perf_reg)) {
>> if (down_write_trylock(&pcc_ss_data->pcc_lock)) {/* BEGIN Phase-II */
>> /* Update only if there are pending write commands */
>> if (pcc_ss_data->pending_pcc_write_cmd)
>
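One small readability note on my own change: the three-register PCC check
is now repeated in three places, so it could be hoisted into a local along
these lines (cosmetic only, behavior unchanged; not what this version of
the patch does):

        bool regs_in_pcc = CPC_IN_PCC(desired_reg) ||
                           CPC_IN_PCC(min_perf_reg) ||
                           CPC_IN_PCC(max_perf_reg);

        if (regs_in_pcc) {
                /* same PCC locking/command flow as before */
        }

I kept the open-coded checks here to stay close to the existing style, but
can change it if preferred.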
--
Thanks & Regards,
Wyes