Message-ID: <4da7cd19-4b98-9360-922f-d625c4ec55e0@arm.com>
Date: Wed, 10 Aug 2022 09:30:51 -0500
From: Jeremy Linton <jeremy.linton@....com>
To: Lukasz Luba <lukasz.luba@....com>
Cc: rafael@...nel.org, lenb@...nel.org, viresh.kumar@...aro.org,
robert.moore@...el.com, devel@...ica.org,
linux-acpi@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-pm@...r.kernel.org, vschneid@...hat.com,
Ionela Voinescu <ionela.voinescu@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>
Subject: Re: [PATCH v2 1/1] ACPI: CPPC: Disable FIE if registers in PCC
regions
Hi,
On 8/10/22 07:29, Lukasz Luba wrote:
> Hi Jeremy,
>
> +CC Valentin since he might be interested in this finding
> +CC Ionela, Dietmar
>
> I have a few comments for this patch.
>
>
> On 7/28/22 23:10, Jeremy Linton wrote:
>> PCC regions utilize a mailbox to set/retrieve register values used by
>> the CPPC code. This is fine as long as the operations are
>> infrequent. With the FIE code enabled, though, the overhead can range
>> from 2-11% of system CPU time (ex: as measured by top) on Arm
>> based machines.
>>
>> So, before enabling FIE, assure none of the registers used by
>> cppc_get_perf_ctrs() are in the PCC region. Furthermore, let's also
>> add a module parameter which can disable it at boot or on module
>> reload.
>>
>> Signed-off-by: Jeremy Linton <jeremy.linton@....com>
>> ---
>> drivers/acpi/cppc_acpi.c | 41 ++++++++++++++++++++++++++++++++++
>> drivers/cpufreq/cppc_cpufreq.c | 19 ++++++++++++----
>> include/acpi/cppc_acpi.h | 5 +++++
>> 3 files changed, 61 insertions(+), 4 deletions(-)
>
>
> 1. You assume that all platforms would have this big overhead when
> they have the PCC regions for this purpose.
> Do we know which versions of the HW mailbox have been implemented
> and used that have this 2-11% overhead on a platform?
> Do more recent MHUs also have such issues, so that we could block
> them by default (like in your code)?
I posted that other email before being awake and conflated the MHU with
the AMU (which could potentially expose the values directly). But the
CPPC code isn't aware of whether an MHU or some other mailbox is in use.
Either way, it's hard to imagine a general mailbox with a doorbell/wait
for completion handshake ever being fast enough to run at the
granularity this code operates at. If there were a case like that, the
kernel would have to benchmark it at runtime to differentiate it from
something that is talking over a slow link to a slowly responding
management processor.
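
For reference, the check in this patch boils down to something like the
sketch below. It is a simplified version living in
drivers/acpi/cppc_acpi.c (where CPC_IN_PCC() and the per-CPU
cpc_desc_ptr are visible); treat the exact names and register list as
approximate rather than as the final patch:

/*
 * Return true if any of the registers read by cppc_get_perf_ctrs()
 * (delivered/reference counters, reference perf, counter wraparound
 * time) lives in a PCC region on any present CPU.  The cpufreq driver
 * can then skip registering the FIE machinery entirely.
 */
bool cppc_perf_ctrs_in_pcc(void)
{
	int cpu;

	for_each_present_cpu(cpu) {
		struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);

		if (CPC_IN_PCC(&cpc_desc->cpc_regs[DELIVERED_CTR]) ||
		    CPC_IN_PCC(&cpc_desc->cpc_regs[REFERENCE_CTR]) ||
		    CPC_IN_PCC(&cpc_desc->cpc_regs[REFERENCE_PERF]) ||
		    CPC_IN_PCC(&cpc_desc->cpc_regs[CTR_WRAP_TIME]))
			return true;
	}

	return false;
}

On the cpufreq side the module parameter is just a bool, roughly:

static bool fie_disabled;
module_param(fie_disabled, bool, 0444);
MODULE_PARM_DESC(fie_disabled, "Disable CPPC frequency invariance");

with cppc_freq_invariance_init() bailing out early when fie_disabled is
set or cppc_perf_ctrs_in_pcc() returns true.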
>
> 2. I would prefer to simply change the default Kconfig value to 'n' for
> ACPI_CPPC_CPUFREQ_FIE, instead of creating runtime check code which
> disables it.
> We have probably introduced this overhead for older platforms with
> this commit:
>
> commit 4c38f2df71c8e33c0b64865992d693f5022eeaad
> Author: Viresh Kumar <viresh.kumar@...aro.org>
> Date: Tue Jun 23 15:49:40 2020 +0530
>
> cpufreq: CPPC: Add support for frequency invariance
>
>
>
> If a test server with this config enabled performs well
> in the stress tests, then on a production server the config may be
> set to 'y' (or 'm' and loaded).
>
> I would vote to not add extra code, which after a while might be
> extended because some HW is actually capable (so we could check at
> runtime and enable it). IMO this creates additional complexity in our
> already diverse configuration/tunable space in the code.
>
> When we don't compile this in, we should fall back to the old-style
> FIE, which has been used on these older platforms.
>
> BTW (I have to leave it here) the first-class solution for those servers
> is to implement AMU counters, so the overhead to retrieve this info is
> really low.
>
> Regards,
> Lukasz
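
(For completeness, the Kconfig-only alternative suggested in point 2
above would roughly be flipping the default of the existing entry in
drivers/cpufreq/Kconfig.arm; quoting from memory, so the dependencies
and help text below are approximate:)

config ACPI_CPPC_CPUFREQ_FIE
	bool "Frequency invariance support for CPPC cpufreq driver"
	depends on ACPI_CPPC_CPUFREQ && GENERIC_ARCH_TOPOLOGY
	default n
	help
	  Use the CPPC feedback counters to provide frequency invariance
	  to the scheduler.  Defaulting to 'n' avoids the PCC mailbox
	  overhead on platforms without fast feedback counters.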