Message-ID: <65e596c9-a472-d0b8-ec2f-6935d647ddb2@collabora.com>
Date: Thu, 19 Jan 2023 16:01:34 +0300
From: Dmitry Osipenko <dmitry.osipenko@...labora.com>
To: Thierry Reding <thierry.reding@...il.com>
Cc: Sumit Gupta <sumitg@...dia.com>,
Dmitry Osipenko <digetx@...il.com>, treding@...dia.com,
krzysztof.kozlowski@...aro.org, viresh.kumar@...aro.org,
rafael@...nel.org, jonathanh@...dia.com, robh+dt@...nel.org,
linux-kernel@...r.kernel.org, linux-tegra@...r.kernel.org,
linux-pm@...r.kernel.org, devicetree@...r.kernel.org,
sanjayc@...dia.com, ksitaraman@...dia.com, ishah@...dia.com,
bbasu@...dia.com, Rajkumar Kasirajan <rkasirajan@...dia.com>
Subject: Re: [Patch v1 08/10] cpufreq: tegra194: add OPP support and set
bandwidth
On 1/19/23 13:26, Thierry Reding wrote:
> On Mon, Jan 16, 2023 at 03:16:48PM +0300, Dmitry Osipenko wrote:
>> On 1/13/23 16:50, Sumit Gupta wrote:
>>>
>>>
>>> On 22/12/22 21:16, Dmitry Osipenko wrote:
>>>>
>>>> 20.12.2022 19:02, Sumit Gupta пишет:
>>>>> Add support to use the OPP table from DT in the Tegra194 cpufreq driver.
>>>>> Tegra SoCs receive the frequency lookup table (LUT) from BPMP-FW.
>>>>> Cross-check the OPPs present in DT against the LUT from BPMP-FW
>>>>> and enable only those DT OPPs which are also present in the LUT.
>>>>>
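Just to make sure I follow the mechanics here: something along these lines
(a rough, untested sketch; 'bpmp_lut' and 'lut_size' are made-up names for
the frequency table received from BPMP-FW)?

#include <linux/err.h>
#include <linux/pm_opp.h>

/* Sketch: disable every DT OPP whose frequency has no match in the
 * BPMP-FW LUT. Whether non-matching OPPs get disabled or matching ones
 * get enabled depends on the driver defaults; this only illustrates
 * the cross-check itself.
 */
static void tegra_cross_check_opps(struct device *cpu_dev,
				   const unsigned long *bpmp_lut,
				   unsigned int lut_size)
{
	struct dev_pm_opp *opp;
	unsigned long freq;
	unsigned int i;
	bool in_lut;

	/* Walk the DT OPPs from the lowest frequency upwards */
	for (freq = 0; !IS_ERR(opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq)); freq++) {
		in_lut = false;
		for (i = 0; i < lut_size; i++) {
			if (bpmp_lut[i] == freq) {
				in_lut = true;
				break;
			}
		}

		/* No match in the BPMP-FW LUT -> leave this OPP disabled */
		if (!in_lut)
			dev_pm_opp_disable(cpu_dev, freq);

		dev_pm_opp_put(opp);
	}
}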
>>>>> The OPP table in DT has a CPU frequency to bandwidth mapping where
>>>>> the bandwidth value is per MC channel. DRAM bandwidth depends on the
>>>>> number of MC channels, which can vary as per the boot configuration.
>>>>> This per-channel bandwidth from the OPP table will later be converted
>>>>> by the MC driver to the final bandwidth value by multiplying it with
>>>>> the number of channels before sending the request to BPMP-FW.
>>>>>
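So on, say, a 4-channel boot configuration the MC driver ends up multiplying,
e.g. 1000000 KBps per channel becoming 4000000 KBps total; roughly this
(sketch, names made up for illustration):

#include <linux/types.h>

/* Sketch: the OPP table stores bandwidth per MC channel; the final
 * request to BPMP-FW is the per-channel value times the channel count,
 * which is only known from the boot configuration.
 */
static u64 mc_total_bw_kbps(u32 bw_per_channel_kbps, unsigned int num_channels)
{
	return (u64)bw_per_channel_kbps * num_channels;
}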
>>>>> If the OPP table is not present in DT, then use the LUT from BPMP-FW
>>>>> directly as the frequency table and do not do the DRAM frequency
>>>>> scaling, which is the same as the current behavior.
>>>>>
>>>>> Now, as the CPU frequency table is controlled through the OPP table
>>>>> in DT, keeping fewer entries in the table will create fewer frequency
>>>>> steps and scale faster to high frequencies if required.
>>>>
>>>> It's not exactly clear what you're doing here. Are you going to scale
>>>> memory BW based on CPU freq? If yes, then this is wrong because CPU freq
>>>> is independent of the memory subsystem.
>>>>
>>>> All Tegra30+ SoCs have an ACTMON hardware unit that monitors CPU memory
>>>> activity, and CPU memory BW should be scaled based on the CPU memory
>>>> events counter. We have an ACTMON devfreq driver for older SoCs. I have
>>>> no clue how ACTMON can be accessed on T186+; perhaps there should be a
>>>> BPMP-FW API for that.
>>>>
>>>
>>> Yes, we are scaling the memory BW based on CPU freq.
>>> I referred to the patch set below for the previous generation of Tegra
>>> SoCs which you mentioned and tried to trace the history.
>>>
>>> https://patchwork.ozlabs.org/project/linux-tegra/patch/1418719298-25314-3-git-send-email-tomeu.vizoso@collabora.com/
>>>
>>> In newer Tegra SoCs, ACTMON counter control and usage have been moved to
>>> BPMP-FW, where only the 'MCALL' counter is used and 'MCCPU' is not.
>>> Using the ACTMON counter was a reactive way to scale the frequency, which
>>> is less effective due to averaging over a time period.
>>> We are now using a proactive way where clients communicate their
>>> bandwidth needs, which helps achieve better performance.
>>
>> You don't know what bandwidth the CPU needs; you're trying to guess it.
>>
>> It would be a bad decision to use cpufreq for memory bandwidth scaling.
>> You'll be wasting memory power 90% of the time because cpufreq has no
>> relation to the DRAM; your heuristics will be wrong and won't do
>> anything good compared to using ACTMON. The L2 CPU cache + memory
>> prefetching hide memory from the CPU. And cpufreq should be less
>> reactive than ACTMON in general.
>>
>> Scaling memory freq based on cpufreq is what the downstream NV kernel
>> did 10+ years ago for the oldest Tegra generations. Today upstream has
>> all the necessary infrastructure for doing memory bandwidth scaling
>> properly, and we even use h/w memory counters on T20. It's strange that
>> you want to bring that downstream archaism to the modern upstream for
>> the latest Tegra generations.
>>
>> If you can skip the BPMP-FW and use ACTMON directly from the kernel,
>> then that's what I suggest doing.
>
> After talking to a few people, it turns out that BPMP is already using
> ACTMON internally to do the actual scaling of the EMC frequency (or the
> CPU's contribution to that). So BPMP will use ACTMON counters to monitor
> the effective memory load of the CPU and adjust the EMC frequency. The
> bandwidth request that we generate from the cpufreq driver is more of a
> guideline for the maximum bandwidth we might consume.
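If the cpufreq driver's request is really just a ceiling hint, then I'd
expect it to boil down to something like this (a rough sketch, assuming the
interconnect framework carries the request; the function name and the
'cpu_to_mc' path are made up for illustration):

#include <linux/interconnect.h>

/* Sketch: a guideline/ceiling bandwidth request from the cpufreq driver.
 * avg_bw is left at 0 because, per the above, this is a maximum-bandwidth
 * hint rather than a measured average demand.
 */
static int cpufreq_hint_max_bw(struct icc_path *cpu_to_mc, u32 peak_kbps)
{
	return icc_set_bw(cpu_to_mc, 0, peak_kbps);
}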
Our kernel ACTMON driver uses cpufreq for guiding the EMC freq. Driving
the EMC rate solely based on cpufreq would not be a good decision. So does
that mean you're now going to extend the ACTMON driver with BPMP support?
--
Best regards,
Dmitry