Message-ID: <61483e12-c2a2-059a-c157-aea23b8a2f4e@arm.com>
Date: Tue, 26 Apr 2022 08:26:40 +0100
From: Lukasz Luba <lukasz.luba@....com>
To: Greg KH <gregkh@...uxfoundation.org>
Cc: linux-kernel@...r.kernel.org, sudeep.holla@....com,
dietmar.eggemann@....com, vincent.guittot@...aro.org,
rafael@...nel.org, rostedt@...dmis.org, mingo@...hat.com
Subject: Re: [PATCH v2] arch_topology: Trace the update thermal pressure

On 4/25/22 15:12, Greg KH wrote:
> On Mon, Apr 25, 2022 at 02:53:17PM +0100, Lukasz Luba wrote:
>> Add trace event to capture the moment of the call for updating the thermal
>> pressure value. It's helpful to investigate how often those events occur
>> in a system dealing with throttling. This trace event is needed since the
>> old 'cdev_update' might not be used by some drivers. Also, the new trace
>> event shows capacity value, not a cooling state.
>
> Wait, why are thermal events not going through the thermal system so
> that thermal_cdev_update() would catch them? Don't work around broken
> systems, force them to use the correct apis.

On some platforms thermal is controlled in FW. At Arm we have
recently implemented the logic of our Intelligent Power Allocation
(kernel governor gov_power_allocator.c) (and a bit more) in our
reference FW [1]. So the kernel would not be controlling thermal.

There is another platform which does something similar [2]. As you can
see in that driver, there is no cooling device created; it just handles
the notification about the reduced frequency as an IRQ. Then it
propagates this information to the task scheduler using the thermal
pressure mechanism. Thanks to that, the scheduler can avoid mistakes
in task placement and not put more work on a CPU than its real
(reduced) capacity can take.
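
Roughly, the flow in such a driver looks like the sketch below. It is a
simplification and not the actual code from [2]; example_lmh_irq_handler()
and read_fw_capped_freq() are made-up placeholders, but
arch_update_thermal_pressure() is the real arch_topology entry point
for feeding a firmware frequency cap into the scheduler:

#include <linux/cpufreq.h>
#include <linux/interrupt.h>
#include <linux/sched/topology.h>

/* Placeholder: would read the FW/LMh status register reporting the cap */
static unsigned long read_fw_capped_freq(void)
{
	return 1500000; /* kHz */
}

static irqreturn_t example_lmh_irq_handler(int irq, void *data)
{
	struct cpufreq_policy *policy = data;
	unsigned long capped_freq_khz = read_fw_capped_freq();

	/*
	 * No cooling device involved: feed the FW-decided frequency cap
	 * straight into the scheduler's thermal pressure signal, so that
	 * task placement sees the reduced CPU capacity.
	 */
	arch_update_thermal_pressure(policy->related_cpus, capped_freq_khz);

	return IRQ_HANDLED;
}
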
For the Arm FW thermal support I can make it so, since we don't have it
yet. We are in the middle of the internal design of this FW/kernel glue
and I haven't sent the kernel patches yet. I will follow your
recommendation.

For the driver [2] I have no device, so I cannot change it and prove
that the new approach works safely.

But we need this trace event to discuss the design for the RT task
scheduler and the impact of thermal pressure decaying delays. We have
Android systems which have suffered from RT issues known for years
(like audio apps) and we have started to tackle them. That was the
motivation for this new trace event - a helper which everyone can
'just use' in their current code state and provide measurements.

We can ask the maintainers of that driver [2] to follow your guidance
and fix that cpufreq driver. That might require a split of the logic in
there and a new thermal driver which would handle the thermal update
part. I'm not sure that is feasible, though, since I have heard that
some platforms can generate those interrupts at a huge rate (in the
kHz range)... If that is true, I can see why someone would go the
lightweight update-thermal-pressure-only way and not via the thermal
cooling device complexity.

IMO this trace event doesn't harm the design and also brings a benefit
compared to the 'cdev_update' trace event, which only provides a state
value: [0, n]. That state value then needs additional tooling to
translate it: state -> freq -> capacity -> thermal pressure. This new
event just stores the proper value in the trace buffer; no hassle, just
'cat' the trace file. If anyone would like to help us and share their
trace output, we would have everything we need.
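
For reference, below is a simplified sketch (not the verbatim
arch_topology code) of the value the new event records; with
'cdev_update' this same chain has to be reproduced offline from the
cooling state. It assumes the linear frequency-to-capacity scaling
used by arch_topology:

#include <linux/kernel.h>		/* mult_frac() */
#include <linux/sched/topology.h>	/* arch_scale_cpu_capacity() */

static unsigned long example_th_pressure(int cpu, unsigned long capped_freq,
					 unsigned long max_freq)
{
	/* max capacity at the highest frequency, e.g. 1024 */
	unsigned long max_capacity = arch_scale_cpu_capacity(cpu);
	/* capacity available at the capped frequency */
	unsigned long capacity = mult_frac(max_capacity, capped_freq,
					   max_freq);

	/* thermal pressure == capacity lost to the cap */
	return max_capacity - capacity;
}
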
Regards,
Lukasz
>
> thanks,
>
> greg k-h
[1] https://github.com/ARM-software/SCP-firmware
[2] https://elixir.bootlin.com/linux/v5.17/source/drivers/cpufreq/qcom-cpufreq-hw.c#L300