Message-ID: <3ccbf872-355c-4fec-ab9a-6f2f396625d7@samsung.com>
Date: Sat, 12 Apr 2025 16:10:57 +0200
From: Michal Wilczynski <m.wilczynski@...sung.com>
To: Ulf Hansson <ulf.hansson@...aro.org>, Frank Binns
	<frank.binns@...tec.com>, Matt Coster <matt.coster@...tec.com>
Cc: Stephen Boyd <sboyd@...nel.org>, robh@...nel.org, krzk+dt@...nel.org,
	conor+dt@...nel.org, drew@...7.com, guoren@...nel.org, wefu@...hat.com,
	p.zabel@...gutronix.de, m.szyprowski@...sung.com,
	linux-kernel@...r.kernel.org, linux-riscv@...ts.infradead.org,
	linux-pm@...r.kernel.org, "devicetree@...r.kernel.org"
	<devicetree@...r.kernel.org>
Subject: Re: [PATCH v1 1/2] dt-bindings: firmware: thead,th1520: Add clocks
 and resets



On 4/12/25 09:53, Michal Wilczynski wrote:
> 
> 
> On 4/10/25 14:34, Ulf Hansson wrote:
>> On Thu, 10 Apr 2025 at 12:42, Michal Wilczynski
>> <m.wilczynski@...sung.com> wrote:
>>>
>>>
>>>
>>> On 4/9/25 12:41, Ulf Hansson wrote:
>>>> On Wed, 9 Apr 2025 at 11:30, Michal Wilczynski <m.wilczynski@...sung.com> wrote:
>>>>>
>>>>> Prepare for handling GPU clock and reset sequencing through a generic
>>>>> power domain by adding clock and reset properties to the TH1520 AON
>>>>> firmware bindings.
>>>>>
>>>>> The T-HEAD TH1520 GPU requires coordinated management of two clocks
>>>>> (core and sys) and two resets (GPU and GPU CLKGEN). Due to SoC-specific
>>>>> requirements, the CLKGEN reset must be carefully managed alongside clock
>>>>> enables to ensure proper GPU operation, as discussed on the mailing list
>>>>> [1].
>>>>>
>>>>> Since the coordination is now handled through a power domain, only the
>>>>> programmable clocks (core and sys) are exposed. The GPU MEM clock is
>>>>> ignored, as it is not controllable on the TH1520 SoC.
>>>>>
>>>>> This approach follows upstream maintainers' recommendations [1] to
>>>>> avoid SoC-specific details leaking into the GPU driver or clock/reset
>>>>> frameworks directly.
>>>>>
>>>>> [1] - https://lore.kernel.org/all/38d9650fc11a674c8b689d6bab937acf@kernel.org/
>>>>>
>>>>> Signed-off-by: Michal Wilczynski <m.wilczynski@...sung.com>
>>>>> ---
>>>>>  .../bindings/firmware/thead,th1520-aon.yaml   | 28 +++++++++++++++++++
>>>>>  1 file changed, 28 insertions(+)
>>>>>
>>>>> diff --git a/Documentation/devicetree/bindings/firmware/thead,th1520-aon.yaml b/Documentation/devicetree/bindings/firmware/thead,th1520-aon.yaml
>>>>> index bbc183200400..8075874bcd6b 100644
>>>>> --- a/Documentation/devicetree/bindings/firmware/thead,th1520-aon.yaml
>>>>> +++ b/Documentation/devicetree/bindings/firmware/thead,th1520-aon.yaml
>>>>> @@ -25,6 +25,16 @@ properties:
>>>>>    compatible:
>>>>>      const: thead,th1520-aon
>>>>>
>>>>> +  clocks:
>>>>> +    items:
>>>>> +      - description: GPU core clock
>>>>> +      - description: GPU sys clock
>>>>> +
>>>>> +  clock-names:
>>>>> +    items:
>>>>> +      - const: gpu-core
>>>>> +      - const: gpu-sys
>>>>
>>>> These clocks don't look like they belong to the power-domain node, but
>>>> rather the GPU's node.
>>>>
>>>> Or is this in fact the correct description of the HW?
>>>
>>> Hi,
>>> Thank you for your input. Based on my understanding of Stephen's
>>> presentation, the power-domain layer could act as a middleware layer
>>> (like ACPI) that could own resources. That said, it was also stated
>>> that the proposed approach should work with already existing device
>>> trees, which implies that the DT should remain as is.
>>>
>>> So I could get the resources using attach_dev and detach_dev, but there
>>> are two problems with that:
>>>
>>> 1) The GPU driver will try to manage clocks/resets on its own using those functions
>>>    if I provide non-stub, working clocks and resets:
>>> static const struct dev_pm_ops pvr_pm_ops = {
>>>         RUNTIME_PM_OPS(pvr_power_device_suspend, pvr_power_device_resume,
>>>                        pvr_power_device_idle)
>>> };
>>>
>>> So obviously I need a way to tell the drm/imagination driver NOT to
>>> manage them. One obvious way to do this is to introduce a new flag in
>>> genpd.flags, called, let's say, GENPD_FLAG_EXCLUSIVE_CONTROL, which
>>> would tell the consumer driver that power management is done only
>>> from the PM middleware driver.
>>
>> Something along those lines. Although, I think the below twist to the
>> approach would be better.
>>
>> Some flag (maybe just a bool) should be set dynamically when the
>> ->attach_dev() callback is invoked and it should be a per device flag,
>> not a per genpd flag. In this way, the genpd provider driver can make
>> runtime decisions, perhaps even based on some DT compatible string for
>> the device being attached to it, whether it should manage PM resources
>> or not.
>>
>> Additionally, we need a new genpd helper function that allows the
>> consumer driver to check if the PM resources are managed from the PM
>> domain level (genpd) or not.
>>
>> If it sounds complicated, just let me know I can try to help put the
>> pieces together.
> 
> Thanks, this sounds doable
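
To make sure I read the proposal right, here is a rough, self-contained
sketch of the per-device flag idea. Every name below is a placeholder of
my own invention, not existing genpd API; in a real implementation the
flag would presumably live in the per-device genpd data:

```c
#include <stdbool.h>
#include <string.h>

/*
 * Self-contained mock of the idea, NOT real genpd code: the provider
 * sets a per-device flag when its ->attach_dev() callback runs, and
 * the consumer driver queries it through a helper instead of touching
 * clocks and resets itself. All names here are placeholders.
 */
struct mock_dev {
	const char *compatible;
	bool pm_resources_managed;	/* per device, not per genpd */
};

/* Provider side: runtime decision in the ->attach_dev() callback,
 * e.g. keyed on the compatible of the device being attached. */
void mock_attach_dev(struct mock_dev *dev)
{
	if (!strcmp(dev->compatible, "example,gpu"))
		dev->pm_resources_managed = true;
}

/* Consumer side: the GPU driver would call a helper like this and
 * skip its own clock/reset handling when it returns true. */
bool mock_pm_domain_manages_resources(const struct mock_dev *dev)
{
	return dev->pm_resources_managed;
}
```

So the provider keeps all SoC-specific knowledge, and the only new
surface for the consumer is the query helper.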
> 
>>
>>>
>>> 2) The GPU node doesn't want to own the gpu-clkgen reset. In fact nobody
>>>    seems to want to own it, even though theoretically it should be owned by
>>>    the clk_vo as this would describe the hardware best (it's resetting the
>>>    GPU clocks). But then it would be trickier to get it from the PM driver,
>>>    making the code more complex and harder to understand. Nonetheless I
>>>    think it would work.
>>
>> I guess it doesn't really matter to me. Perhaps model it as a reset
>> and make the GPU the consumer of it?
> 
> The GPU driver maintainers have already stated that they only want to
> consume a single reset line, namely 'gpu' [1]. The 'gpu-clkgen' reset is
> an orphan in this situation, or part of SoC-specific glue code, so
> theoretically, since the PM driver in our case is also a SoC glue
> driver, we could leave 'gpu-clkgen' in the PM DT node.

Frank, Matt,
Just to make sure, I would like to ask what you think about this. Would
you be open to having an extra reset in your GPU DT node that would be
managed from the generic PM driver?
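
For concreteness, the split I have in mind would look roughly like the
sketch below. This is purely illustrative, not a tested binding: the node
names, clock/reset specifiers, and the power-domain cell values are
placeholders I made up, not taken from the actual DT:

```dts
/* Illustrative only: AON/PM node keeps the SoC-glue 'gpu-clkgen'
 * reset and the programmable GPU clocks. */
aon {
	compatible = "thead,th1520-aon";
	clocks = <&clk 1>, <&clk 2>;		/* placeholder specifiers */
	clock-names = "gpu-core", "gpu-sys";
	resets = <&rst 1>;			/* placeholder specifier */
	reset-names = "gpu-clkgen";
	#power-domain-cells = <1>;
};

/* The GPU node consumes only the single 'gpu' reset, as requested. */
gpu {
	resets = <&rst 2>;			/* placeholder specifier */
	reset-names = "gpu";
	power-domains = <&aon 0>;		/* placeholder cell */
};
```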

Regards,
Michał

> 
> [1] - https://lore.kernel.org/all/816db99d-7088-4c1a-af03-b9a825ac09dc@imgtec.com/
>>
>>>
>>> If this sounds good to you I will work on the code.
>>
>> Sure, let's give this a try - I am here to help review and guide the best I can.
> 
> Thank you very much for your support, it’s invaluable!
> 
>>
>> [...]
>>
>> Kind regards
>> Uffe
>>
