Message-ID: <7d8a3f8d-f369-47dd-8c5f-dcff8d692ea8@samsung.com>
Date: Mon, 10 Feb 2025 19:17:20 +0100
From: Michal Wilczynski <m.wilczynski@...sung.com>
To: Philipp Zabel <p.zabel@...gutronix.de>, Matt Coster
<Matt.Coster@...tec.com>, "mturquette@...libre.com"
<mturquette@...libre.com>, "sboyd@...nel.org" <sboyd@...nel.org>,
"robh@...nel.org" <robh@...nel.org>, "krzk+dt@...nel.org"
<krzk+dt@...nel.org>, "conor+dt@...nel.org" <conor+dt@...nel.org>,
"drew@...7.com" <drew@...7.com>, "guoren@...nel.org" <guoren@...nel.org>,
"wefu@...hat.com" <wefu@...hat.com>, "jassisinghbrar@...il.com"
<jassisinghbrar@...il.com>, "paul.walmsley@...ive.com"
<paul.walmsley@...ive.com>, "palmer@...belt.com" <palmer@...belt.com>,
"aou@...s.berkeley.edu" <aou@...s.berkeley.edu>, Frank Binns
<Frank.Binns@...tec.com>, "maarten.lankhorst@...ux.intel.com"
<maarten.lankhorst@...ux.intel.com>, "mripard@...nel.org"
<mripard@...nel.org>, "tzimmermann@...e.de" <tzimmermann@...e.de>,
"airlied@...il.com" <airlied@...il.com>, "simona@...ll.ch"
<simona@...ll.ch>, "ulf.hansson@...aro.org" <ulf.hansson@...aro.org>,
"jszhang@...nel.org" <jszhang@...nel.org>, "m.szyprowski@...sung.com"
<m.szyprowski@...sung.com>
Cc: "linux-clk@...r.kernel.org" <linux-clk@...r.kernel.org>,
"devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-riscv@...ts.infradead.org" <linux-riscv@...ts.infradead.org>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>
Subject: Re: [PATCH v4 09/18] reset: thead: Add TH1520 reset controller
driver
On 2/4/25 18:18, Philipp Zabel wrote:
> On Mo, 2025-02-03 at 19:15 +0100, Michal Wilczynski wrote:
>>
>> On 1/31/25 16:39, Matt Coster wrote:
>>> On 28/01/2025 19:48, Michal Wilczynski wrote:
>>>> Add reset controller driver for the T-HEAD TH1520 SoC that manages
>>>> hardware reset lines for various subsystems. The driver currently
>>>> implements support for GPU reset control, with infrastructure in place
>>>> to extend support for NPU and Watchdog Timer resets in future updates.
>>>>
>>>> Signed-off-by: Michal Wilczynski <m.wilczynski@...sung.com>
>>>> ---
>>>> MAINTAINERS | 1 +
>>>> drivers/reset/Kconfig | 10 ++
>>>> drivers/reset/Makefile | 1 +
>>>> drivers/reset/reset-th1520.c | 178 +++++++++++++++++++++++++++++++++++
>>>> 4 files changed, 190 insertions(+)
>>>> create mode 100644 drivers/reset/reset-th1520.c
>>>>
> [...]
>>>> diff --git a/drivers/reset/reset-th1520.c b/drivers/reset/reset-th1520.c
>>>> new file mode 100644
>>>> index 000000000000..48afbc9f1cdd
>>>> --- /dev/null
>>>> +++ b/drivers/reset/reset-th1520.c
>>>> @@ -0,0 +1,178 @@
> [...]
>>>> +static void th1520_rst_gpu_enable(struct regmap *reg,
>>>> + struct mutex *gpu_seq_lock)
>>>> +{
>>>> + int val;
>>>> +
>>>> + mutex_lock(gpu_seq_lock);
>>>> +
>>>> + /* if the GPU is not in a reset state, put it into one */
>>>> + regmap_read(reg, TH1520_GPU_RST_CFG, &val);
>>>> + if (val)
>>>> + regmap_update_bits(reg, TH1520_GPU_RST_CFG,
>>>> + TH1520_GPU_RST_CFG_MASK, 0x0);
>
> BIT(2) is not documented, but cleared here.
Yeah, it shouldn't be cleared, thanks!
>
>>>> +
>>>> + /* rst gpu clkgen */
>>>> + regmap_set_bits(reg, TH1520_GPU_RST_CFG, TH1520_GPU_SW_CLKGEN_RST);
>>>
>>> Do you know what this resets? From our side, the GPU only has a single
>>> reset line (which I assume to be GPU_RESET).
>>
>> This is clock generator reset, as described in the manual 5.4.2.6.1
>> GPU_RST_CFG. It does reside in the same register as the GPU reset line.
>>
>> I think this is required because the MEM clock gate is somehow broken
>> and marked as 'reserved' in the manual. As a workaround, since we
>> can't reliably enable the 'mem' clock, it's a good idea to reset the
>> whole CLKGEN of the GPU.
>
> If this is a workaround for broken gating of the "mem" clock, would it
> be possible (and reasonable) to make this a separate reset control that
> is handled by the clock driver? ...
Thank you for the detailed feedback, Philipp.
After further consideration, I believe keeping the current reset driver
implementation would be preferable to moving the CLKGEN reset handling
to the clock driver. While it's technically possible to implement this
in the clock driver, I have concerns about the added complexity:
1. We'd need to expose the CLKGEN reset separately in the reset driver
2. The clock driver's dt-bindings would need modification to add an
optional resets property
3. We'd need custom clk_ops for all three clock gates (including a dummy
'mem' gate)
4. Each clock gate's .enable operation would need to handle CLKGEN reset
deassertion
While the clock framework could theoretically handle this, there's no
clean way to express the requirement that the CLKGEN reset should only
be deasserted after all clocks in the group are enabled. We could
implement this explicitly, but it would make the code more complex and
harder to understand.
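To make points 3 and 4 concrete, here is a rough, hypothetical sketch of
what point 4 could end up looking like in a gate op (th1520_gpu_gate and
enabled_count are made-up names, the dt-binding change from point 2 is
assumed, and it is shown as .prepare rather than .enable so a possibly
sleeping reset op stays safe):

#include <linux/atomic.h>
#include <linux/clk-provider.h>
#include <linux/container_of.h>
#include <linux/reset.h>

struct th1520_gpu_gate {
	struct clk_hw hw;
	struct reset_control *clkgen_rst;	/* new optional "resets" */
	atomic_t *enabled_count;		/* shared by core/cfg/mem */
};

static int th1520_gpu_gate_prepare(struct clk_hw *hw)
{
	struct th1520_gpu_gate *gate =
		container_of(hw, struct th1520_gpu_gate, hw);

	/* ... set the gate bit itself via regmap here ... */

	/*
	 * The CLKGEN reset may only be released once *all* gates in
	 * the group are enabled - an ordering the clk framework has
	 * no clean way to express, hence the shared counter.
	 */
	if (atomic_inc_return(gate->enabled_count) == 3)
		return reset_control_deassert(gate->clkgen_rst);

	return 0;
}

Spreading the sequence across three gates (plus a dummy 'mem' gate) like
this is exactly the complexity I'd like to avoid.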
The current solution in the reset driver is simpler and clearer - it
treats this as what it really is: a TH1520-specific reset sequence.
Looking at similar SoCs, such as the one on the BPI-F3, we can see this
is truly T-HEAD-specific - the BPI-F3 has just a single GPU reset line
with no CLKGEN bit to manage. When you assert/deassert the GPU reset
line on the TH1520, it handles everything needed for a clean reset on
this specific SoC. This keeps the implementation contained and
straightforward.
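On the consumer side this also means the GPU driver can stay identical
on both kinds of platforms. A minimal sketch (gpu_deassert_reset is a
made-up name) of what it would see:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/reset.h>

static int gpu_deassert_reset(struct device *dev)
{
	struct reset_control *rst;

	rst = devm_reset_control_get_exclusive(dev, NULL);
	if (IS_ERR(rst))
		return PTR_ERR(rst);

	/*
	 * One call on either SoC; on the TH1520 the reset driver runs
	 * the clkgen reset + delay + GPU reset sequence internally.
	 */
	return reset_control_deassert(rst);
}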
Regarding the delay between clock enable and reset deassert - for SoCs
like BPI-F3 with a single reset line, implementing this in the GPU
consumer driver makes perfect sense. However, for the T-HEAD SoC, moving
the delay there would actually complicate things since we need to manage
both the CLKGEN and GPU reset lines in a specific sequence. Having this
handled entirely within the reset driver keeps the implementation
cleaner.
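For comparison, the sequence Matt and Philipp describe for a
single-reset-line SoC would look roughly like this in the GPU driver
(gpu_power_up is a made-up helper, simplified):

#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/reset.h>

static int gpu_power_up(struct clk *core_clk, struct reset_control *rst)
{
	int ret;

	ret = clk_prepare_enable(core_clk);
	if (ret)
		return ret;

	/* >= 32 GPU clock cycles between clocks on and reset deassert */
	udelay(1);

	return reset_control_deassert(rst);
}

On the TH1520 the equivalent ordering additionally involves the CLKGEN
line, which is why I'd prefer to keep it inside the reset driver.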
Does this reasoning align with your thoughts? I'm happy to explore the
clock driver approach further if you still see significant advantages to
that solution.
>
>>>> +
>>>> + /*
>>>> + * According to the hardware manual, a delay of at least 32 clock
>>>> + * cycles is required between de-asserting the clkgen reset and
>>>> + * de-asserting the GPU reset. Assuming a worst-case scenario with
>>>> + * a very high GPU clock frequency, a delay of 1 microsecond is
>>>> + * sufficient to ensure this requirement is met across all
>>>> + * feasible GPU clock speeds.
>>>> + */
>>>> + udelay(1);
>>>
>>> I don't love that this procedure appears in the platform reset driver.
>>> I appreciate it may not be clear from the SoC TRM, but this is the
>>> standard reset procedure for all IMG Rogue GPUs. The currently
>>> supported TI SoC handles this in silicon, when power up/down requests
>>> are sent so we never needed to encode it in the driver before.
>>>
>>> Strictly speaking, the 32 cycle delay is required between power and
>>> clocks being enabled and the reset line being deasserted. If nothing
>>> here touches power or clocks (which I don't think it should), the delay
>>> could potentially be lifted to the GPU driver.
>
> ... This could be expressed as a delay between clk_prepare_enable() and
> reset_control_deassert() in the GPU driver then.
>
>> Yeah, you're making excellent points here. I think it would be a good
>> idea to place the delay in the GPU driver, since this is specific to
>> the whole family of GPUs, not the SoC itself.
>>
>>> Is it expected that if a device exposes a reset in devicetree that it
>>> can be cleanly reset without interaction with the device driver itself?
>>> I.E. in this case, is it required that the reset driver alone can cleanly
>>> reset the GPU?
>
> No, the "resets" property should just describe the physical
> connection(s) between reset controller and the device.
>
> It is fine for the device driver to manually assert the reset, enable
> clocks and power, delay, and then deassert the reset, if that is the
> device specific reset procedure.
>
>> I'm not sure what the community as a whole thinks about that, so maybe
>> someone else can answer this, but I would code SoC-specific stuff in
>> the reset driver for the SoC, and the GPU-specific stuff (like the
>> delay) in the GPU driver code. I wasn't sure whether the delay was
>> specific to the SoC or the GPU, so I've put it here.
>
> I agree.
>
> regards
> Philipp
>