Message-ID: <4748270.31r3eYUQgx@jernej-laptop>
Date: Sun, 03 Jul 2022 20:43:48 +0200
From: Jernej Škrabec <jernej.skrabec@...il.com>
To: samuel@...lland.org, Roman Stratiienko <r.stratiienko@...il.com>
Cc: peron.clem@...il.com, mturquette@...libre.com, sboyd@...nel.org,
mripard@...nel.org, wens@...e.org, linux-clk@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-sunxi@...ts.linux.dev,
linux-kernel@...r.kernel.org,
Roman Stratiienko <r.stratiienko@...il.com>
Subject: Re: [PATCH v2] clk: sunxi-ng: sun50i: h6: Modify GPU clock configuration to support DFS
On Sunday, 3 July 2022 at 18:45:14 CEST, Roman Stratiienko wrote:
> Using a simple bash script it was discovered that not all CCU registers
> can be safely used for DFS, e.g.:
>
> while true
> do
> devmem 0x3001030 4 0xb0003e02
> devmem 0x3001030 4 0xb0001e02
> done
>
> The script above toggles the GPU_PLL multiplier register value. While the
> script is running, the user interacts with the user interface to check
> for glitches.
>
> Using this method the following results were obtained:
> | Register | Name | Bits | Values | Result |
> | -- | -- | -- | -- | -- |
> | 0x3001030 | GPU_PLL.MULT | 15..8 | 20-62 | OK |
> | 0x3001030 | GPU_PLL.INDIV | 1 | 0-1 | OK |
> | 0x3001030 | GPU_PLL.OUTDIV | 0 | 0-1 | FAIL |
> | 0x3001670 | GPU_CLK.DIV | 3..0 | ANY | FAIL |
>
> DVFS started to work seamlessly once the dividers that caused the
> glitches were set to fixed values.
>
> Signed-off-by: Roman Stratiienko <r.stratiienko@...il.com>
>
> ---
>
> Changelog:
>
> V2:
> - Drop changes related to mux
> - Drop frequency limiting
> - Add unused dividers initialization
> ---
> drivers/clk/sunxi-ng/ccu-sun50i-h6.c | 16 +++++++++++++---
> 1 file changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-h6.c b/drivers/clk/sunxi-ng/ccu-sun50i-h6.c
> index 2ddf0a0da526f..1b0205ff24108 100644
> --- a/drivers/clk/sunxi-ng/ccu-sun50i-h6.c
> +++ b/drivers/clk/sunxi-ng/ccu-sun50i-h6.c
> @@ -95,13 +95,13 @@ static struct ccu_nkmp pll_periph1_clk = {
> },
> };
>
> +/* For GPU PLL, using an output divider for DFS causes system to fail */
> #define SUN50I_H6_PLL_GPU_REG 0x030
> static struct ccu_nkmp pll_gpu_clk = {
> .enable = BIT(31),
> .lock = BIT(28),
> .n = _SUNXI_CCU_MULT_MIN(8, 8, 12),
> .m = _SUNXI_CCU_DIV(1, 1), /* input divider */
> -	.p = _SUNXI_CCU_DIV(0, 1), /* output divider */
Having a minimum (288 MHz, as per the vendor GPU driver) and a maximum, either
the max. OPP or the max. from the datasheet, would be equally good. I know that
both are basically limited by the OPP table, but people like to play with
these, so it's good to have them in.
> .common = {
> .reg = 0x030,
> .hw.init = CLK_HW_INIT("pll-gpu", "osc24M",
> @@ -294,9 +294,9 @@ static SUNXI_CCU_M_WITH_MUX_GATE(deinterlace_clk, "deinterlace",
>  static SUNXI_CCU_GATE(bus_deinterlace_clk, "bus-deinterlace",
>  		      "psi-ahb1-ahb2", 0x62c, BIT(0), 0);
>
> +/* Keep GPU_CLK divider const to avoid DFS instability. */
> static const char * const gpu_parents[] = { "pll-gpu" };
> -static SUNXI_CCU_M_WITH_MUX_GATE(gpu_clk, "gpu", gpu_parents, 0x670,
> - 0, 3, /* M */
> +static SUNXI_CCU_MUX_WITH_GATE(gpu_clk, "gpu", gpu_parents, 0x670,
> 24, 1, /* mux */
> BIT(31), /* gate */
> CLK_SET_RATE_PARENT);
> @@ -1193,6 +1193,16 @@ static int sun50i_h6_ccu_probe(struct platform_device *pdev)
>  	if (IS_ERR(reg))
>  		return PTR_ERR(reg);
>
> + /* Force PLL_GPU output divider to 0 */
Saying "divider 0" here
> + val = readl(reg + SUN50I_H6_PLL_GPU_REG);
> + val &= ~BIT(0);
> + writel(val, reg + SUN50I_H6_PLL_GPU_REG);
> +
> + /* Force GPU_CLK divider to 0 */
and here sounds wrong, since division by zero is not defined. Using 1 would be
more intuitive and correct, since that's what the HW actually uses.
Patch looks good otherwise.
Best regards,
Jernej
> + val = readl(reg + gpu_clk.common.reg);
> + val &= ~GENMASK(3, 0);
> + writel(val, reg + gpu_clk.common.reg);
> +
> /* Enable the lock bits on all PLLs */
> for (i = 0; i < ARRAY_SIZE(pll_regs); i++) {
> val = readl(reg + pll_regs[i]);