Message-ID: <dcdc61d0-a979-b746-6259-48a67175c675@collabora.com>
Date: Mon, 26 Sep 2022 17:21:07 +0200
From: AngeloGioacchino Del Regno
<angelogioacchino.delregno@...labora.com>
To: Yongqiang Niu <yongqiang.niu@...iatek.com>,
CK Hu <ck.hu@...iatek.com>,
Chun-Kuang Hu <chunkuang.hu@...nel.org>
Cc: Jassi Brar <jassisinghbrar@...il.com>,
Matthias Brugger <matthias.bgg@...il.com>,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-mediatek@...ts.infradead.org,
Project_Global_Chrome_Upstream_Group@...iatek.com,
Hsin-Yi Wang <hsinyi@...omium.org>
Subject: Re: [RESEND PATCH v3] mailbox: mtk-cmdq: fix gce timeout issue
On 26/09/22 11:02, Yongqiang Niu wrote:
> 1. Enable GCE DDR (GCE register offset 0x48, bits 16 to 18) when the
> GCE starts working, and disable it when the GCE job is done.
> 2. Split the cmdq clk enable/disable API, and control GCE DDR
> enable/disable in the clk enable/disable functions, to make sure it is
> protected when cmdq is used concurrently by display and MDP.
>
> This is only for SoCs that have the "control_by_sw" flag. On this kind
> of GCE there is a handshake flow between the GCE and the DDR hardware:
> if the DDR enable flag of the GCE is not set, the DDR falls into idle
> mode and GCE instructions never finish processing. We need to set this
> flag of the GCE to tell the DDR, under software control, when the GCE
> is idle or busy.
>
> The DDR problem is a special case. When testing the suspend/resume
> case, the GCE sometimes keeps pulling the DDR, so the DDR cannot enter
> suspend. Setting GCE register 0x48 to 0x7 fixes this GCE-pulls-DDR
> issue, as referred to in [1] and [2] (MT8192 and MT8195).
> But for MT8186 the GCE is more special: in addition to the settings of
> [1] and [2], we must also set GCE register 0x48 to (0x7 << 16 | 0x7)
> while the GCE is working, to make sure the GCE can process all
> instructions correctly. This case only needs a normal boot: without
> this setting, the display cmdq task times out and the Chrome home
> screen stays black.
>
> With this patch we have run these tests on MT8186:
> 1. suspend/resume
> 2. boot up to the home screen
> 3. video playback on YouTube
>
> The suspend issue is a special GCE hardware issue: the GCE client
> driver's commands have already finished processing, but the GCE still
> pulls the DDR.
> Signed-off-by: Yongqiang Niu <yongqiang.niu@...iatek.com>
> ---
> changes since v2:
> 1. add definition GCE_CTRL_BY_SW and GCE_DDR_EN instead of magic number
> ---
>
> ---
> drivers/mailbox/mtk-cmdq-mailbox.c | 68 +++++++++++++++++++++++++++---
> 1 file changed, 63 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
> index 9465f9081515..bd63773b05fd 100644
> --- a/drivers/mailbox/mtk-cmdq-mailbox.c
> +++ b/drivers/mailbox/mtk-cmdq-mailbox.c
> @@ -38,6 +38,8 @@
> #define CMDQ_THR_PRIORITY 0x40
>
> #define GCE_GCTL_VALUE 0x48
> +#define GCE_CTRL_BY_SW GENMASK(18, 16)
> +#define GCE_DDR_EN GENMASK(2, 0)
>
> #define CMDQ_THR_ACTIVE_SLOT_CYCLES 0x3200
> #define CMDQ_THR_ENABLED 0x1
> @@ -80,16 +82,60 @@ struct cmdq {
> bool suspended;
> u8 shift_pa;
> bool control_by_sw;
> + bool sw_ddr_en;
> u32 gce_num;
> + atomic_t usage;
> + spinlock_t lock;
> };
>
> struct gce_plat {
> u32 thread_nr;
> u8 shift;
> bool control_by_sw;
> + bool sw_ddr_en;
> u32 gce_num;
> };
>
> +static s32 cmdq_clk_enable(struct cmdq *cmdq)
> +{
> + s32 usage, ret;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&cmdq->lock, flags);
All this locking is avoidable on all SoCs where (sw_ddr_en == false), which means
that this is needed only for one SoC (MT8186).
You can solve that by adding clk_enable/clk_disable callback pointers to the
gce_plat data, so that we get something like:
static int cmdq_clk_swddr_enable(struct cmdq *cmdq)
{
	/* lock, atomic_inc, clk_bulk_enable, writel(....); */
}

static int cmdq_clk_enable(struct cmdq *cmdq)
{
	return clk_bulk_enable(cmdq->gce_num, cmdq->clocks);
}

static const struct gce_plat gce_plat_v7 = {
	...........
	.clk_enable = cmdq_clk_swddr_enable,
	.clk_disable = cmdq_clk_swddr_disable,
	..........
};
Please take care of older SoCs' performance.
Regards,
Angelo