Message-ID: <881eb7846b0e096e745557eee4a05796278aa63f.camel@mediatek.com>
Date: Tue, 5 Mar 2024 06:23:06 +0000
From: Jason-JH Lin (林睿祥) <Jason-JH.Lin@...iatek.com>
To: CK Hu (胡俊光) <ck.hu@...iatek.com>,
	"jassisinghbrar@...il.com" <jassisinghbrar@...il.com>,
	"matthias.bgg@...il.com" <matthias.bgg@...il.com>, "chunkuang.hu@...nel.org"
	<chunkuang.hu@...nel.org>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-mediatek@...ts.infradead.org" <linux-mediatek@...ts.infradead.org>,
	Singo Chang (張興國) <Singo.Chang@...iatek.com>,
	Jason-ch Chen (陳建豪)
	<Jason-ch.Chen@...iatek.com>, Shawn Sung (宋孝謙)
	<Shawn.Sung@...iatek.com>, Nancy Lin (林欣螢)
	<Nancy.Lin@...iatek.com>, Project_Global_Chrome_Upstream_Group
	<Project_Global_Chrome_Upstream_Group@...iatek.com>,
	"linux-arm-kernel@...ts.infradead.org"
	<linux-arm-kernel@...ts.infradead.org>,
	"angelogioacchino.delregno@...labora.com"
	<angelogioacchino.delregno@...labora.com>
Subject: Re: [RESEND, PATCH 3/5] soc: mediatek: mtk-cmdq: Add
 cmdq_pkt_poll_addr() function

On Tue, 2024-03-05 at 03:51 +0000, CK Hu (胡俊光) wrote:
> Hi, Jason:
> 
> On Tue, 2024-03-05 at 03:37 +0000, Jason-JH Lin (林睿祥) wrote:
> > On Tue, 2024-03-05 at 03:26 +0000, CK Hu (胡俊光) wrote:
> > > Hi, Jason:
> > > 
> > > On Mon, 2024-03-04 at 16:04 +0000, Jason-JH Lin (林睿祥) wrote:
> > > > Hi CK,
> > > > 
> > > > Thanks for the reviews.
> > > > 
> > > > On Mon, 2024-03-04 at 03:11 +0000, CK Hu (胡俊光) wrote:
> > > > > Hi, Jason:
> > > > > 
> > > > > On Fri, 2024-03-01 at 22:44 +0800, Jason-JH.Lin wrote:
> > > > > > Add the cmdq_pkt_poll_addr() function so that CMDQ users can
> > > > > > build an instruction that polls a specific hardware register
> > > > > > address and checks its value, with or without a mask.
> > > > > > 
> > > > > > POLL is an old operation in GCE, so it does not support SPR and
> > > > > > CMDQ_CODE_LOGIC. CMDQ users need to use a GPR and CMDQ_CODE_MASK
> > > > > > to move the polling register address into the GPR when building
> > > > > > the instruction. This is done in cmdq_pkt_poll_addr().
> > > > > > 
> > > > > > Signed-off-by: Jason-JH.Lin <jason-jh.lin@...iatek.com>
> > > > > > ---
> > > > > >  drivers/soc/mediatek/mtk-cmdq-helper.c | 38 ++++++++++++++++++++++++++
> > > > > >  include/linux/soc/mediatek/mtk-cmdq.h  | 16 +++++++++++
> > > > > >  2 files changed, 54 insertions(+)
> > > > > > 
> > > > > > diff --git a/drivers/soc/mediatek/mtk-cmdq-helper.c b/drivers/soc/mediatek/mtk-cmdq-helper.c
> > > > > > index 3a1e47ad8a41..2e9fc9bb1183 100644
> > > > > > --- a/drivers/soc/mediatek/mtk-cmdq-helper.c
> > > > > > +++ b/drivers/soc/mediatek/mtk-cmdq-helper.c
> > > > > > @@ -12,6 +12,7 @@
> > > > > >  
> > > > > >  #define CMDQ_WRITE_ENABLE_MASK	BIT(0)
> > > > > >  #define CMDQ_POLL_ENABLE_MASK	BIT(0)
> > > > > > +#define CMDQ_POLL_HIGH_ADDR_GPR	(14)
> > > > > 
> > > > > I think there are multiple GPRs and you use #14 to store the high
> > > > > address. I would like you to list all the GPRs and not limit the
> > > > > usage of each GPR. The question is: why limit #14 to the high
> > > > > address? If the GPRs are shared by all threads, there should be a
> > > > > mechanism to manage GPR usage so that client drivers can
> > > > > allocate/free GPRs.
> > > > 
> > > > Yes, there are 16 GPRs, GPR_R0 ~ GPR_R15, and they are shared by all
> > > > threads, but GPR_R0 and GPR_R1 are used by the GCE hardware itself.
> > > > 
> > > > I think users may not know which GPR is available, so the CMDQ driver
> > > > should manage GPR usage instead of having it configured by the user.
> > > > 
> > > > Currently we only use one dedicated GPR for polling, so I defined it
> > > > in the CMDQ driver to keep things simple.
> > > 
> > > If thread 1 polls address A first, the GPR is set to A. But if the
> > > poll time exceeds GCE_THD_SLOT_CYCLES, GCE switches to thread 2, and
> > > when thread 2 polls address B, the GPR is set to B. When GCE later
> > > switches back to thread 1 and executes its poll command, the GPR still
> > > holds B, so thread 1 would poll address B, which is wrong.
> > > How do you solve this problem?
> > > 
> > 
> > If the POLL instruction had a timeout, this could be a problem.
> > 
> > But POLL is a legacy operation: it does not context switch when the
> > execution time exceeds GCE_THR_SLOT_CYCLES. That is why we added the
> > comment "All GCE hardware threads will be blocked by this instruction"
> > to the kerneldoc.
> > 
> > And currently we don't set the GCE hardware timeout in
> > CMDQ_THR_INST_TIMEOUT_CYCLES, so it will poll forever until the polled
> > value is set...
> 
> If the ISP poll time is longer than a vblank, it may cause display
> disorder. When I see display disorder, can I find out whether an ISP
> poll triggered it?
> 
If we get an mtk_drm_ddp_irq timeout error when the display disorder
happens, we need to dump all the command buffers and the current PC of
every executing GCE thread by modifying the CMDQ driver code, and then
check which instruction at that PC may cause the problem. A rough sketch
of that kind of dump is below.
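
For reference, a debug-only helper like this in the CMDQ mailbox driver
is what I mean; the register offsets and the struct cmdq fields
(thread_nr, base) are only illustrative here and would have to match
what mtk-cmdq-mailbox.c really uses:

/*
 * Debug-only sketch: walk every GCE thread and print its current PC so
 * a display disorder can be correlated with the instruction being
 * executed at that moment.
 */
static void cmdq_dump_thread_pc(struct cmdq *cmdq)
{
	int i;

	for (i = 0; i < cmdq->thread_nr; i++) {
		void __iomem *thr_base = cmdq->base + CMDQ_THR_BASE +
					 CMDQ_THR_SIZE * i;
		u32 curr = readl(thr_base + CMDQ_THR_CURR_ADDR);
		u32 end = readl(thr_base + CMDQ_THR_END_ADDR);

		/* The PC shows which instruction the thread is stuck on. */
		pr_info("GCE thread %d: PC=0x%08x END=0x%08x\n",
			i, curr, end);
	}
}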

If there is no timeout error when the display disorder happens, then we
have no idea who caused the problem. We can only check whether the frame
sequence id seen in the ISP driver's frame-done callback is out of order.

Because the CMDQ driver currently does not set a hardware timeout IRQ
and does not use a software timer to handle timeouts, we won't know when
a poll instruction is blocking. ISP client drivers need to add their own
timeout handling for each cmdq_pkt they send to the mailbox channel,
along the lines of the sketch below.
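
Something like this on the client side would at least bound the wait.
The isp_dev structure, its field names and the 500 ms value are purely
illustrative; only the mailbox and completion calls are real kernel
APIs:

#include <linux/completion.h>
#include <linux/mailbox_client.h>
#include <linux/soc/mediatek/mtk-cmdq.h>

/* Hypothetical ISP client state, just to make the sketch self-contained. */
struct isp_dev {
	struct device *dev;
	struct mbox_client cmdq_cl;	/* cmdq_cl.rx_callback = isp_cmdq_done */
	struct mbox_chan *cmdq_chan;	/* from mbox_request_channel() */
	struct completion cmdq_done;
};

/* GCE completion callback: just signal whoever is waiting. */
static void isp_cmdq_done(struct mbox_client *cl, void *msg)
{
	struct isp_dev *isp = container_of(cl, struct isp_dev, cmdq_cl);

	complete(&isp->cmdq_done);
}

static int isp_send_pkt_with_timeout(struct isp_dev *isp,
				     struct cmdq_pkt *pkt)
{
	reinit_completion(&isp->cmdq_done);

	mbox_send_message(isp->cmdq_chan, pkt);
	mbox_client_txdone(isp->cmdq_chan, 0);

	/* Do not rely on GCE finishing: give up after 500 ms and recover. */
	if (!wait_for_completion_timeout(&isp->cmdq_done,
					 msecs_to_jiffies(500))) {
		dev_err(isp->dev, "cmdq pkt timeout, poll may be stuck\n");
		return -ETIMEDOUT;
	}

	return 0;
}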

Regards,
Jason-JH.Lin

> Regards,
> CK
> 
> > 
> > Regards,
> > Jason-JH.Lin
> > 
> > > Regards,
> > > CK
> > > 
