Message-ID: <CAOMZO5DgnB0kuSTxg1=ngJYiRvbq6bqBC4K-R5nQMzEinBYq7A@mail.gmail.com>
Date: Tue, 8 Oct 2019 13:11:19 -0300
From: Fabio Estevam <festevam@...il.com>
To: Rob Clark <robdclark@...il.com>
Cc: DRI mailing list <dri-devel@...ts.freedesktop.org>,
Rob Clark <robdclark@...omium.org>,
Sean Paul <sean@...rly.run>, David Airlie <airlied@...ux.ie>,
Daniel Vetter <daniel@...ll.ch>,
"open list:DRM DRIVER FOR MSM ADRENO GPU"
<linux-arm-msm@...r.kernel.org>,
"open list:DRM DRIVER FOR MSM ADRENO GPU"
<freedreno@...ts.freedesktop.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] drm/msm: Use the correct dma_sync calls harder

Hi Rob,

On Wed, Sep 4, 2019 at 2:19 PM Rob Clark <robdclark@...il.com> wrote:
>
> From: Rob Clark <robdclark@...omium.org>
>
> Looks like the dma_sync calls don't do what we want on armv7 either.
> Fixes the following oops:
>
> Unable to handle kernel paging request at virtual address 50001000
> pgd = (ptrval)
> [50001000] *pgd=00000000
> Internal error: Oops: 805 [#1] SMP ARM
> Modules linked in:
> CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.3.0-rc6-00271-g9f159ae07f07 #4
> Hardware name: Freescale i.MX53 (Device Tree Support)
> PC is at v7_dma_clean_range+0x20/0x38
> LR is at __dma_page_cpu_to_dev+0x28/0x90
> pc : [<c011c76c>] lr : [<c01181c4>] psr: 20000013
> sp : d80b5a88 ip : de96c000 fp : d840ce6c
> r10: 00000000 r9 : 00000001 r8 : d843e010
> r7 : 00000000 r6 : 00008000 r5 : ddb6c000 r4 : 00000000
> r3 : 0000003f r2 : 00000040 r1 : 50008000 r0 : 50001000
> Flags: nzCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none
> Control: 10c5387d Table: 70004019 DAC: 00000051
> Process swapper/0 (pid: 1, stack limit = 0x(ptrval))
>
> Signed-off-by: Rob Clark <robdclark@...omium.org>
> Fixes: 3de433c5b38a ("drm/msm: Use the correct dma_sync calls in msm_gem")
> Tested-by: Fabio Estevam <festevam@...il.com>

I see this one got applied in linux-next already.
Could it be sent to 5.4-rc, please?
i.MX53 boards cannot boot a mainline kernel because of this.
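
For anyone else hitting this on armv7: the oops above is cache
maintenance (v7_dma_clean_range) faulting on an unmapped address, so
the plain dma_sync_sg_for_*() calls are evidently not safe for these
buffers. Below is a minimal sketch of the kind of fallback the patch
subject suggests, with made-up example_* helpers and a CONFIG_ARM64
split that are my assumptions, not the actual msm_gem diff:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Illustrative only -- NOT the actual msm_gem change.  The idea: keep
 * the streaming sync calls where they are known to work (arm64), and
 * on armv7 fall back to map/unmap, which performs the equivalent
 * cache maintenance while also establishing a mapping the DMA API
 * knows about.
 */
static void example_sync_for_device(struct device *dev, struct sg_table *sgt)
{
	if (IS_ENABLED(CONFIG_ARM64))
		dma_sync_sg_for_device(dev, sgt->sgl, sgt->nents,
				       DMA_BIDIRECTIONAL);
	else
		/* real code must check the return value (0 == failure) */
		dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
}

static void example_sync_for_cpu(struct device *dev, struct sg_table *sgt)
{
	if (IS_ENABLED(CONFIG_ARM64))
		dma_sync_sg_for_cpu(dev, sgt->sgl, sgt->nents,
				    DMA_BIDIRECTIONAL);
	else
		dma_unmap_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
}

Whether the real fix keys off CONFIG_ARM64, get_dma_ops(), or
something else entirely is a design choice I am guessing at here; the
sketch only shows the two streaming-DMA API families involved.
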
Thanks