Message-ID: <20190722194002.GI104440@art_vandelay>
Date: Mon, 22 Jul 2019 15:40:02 -0400
From: Sean Paul <sean@...rly.run>
To: Rob Clark <robdclark@...il.com>
Cc: dri-devel@...ts.freedesktop.org,
Rob Clark <robdclark@...omium.org>,
Stephen Boyd <sboyd@...nel.org>, Sean Paul <sean@...rly.run>,
David Airlie <airlied@...ux.ie>,
Daniel Vetter <daniel@...ll.ch>, linux-arm-msm@...r.kernel.org,
freedreno@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] drm/msm: stop abusing dma_map/unmap for cache
On Sun, Jun 30, 2019 at 05:47:22AM -0700, Rob Clark wrote:
> From: Rob Clark <robdclark@...omium.org>
>
> Recently splats like this started showing up:
>
> WARNING: CPU: 4 PID: 251 at drivers/iommu/dma-iommu.c:451 __iommu_dma_unmap+0xb8/0xc0
> Modules linked in: ath10k_snoc ath10k_core fuse msm ath mac80211 uvcvideo cfg80211 videobuf2_vmalloc videobuf2_memops vide
> CPU: 4 PID: 251 Comm: kworker/u16:4 Tainted: G W 5.2.0-rc5-next-20190619+ #2317
> Hardware name: LENOVO 81JL/LNVNB161216, BIOS 9UCN23WW(V1.06) 10/25/2018
> Workqueue: msm msm_gem_free_work [msm]
> pstate: 80c00005 (Nzcv daif +PAN +UAO)
> pc : __iommu_dma_unmap+0xb8/0xc0
> lr : __iommu_dma_unmap+0x54/0xc0
> sp : ffff0000119abce0
> x29: ffff0000119abce0 x28: 0000000000000000
> x27: ffff8001f9946648 x26: ffff8001ec271068
> x25: 0000000000000000 x24: ffff8001ea3580a8
> x23: ffff8001f95ba010 x22: ffff80018e83ba88
> x21: ffff8001e548f000 x20: fffffffffffff000
> x19: 0000000000001000 x18: 00000000c00001fe
> x17: 0000000000000000 x16: 0000000000000000
> x15: ffff000015b70068 x14: 0000000000000005
> x13: 0003142cc1be1768 x12: 0000000000000001
> x11: ffff8001f6de9100 x10: 0000000000000009
> x9 : ffff000015b78000 x8 : 0000000000000000
> x7 : 0000000000000001 x6 : fffffffffffff000
> x5 : 0000000000000fff x4 : ffff00001065dbc8
> x3 : 000000000000000d x2 : 0000000000001000
> x1 : fffffffffffff000 x0 : 0000000000000000
> Call trace:
> __iommu_dma_unmap+0xb8/0xc0
> iommu_dma_unmap_sg+0x98/0xb8
> put_pages+0x5c/0xf0 [msm]
> msm_gem_free_work+0x10c/0x150 [msm]
> process_one_work+0x1e0/0x330
> worker_thread+0x40/0x438
> kthread+0x12c/0x130
> ret_from_fork+0x10/0x18
> ---[ end trace afc0dc5ab81a06bf ]---
>
> Not quite sure what triggered that, but we really shouldn't be abusing
> dma_{map,unmap}_sg() for cache maint.
>
> Signed-off-by: Rob Clark <robdclark@...omium.org>
> Cc: Stephen Boyd <sboyd@...nel.org>
Applied to -misc-fixes
Thanks,
Sean
> ---
> drivers/gpu/drm/msm/msm_gem.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> index d31d9f927887..3b84cbdcafa3 100644
> --- a/drivers/gpu/drm/msm/msm_gem.c
> +++ b/drivers/gpu/drm/msm/msm_gem.c
> @@ -108,7 +108,7 @@ static struct page **get_pages(struct drm_gem_object *obj)
> * because display controller, GPU, etc. are not coherent:
> */
> if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
> - dma_map_sg(dev->dev, msm_obj->sgt->sgl,
> + dma_sync_sg_for_device(dev->dev, msm_obj->sgt->sgl,
> msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
> }
>
> @@ -138,7 +138,7 @@ static void put_pages(struct drm_gem_object *obj)
> * GPU, etc. are not coherent:
> */
> if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
> - dma_unmap_sg(obj->dev->dev, msm_obj->sgt->sgl,
> + dma_sync_sg_for_cpu(obj->dev->dev, msm_obj->sgt->sgl,
> msm_obj->sgt->nents,
> DMA_BIDIRECTIONAL);
>
> --
> 2.20.1
>
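For anyone else who trips over the same WARN: the distinction the commit
message leans on is that dma_map_sg()/dma_unmap_sg() create and destroy a
DMA mapping (with an IOMMU behind the DMA API that includes the IOVA
bookkeeping that __iommu_dma_unmap() is complaining about), while
dma_sync_sg_for_device()/dma_sync_sg_for_cpu() only perform cache
maintenance on an existing mapping. A rough, untested sketch of that
split, with made-up names (this is not the msm code path, just an
illustration of the API contract), would be:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Illustration only: "example_buf" and these helpers are hypothetical,
 * used to show the DMA-API split described above.
 */
struct example_buf {
        struct device *dev;
        struct sg_table *sgt;
        bool mapped;
};

/* Create the DMA mapping once (on IOMMU systems this allocates IOVA). */
static int example_map(struct example_buf *buf)
{
        if (!dma_map_sg(buf->dev, buf->sgt->sgl, buf->sgt->orig_nents,
                        DMA_BIDIRECTIONAL))
                return -ENOMEM;
        buf->mapped = true;
        return 0;
}

/* Cache maintenance before the device touches the buffer; no mapping churn. */
static void example_sync_for_device(struct example_buf *buf)
{
        dma_sync_sg_for_device(buf->dev, buf->sgt->sgl, buf->sgt->orig_nents,
                               DMA_BIDIRECTIONAL);
}

/* Cache maintenance before the CPU reads the buffer back. */
static void example_sync_for_cpu(struct example_buf *buf)
{
        dma_sync_sg_for_cpu(buf->dev, buf->sgt->sgl, buf->sgt->orig_nents,
                            DMA_BIDIRECTIONAL);
}

/* Only unmap what dma_map_sg() actually mapped, with the same nents. */
static void example_unmap(struct example_buf *buf)
{
        if (buf->mapped)
                dma_unmap_sg(buf->dev, buf->sgt->sgl, buf->sgt->orig_nents,
                             DMA_BIDIRECTIONAL);
        buf->mapped = false;
}

As I read the hunks quoted above, the patch simply moves msm's WC/uncached
buffers over to the sync variants, so no mapping state gets created or torn
down behind the IOMMU layer's back.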
--
Sean Paul, Software Engineer, Google / Chromium OS