Message-ID: <99f279c5-e93d-ade6-cd97-56b3078da755@arm.com>
Date: Mon, 14 Oct 2019 16:46:37 +0100
From: Robin Murphy <robin.murphy@....com>
To: Steven Price <steven.price@....com>,
Daniel Vetter <daniel@...ll.ch>,
David Airlie <airlied@...ux.ie>, Rob Herring <robh@...nel.org>,
Tomeu Vizoso <tomeu.vizoso@...labora.com>
Cc: linux-kernel@...r.kernel.org,
Alyssa Rosenzweig <alyssa.rosenzweig@...labora.com>,
dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH] drm/panfrost: DMA map all pages shared with the GPU
On 14/10/2019 16:16, Steven Price wrote:
> Pages shared with the GPU are (often) not cache coherent with the CPU,
> so cache maintenance is required to flush the CPU's caches. This was
> already done when mapping pages on fault, but wasn't previously done
> when mapping a freshly allocated page.
>
> Fix this by moving the call to dma_map_sg() into mmu_map_sg(), ensuring
> that it is always called when pages are mapped onto the GPU. Since
> mmu_map_sg() can now fail, the code also has to handle an error
> return.
>
> Not performing this cache maintenance can cause errors in the GPU output
> (dirty CPU cache lines can be written back later, overwriting the GPU's
> output). In theory it also allows the GPU (and by extension user space)
> to observe the memory contents prior to sanitization.
For the non-heap case, aren't the pages already supposed to be mapped by
drm_gem_shmem_get_pages_sgt()?
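From what I remember of drm_gem_shmem_helper.c around this time, the
helper already maps the table when it first builds it - roughly this
(condensed, so worth double-checking against the actual source):

	sgt = drm_gem_shmem_get_sg_table(&shmem->base);
	if (IS_ERR(sgt))
		goto err_put_pages;

	/* Map the pages for use by the h/w. */
	dma_map_sg(obj->dev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);

	shmem->sgt = sgt;

If so, moving dma_map_sg() into mmu_map_sg() would end up mapping the
same sgt a second time on the non-heap path.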
(Hmm, maybe I should try hooking up the GPU SMMU on my Juno to serve as
a cheeky DMA-API-mishap detector...)
Robin.
> Fixes: f3ba91228e8e ("drm/panfrost: Add initial panfrost driver")
> Signed-off-by: Steven Price <steven.price@....com>
> ---
> drivers/gpu/drm/panfrost/panfrost_mmu.c | 20 ++++++++++++--------
> 1 file changed, 12 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> index bdd990568476..0495e48c238d 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -248,6 +248,9 @@ static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
> struct io_pgtable_ops *ops = mmu->pgtbl_ops;
> u64 start_iova = iova;
>
> + if (!dma_map_sg(pfdev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL))
> + return -EINVAL;
> +
> for_each_sg(sgt->sgl, sgl, sgt->nents, count) {
> unsigned long paddr = sg_dma_address(sgl);
> size_t len = sg_dma_len(sgl);
> @@ -275,6 +278,7 @@ int panfrost_mmu_map(struct panfrost_gem_object *bo)
> struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
> struct sg_table *sgt;
> int prot = IOMMU_READ | IOMMU_WRITE;
> + int ret;
>
> if (WARN_ON(bo->is_mapped))
> return 0;
> @@ -286,10 +290,12 @@ int panfrost_mmu_map(struct panfrost_gem_object *bo)
> if (WARN_ON(IS_ERR(sgt)))
> return PTR_ERR(sgt);
>
> - mmu_map_sg(pfdev, bo->mmu, bo->node.start << PAGE_SHIFT, prot, sgt);
> - bo->is_mapped = true;
> + ret = mmu_map_sg(pfdev, bo->mmu, bo->node.start << PAGE_SHIFT,
> + prot, sgt);
> + if (ret == 0)
> + bo->is_mapped = true;
>
> - return 0;
> + return ret;
> }
>
> void panfrost_mmu_unmap(struct panfrost_gem_object *bo)
> @@ -503,12 +509,10 @@ int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, u64 addr)
> if (ret)
> goto err_pages;
>
> - if (!dma_map_sg(pfdev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL)) {
> - ret = -EINVAL;
> + ret = mmu_map_sg(pfdev, bo->mmu, addr,
> + IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
> + if (ret)
> goto err_map;
> - }
> -
> - mmu_map_sg(pfdev, bo->mmu, addr, IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
>
> bo->is_mapped = true;
>
>