Message-ID: <CAAFQd5Agy-hHsPPweMq-EvpgvnUrSFyG-KTm6HfmL=ac738xFw@mail.gmail.com>
Date: Wed, 28 Nov 2018 12:09:03 +0900
From: Tomasz Figa <tfiga@...omium.org>
To: Vivek Gautam <vivek.gautam@...eaurora.org>
Cc: David Airlie <airlied@...ux.ie>, Rob Clark <robdclark@...il.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
freedreno <freedreno@...ts.freedesktop.org>,
dri-devel <dri-devel@...ts.freedesktop.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
jcrouse@...eaurora.org, Sean Paul <seanpaul@...omium.org>
Subject: Re: [PATCH v2 1/1] drm: msm: Replace dma_map_sg with dma_sync_sg*
Hi Vivek,
On Tue, Nov 27, 2018 at 6:37 AM Vivek Gautam
<vivek.gautam@...eaurora.org> wrote:
>
> dma_map_sg() expects a DMA domain. However, the drm devices
> have traditionally been using an unmanaged iommu domain, which
> is not a DMA type. Using the dma mapping APIs with such a
> domain is bad.
>
> Replace dma_map_sg() calls with dma_sync_sg_for_device{|cpu}()
> to do the cache maintenance.
>
> Signed-off-by: Vivek Gautam <vivek.gautam@...eaurora.org>
> Suggested-by: Tomasz Figa <tfiga@...omium.org>
> Cc: Rob Clark <robdclark@...il.com>
> Cc: Jordan Crouse <jcrouse@...eaurora.org>
> Cc: Sean Paul <seanpaul@...omium.org>
> ---
>
> Changes since v1:
>  - Addressed Jordan's and Tomasz's comments:
>    - Moved the sg dma addresses preparation out of the conditional
>      check to the main path, so we do it irrespective of whether
>      the buffer is cached or uncached.
>    - Enhanced the comment to explain this dma addresses preparation.
>
Thanks for the patch. Some comments inline.
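(For context, a minimal sketch of why dma_map_sg() is the wrong tool
here; the function name below is hypothetical, but the iommu API calls
are the standard kernel ones. Attaching a driver-allocated unmanaged
domain replaces the DMA API's default domain, so dma_map_sg() would no
longer program the IOMMU the device actually uses; only the CPU cache
maintenance part of its job is still needed, which is exactly what
dma_sync_sg_*() provides.)

    #include <linux/iommu.h>
    #include <linux/platform_device.h>

    static struct iommu_domain *gpu_attach_unmanaged_domain(struct device *dev)
    {
            /* Unmanaged domain: the driver, not the DMA API, owns the
             * IOMMU page tables and maps buffers into it itself. */
            struct iommu_domain *domain =
                    iommu_domain_alloc(&platform_bus_type);

            if (!domain)
                    return NULL;

            /* Attaching detaches the device from its default DMA domain. */
            if (iommu_attach_device(domain, dev)) {
                    iommu_domain_free(domain);
                    return NULL;
            }

            return domain;
    }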
> drivers/gpu/drm/msm/msm_gem.c | 31 ++++++++++++++++++++++---------
> 1 file changed, 22 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> index 00c795ced02c..1811ac23a31c 100644
> --- a/drivers/gpu/drm/msm/msm_gem.c
> +++ b/drivers/gpu/drm/msm/msm_gem.c
> @@ -81,6 +81,8 @@ static struct page **get_pages(struct drm_gem_object *obj)
> struct drm_device *dev = obj->dev;
> struct page **p;
> int npages = obj->size >> PAGE_SHIFT;
> + struct scatterlist *s;
> + int i;
>
> if (use_pages(obj))
> p = drm_gem_get_pages(obj);
> @@ -104,12 +106,21 @@ static struct page **get_pages(struct drm_gem_object *obj)
> return ptr;
> }
>
> - /* For non-cached buffers, ensure the new pages are clean
> + /*
> + * dma_sync_sg_*() flush the physical pages, so point
> + * sg->dma_address to the physical ones for the right behavior.
The two halves of the sentence don't really relate to each other. An
sglist has the `page` field for the purpose of pointing to physical
pages. The `dma_address` field is for DMA addresses, which are not
equivalent to physical addresses. I'd rewrite it like this:
/*
 * Some implementations of the DMA mapping ops expect
 * physical addresses of the pages to be stored as DMA
 * addresses of the sglist entries. To work around it,
 * set them here explicitly.
 */
> + */
> + for_each_sg(msm_obj->sgt->sgl, s, msm_obj->sgt->nents, i)
> + sg_dma_address(s) = sg_phys(s);
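(Aside on the two fields the loop above touches: sg_phys() computes
the physical address from the page pointer, while sg_dma_address() is
the slot dma_map_sg() normally fills with a device-visible address. A
compressed illustration, using only the standard scatterlist
accessors:)

    /* For a mapped sglist entry s: */
    dma_addr_t phys = sg_phys(s);        /* = page_to_phys(sg_page(s)) + s->offset */
    dma_addr_t dma  = sg_dma_address(s); /* device-visible address set by the DMA
                                          * API; an IOVA behind an IOMMU, so in
                                          * general dma != phys */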
> +
> + /*
> + * For non-cached buffers, ensure the new pages are clean
> * because display controller, GPU, etc. are not coherent:
> */
> - if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
> - dma_map_sg(dev->dev, msm_obj->sgt->sgl,
> - msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
> + if (msm_obj->flags & (MSM_BO_WC | MSM_BO_UNCACHED))
> + dma_sync_sg_for_device(dev->dev, msm_obj->sgt->sgl,
> + msm_obj->sgt->nents,
> + DMA_TO_DEVICE);
Why change from DMA_BIDIRECTIONAL?
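(If the direction were kept, the converted call would be, sketching
with the same arguments as above:)

    dma_sync_sg_for_device(dev->dev, msm_obj->sgt->sgl,
                           msm_obj->sgt->nents, DMA_BIDIRECTIONAL);

dma_sync_sg_for_device() accepts DMA_BIDIRECTIONAL just like
dma_map_sg() did, so the direction change is a separate decision from
the map-to-sync conversion.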
> }
>
> return msm_obj->pages;
> @@ -133,14 +144,16 @@ static void put_pages(struct drm_gem_object *obj)
>
> if (msm_obj->pages) {
> if (msm_obj->sgt) {
> - /* For non-cached buffers, ensure the new
> + /*
> + * For non-cached buffers, ensure the new
> * pages are clean because display controller,
> * GPU, etc. are not coherent:
> */
> - if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
> - dma_unmap_sg(obj->dev->dev, msm_obj->sgt->sgl,
> - msm_obj->sgt->nents,
> - DMA_BIDIRECTIONAL);
> + if (msm_obj->flags & (MSM_BO_WC | MSM_BO_UNCACHED))
> + dma_sync_sg_for_cpu(obj->dev->dev,
> + msm_obj->sgt->sgl,
> + msm_obj->sgt->nents,
> + DMA_FROM_DEVICE);
Ditto.
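(Sketch of the matching CPU-side call with the original direction
kept, mirroring the above:)

    dma_sync_sg_for_cpu(obj->dev->dev, msm_obj->sgt->sgl,
                        msm_obj->sgt->nents, DMA_BIDIRECTIONAL);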
Best regards,
Tomasz