Message-ID: <484582f2-b027-8406-b6ff-f50c0d1ba7b0@arm.com>
Date: Tue, 1 Sep 2020 18:24:17 +0100
From: Robin Murphy <robin.murphy@....com>
To: Marek Szyprowski <m.szyprowski@...sung.com>,
dri-devel@...ts.freedesktop.org, iommu@...ts.linux-foundation.org,
linaro-mm-sig@...ts.linaro.org, linux-kernel@...r.kernel.org
Cc: Christoph Hellwig <hch@....de>,
Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
linux-arm-kernel@...ts.infradead.org,
David Airlie <airlied@...ux.ie>,
Daniel Vetter <daniel@...ll.ch>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>
Subject: Re: [PATCH v9 02/32] drm: prime: use sgtable iterators in
drm_prime_sg_to_page_addr_arrays()
On 2020-08-26 07:32, Marek Szyprowski wrote:
> Replace the current hand-crafted code for extracting pages and DMA
> addresses from the given scatterlist with much more robust code based
> on the generic scatterlist iterators and the recently introduced
> sg_table-based wrappers. The resulting code is simple and easy to
> understand, so the comment describing the old code is no longer
> needed.
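The distinction the deleted comment drew is still worth keeping in mind
when reading the new loops: after dma_map_sgtable(), the CPU and DMA
views of the table may differ, since an IOMMU can coalesce entries.
Roughly (illustrative sketch only, the pr_debug() calls are just
placeholders):

	struct scatterlist *sg;
	unsigned int i;

	/* CPU view: one entry per physical memory chunk */
	for_each_sgtable_sg(sgt, sg, i)
		pr_debug("cpu chunk: %u bytes\n", sg->length);

	/* DMA view: possibly fewer, larger chunks after IOMMU merging */
	for_each_sgtable_dma_sg(sgt, sg, i)
		pr_debug("dma chunk: %u bytes\n", sg_dma_len(sg));

That is exactly why the new code walks the pages and the DMA addresses
in two separate passes.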
Is removing the WARN_ON()s intentional? It certainly seems like it would
be a genuine driver bug if the caller asked for addresses but didn't
allocate appropriately-sized arrays. Might be worth noting either way.
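If keeping them is preferred, it should only cost a WARN_ON() around
each new bounds check, e.g. (untested sketch of the page loop below):

	if (pages) {
		for_each_sgtable_page(sgt, &page_iter, 0) {
			if (WARN_ON(p - pages >= max_entries))
				return -1;
			*p++ = sg_page_iter_page(&page_iter);
		}
	}

and likewise for the DMA loop.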
I'm also assuming this isn't called in performance-critical paths with
massive lists such that the two separate iterations might have a
noticeable impact.
Nits aside,
Reviewed-by: Robin Murphy <robin.murphy@....com>
> Signed-off-by: Marek Szyprowski <m.szyprowski@...sung.com>
> Reviewed-by: Andrzej Hajda <a.hajda@...sung.com>
> ---
> drivers/gpu/drm/drm_prime.c | 49 ++++++++++++-------------------------
> 1 file changed, 15 insertions(+), 34 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 4ed5ed1f078c..5d181bf60a44 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -990,45 +990,26 @@ EXPORT_SYMBOL(drm_gem_prime_import);
> int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> dma_addr_t *addrs, int max_entries)
> {
> - unsigned count;
> - struct scatterlist *sg;
> - struct page *page;
> - u32 page_len, page_index;
> - dma_addr_t addr;
> - u32 dma_len, dma_index;
> -
> - /*
> - * Scatterlist elements contains both pages and DMA addresses, but
> - * one should not assume a 1:1 relation between them. The sg->length is
> - * the size of the physical memory chunk described by the sg->page,
> - * while sg_dma_len(sg) is the size of the DMA (IO virtual) chunk
> - * described by the sg_dma_address(sg).
> - */
> - page_index = 0;
> - dma_index = 0;
> - for_each_sg(sgt->sgl, sg, sgt->nents, count) {
> - page_len = sg->length;
> - page = sg_page(sg);
> - dma_len = sg_dma_len(sg);
> - addr = sg_dma_address(sg);
> -
> - while (pages && page_len > 0) {
> - if (WARN_ON(page_index >= max_entries))
> + struct sg_dma_page_iter dma_iter;
> + struct sg_page_iter page_iter;
> + struct page **p = pages;
> + dma_addr_t *a = addrs;
> +
> + if (pages) {
> + for_each_sgtable_page(sgt, &page_iter, 0) {
> + if (p - pages >= max_entries)
> return -1;
> - pages[page_index] = page;
> - page++;
> - page_len -= PAGE_SIZE;
> - page_index++;
> + *p++ = sg_page_iter_page(&page_iter);
> }
> - while (addrs && dma_len > 0) {
> - if (WARN_ON(dma_index >= max_entries))
> + }
> + if (addrs) {
> + for_each_sgtable_dma_page(sgt, &dma_iter, 0) {
> + if (a - addrs >= max_entries)
> return -1;
> - addrs[dma_index] = addr;
> - addr += PAGE_SIZE;
> - dma_len -= PAGE_SIZE;
> - dma_index++;
> + *a++ = sg_page_iter_dma_address(&dma_iter);
> }
> }
> +
> return 0;
> }
> EXPORT_SYMBOL(drm_prime_sg_to_page_addr_arrays);
>
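P.S. For completeness, the calling convention is unchanged: a caller
that wants both views sizes both arrays to the buffer's page count.
Hypothetical importer snippet, names invented:

	unsigned int npages = size >> PAGE_SHIFT;
	struct page **pages;
	dma_addr_t *addrs;
	int ret;

	pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
	addrs = kvmalloc_array(npages, sizeof(*addrs), GFP_KERNEL);
	if (!pages || !addrs)
		goto err_free;		/* kvfree(NULL) is a no-op */

	/* A non-zero return means the arrays were too small: driver bug */
	ret = drm_prime_sg_to_page_addr_arrays(sgt, pages, addrs, npages);
	if (ret)
		goto err_free;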