Message-ID: <20190315090646.GC4470@infradead.org>
Date: Fri, 15 Mar 2019 02:06:46 -0700
From: Christoph Hellwig <hch@...radead.org>
To: John Stultz <john.stultz@...aro.org>
Cc: lkml <linux-kernel@...r.kernel.org>,
Laura Abbott <labbott@...hat.com>,
Benjamin Gaignard <benjamin.gaignard@...aro.org>,
Greg KH <gregkh@...uxfoundation.org>,
Sumit Semwal <sumit.semwal@...aro.org>,
Liam Mark <lmark@...eaurora.org>,
Brian Starkey <Brian.Starkey@....com>,
"Andrew F . Davis" <afd@...com>, Chenbo Feng <fengc@...gle.com>,
Alistair Strachan <astrachan@...gle.com>,
dri-devel@...ts.freedesktop.org
Subject: Re: [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers
> +static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
> +{
> +	struct scatterlist *sg;
> +	int i, j;
> +	void *vaddr;
> +	pgprot_t pgprot;
> +	struct sg_table *table = buffer->sg_table;
> +	int npages = PAGE_ALIGN(buffer->heap_buffer.size) / PAGE_SIZE;
> +	struct page **pages = vmalloc(array_size(npages,
> +						 sizeof(struct page *)));
> +	struct page **tmp = pages;
> +
> +	if (!pages)
> +		return ERR_PTR(-ENOMEM);
> +
> +	pgprot = PAGE_KERNEL;
> +
> +	for_each_sg(table->sgl, sg, table->nents, i) {
> +		int npages_this_entry = PAGE_ALIGN(sg->length) / PAGE_SIZE;
> +		struct page *page = sg_page(sg);
> +
> +		WARN_ON(i >= npages);
> +		for (j = 0; j < npages_this_entry; j++)
> +			*(tmp++) = page++;
This should probably use nth_page.
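Something like this (untested, just to illustrate) avoids assuming the
pages of an sg entry are contiguous in the memmap:

	for (j = 0; j < npages_this_entry; j++)
		*(tmp++) = nth_page(page, j);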
That being said I really wish we could have a more iterative version
of vmap, where the caller does a get_vm_area_caller and then adds
each chunk using another call, including the possibility of mapping
larger-than-PAGE_SIZE contiguous chunks.  Any chance you could look
into that?
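Roughly this kind of flow is what I have in mind, reusing the sg/table
locals from above.  Note that vmap_add_chunk() is entirely made up,
nothing like it exists today - that is the part that would need to be
written:

	struct vm_struct *area;
	unsigned long offset = 0;

	area = get_vm_area_caller(buffer->heap_buffer.size, VM_MAP,
				  __builtin_return_address(0));
	if (!area)
		return ERR_PTR(-ENOMEM);

	for_each_sg(table->sgl, sg, table->nents, i) {
		/* hypothetical helper: map one physically contiguous chunk */
		vmap_add_chunk(area, offset, sg_phys(sg), sg->length,
			       PAGE_KERNEL);
		offset += sg->length;
	}

	return area->addr;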
> +		ret = remap_pfn_range(vma, addr, page_to_pfn(page), len,
> +				      vma->vm_page_prot);
So the same chunk could be mapped into userspace, vmapped in the
kernel, and later on also DMA mapped.  Who is going to take care of
cache aliasing?  I see nothing handling that anywhere in this series.
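To be explicit: I'd at least expect the begin/end_cpu_access hooks to
sync the device mappings, along these lines (sketch only - the
attachment struct name and its dev/table/list fields are assumed from
the rest of the series):

	static int dma_heap_begin_cpu_access(struct dma_buf *dmabuf,
					     enum dma_data_direction dir)
	{
		struct heap_helper_buffer *buffer = dmabuf->priv;
		struct dma_heaps_attachment *a;	/* assumed from the series */

		mutex_lock(&buffer->lock);
		list_for_each_entry(a, &buffer->attachments, list)
			dma_sync_sg_for_cpu(a->dev, a->table->sgl,
					    a->table->nents, dir);
		mutex_unlock(&buffer->lock);
		return 0;
	}

plus a matching dma_sync_sg_for_device() in end_cpu_access.  And even
that doesn't cover aliasing between the vmap and the userspace mapping.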
> +	if (buffer->kmap_cnt) {
> +		buffer->kmap_cnt++;
> +		return buffer->vaddr;
> +	}
> +	vaddr = dma_heap_map_kernel(buffer);
> +	if (WARN_ONCE(!vaddr,
> +		      "heap->ops->map_kernel should return ERR_PTR on error"))
> +		return ERR_PTR(-EINVAL);
> +	if (IS_ERR(vaddr))
> +		return vaddr;
> +	buffer->vaddr = vaddr;
> +	buffer->kmap_cnt++;
The cnt manipulation is odd.  The normal way to make this readable
is to use a postfix op on the check, as that makes it clear to everyone,
e.g.:
	if (buffer->kmap_cnt++)
		return buffer->vaddr;
	..
> +	buffer->kmap_cnt--;
> +	if (!buffer->kmap_cnt) {
> +		vunmap(buffer->vaddr);
> +		buffer->vaddr = NULL;
> +	}
Same here, just with a prefix op.
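I.e. something like (untested, mirroring the example above):

	if (!--buffer->kmap_cnt) {
		vunmap(buffer->vaddr);
		buffer->vaddr = NULL;
	}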
> +static inline void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
> +					    void (*free)(struct heap_helper_buffer *))
> +{
> +	buffer->private_flags = 0;
> +	buffer->priv_virt = NULL;
> +	mutex_init(&buffer->lock);
> +	buffer->kmap_cnt = 0;
> +	buffer->vaddr = NULL;
> +	buffer->sg_table = NULL;
> +	INIT_LIST_HEAD(&buffer->attachments);
> +	buffer->free = free;
> +}
There is absolutely no reason to inline this as far as I can tell.
Also it would seem much simpler to just let the caller assign the
free callback.
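I.e. a plain function in the helpers .c file and the assignment done
by the heap driver, something like this (sketch only, the caller-side
callback name is made up):

	void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer)
	{
		buffer->private_flags = 0;
		buffer->priv_virt = NULL;
		mutex_init(&buffer->lock);
		buffer->kmap_cnt = 0;
		buffer->vaddr = NULL;
		buffer->sg_table = NULL;
		INIT_LIST_HEAD(&buffer->attachments);
	}

with the heap driver then doing:

	INIT_HEAP_HELPER_BUFFER(buffer);
	buffer->free = system_heap_free;	/* made-up callback name */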