Message-ID: <f1100bd6-dd98-55a9-a92f-1cad919f235f@amd.com>
Date: Fri, 20 Apr 2018 12:44:01 +0200
From: Christian König <christian.koenig@....com>
To: Christoph Hellwig <hch@...radead.org>
Cc: Jerome Glisse <jglisse@...hat.com>,
"moderated list:DMA BUFFER SHARING FRAMEWORK"
<linaro-mm-sig@...ts.linaro.org>,
"open list:DMA BUFFER SHARING FRAMEWORK"
<linux-media@...r.kernel.org>,
dri-devel <dri-devel@...ts.freedesktop.org>,
amd-gfx list <amd-gfx@...ts.freedesktop.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Logan Gunthorpe <logang@...tatee.com>,
Dan Williams <dan.j.williams@...el.com>
Subject: Re: [PATCH 4/8] dma-buf: add peer2peer flag
On 20.04.2018 at 12:17, Christoph Hellwig wrote:
> On Fri, Apr 20, 2018 at 10:58:50AM +0200, Christian König wrote:
>>> Yes there's a bit of a layering violation insofar as drivers really
>>> shouldn't each have their own copy of "how do I convert a piece of dma
>>> memory into dma-buf", but that doesn't render the interface a bad idea.
>> Completely agree on that.
>>
>> What we need is an sg_alloc_table_from_resources(dev, resources,
>> num_resources) which does the handling common to all drivers.
> A structure that contains
>
> {page,offset,len} + {dma_addr+dma_len}
>
> is not a good container for storing
>
> {virt addr, dma_addr, len}
>
> no matter what interface you build around it.
Why not? I mean at least for my use case we actually don't need the
virtual address.
What we need is {dma_addr+dma_len} in a consistent interface which can
come from both {page,offset,len} and {resource, len}.
What I actually don't need is separate handling for system memory and
resources, but that is exactly what we get when we don't use sg_table.
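
To make that a bit more concrete, here is a rough sketch of the kind of
helper I have in mind (completely untested, and the struct/function
names are made up purely for illustration -- the point is only that the
same {dma_addr, dma_len} chunk can be filled from a page as well as
from a resource):

#include <linux/dma-mapping.h>
#include <linux/ioport.h>

/* One DMA address range as seen by the importing device. */
struct dma_chunk {
	dma_addr_t dma_addr;
	size_t dma_len;
};

/* Fill a chunk from ordinary system memory. */
static int dma_chunk_from_page(struct device *dev, struct page *page,
			       unsigned long offset, size_t len,
			       struct dma_chunk *chunk)
{
	chunk->dma_addr = dma_map_page(dev, page, offset, len,
				       DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, chunk->dma_addr))
		return -ENOMEM;
	chunk->dma_len = len;
	return 0;
}

/* Fill a chunk from an MMIO resource, e.g. a PCI BAR. */
static int dma_chunk_from_resource(struct device *dev,
				   struct resource *res,
				   struct dma_chunk *chunk)
{
	chunk->dma_addr = dma_map_resource(dev, res->start,
					   resource_size(res),
					   DMA_BIDIRECTIONAL, 0);
	if (dma_mapping_error(dev, chunk->dma_addr))
		return -ENOMEM;
	chunk->dma_len = resource_size(res);
	return 0;
}

The importer then only ever sees an array of those chunks, no matter
where the memory came from.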
Christian.
> And that is discounting
> all the problems around mapping coherent allocations for other devices,
> or the iommu merging problem we are having another thread on.
>
> So let's come up with a better high level interface first, and then
> worry about how to implement it in the low-level dma-mapping interface
> second. Especially given that my consolidation of the dma_map_ops
> implementations is in full swing and there shouldn't be all that many
> left to bother with.
>
> So first question: do you actually care about having multiple pairs of
> the above, or can you, instead of multiple chunks, just deal with a
> single one of the above? In that case we really should not need that
> many new interfaces, as dma_map_resource will be all you need anyway.
>
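
(For reference, a single chunk really only needs the existing
dma_map_resource()/dma_unmap_resource() pair; roughly, and untested,
with dev being the importing device and bar the exporter's resource:)

	dma_addr_t addr;

	addr = dma_map_resource(dev, bar->start, resource_size(bar),
				DMA_BIDIRECTIONAL, 0);
	if (dma_mapping_error(dev, addr))
		return -ENOMEM;

	/* ... let the importing device DMA to/from addr ... */

	dma_unmap_resource(dev, addr, resource_size(bar),
			   DMA_BIDIRECTIONAL, 0);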
>> Christian.
>>
>>> -Daniel
> ---end quoted text---