Message-ID: <f3748173-2bbc-43fa-b62e-72e778999764@amd.com>
Date: Wed, 8 Jan 2025 14:44:26 +0100
From: Christian König <christian.koenig@....com>
To: Jason Gunthorpe <jgg@...dia.com>, Christoph Hellwig <hch@....de>,
Leon Romanovsky <leonro@...dia.com>
Cc: Xu Yilun <yilun.xu@...ux.intel.com>, kvm@...r.kernel.org,
dri-devel@...ts.freedesktop.org, linux-media@...r.kernel.org,
linaro-mm-sig@...ts.linaro.org, sumit.semwal@...aro.org,
pbonzini@...hat.com, seanjc@...gle.com, alex.williamson@...hat.com,
vivek.kasireddy@...el.com, dan.j.williams@...el.com, aik@....com,
yilun.xu@...el.com, linux-coco@...ts.linux.dev,
linux-kernel@...r.kernel.org, lukas@...ner.de, yan.y.zhao@...el.com,
daniel.vetter@...ll.ch, leon@...nel.org, baolu.lu@...ux.intel.com,
zhenzhong.duan@...el.com, tao1.su@...el.com
Subject: Re: [RFC PATCH 01/12] dma-buf: Introduce dma_buf_get_pfn_unlocked()
kAPI
On 08.01.25 at 14:23, Jason Gunthorpe wrote:
> On Wed, Jan 08, 2025 at 09:01:46AM +0100, Christian König wrote:
>> On 07.01.25 at 15:27, Xu Yilun wrote:
>>> Introduce a new API for dma-buf importers, and a corresponding
>>> dma_buf_ops callback for dma-buf exporters. This API is for subsystem
>>> importers that map the dma-buf to some user-defined address space,
>>> e.g. for IOMMUFD to map the dma-buf to userspace IOVA via the IOMMU
>>> page table, or for KVM to map the dma-buf to GPA via the KVM MMU
>>> (e.g. EPT).
>>>
>>> Currently dma-buf is only used to get DMA addresses for a device's
>>> default domain by using the kernel DMA APIs. But for these new use
>>> cases, importers only need the PFNs of the dma-buf resource to build
>>> their own mapping tables.
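For concreteness, the interface described above would presumably look
something like the following; the callback name, signature, and types
here are assumptions derived from the description, not necessarily what
the patch actually adds:

    /* Hypothetical sketch of the proposed exporter callback plus the
     * importer-facing helper named in the subject line. */
    struct dma_buf_ops {
            /* ... existing callbacks ... */

            /* Let an importer query the PFN backing a given page
             * offset of the buffer, so it can build its own mapping
             * tables (IOMMU page table, KVM EPT, ...). */
            int (*get_pfn)(struct dma_buf_attachment *attach,
                           pgoff_t pgoff, u64 *pfn);
    };

    int dma_buf_get_pfn_unlocked(struct dma_buf_attachment *attach,
                                 pgoff_t pgoff, u64 *pfn)
    {
            struct dma_buf *dmabuf = attach->dmabuf;

            if (!dmabuf->ops->get_pfn)
                    return -EOPNOTSUPP;

            return dmabuf->ops->get_pfn(attach, pgoff, pfn);
    }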
>> As far as I can see, I have to fundamentally reject this whole approach.
>>
>> It's intentional DMA-buf design that we don't expose struct pages or
>> PFNs to the importer. Essentially, DMA-buf only transports DMA addresses.
>>
>> In other words, the mapping is done by the exporter and *not* the importer.
>>
>> What we certainly can do is annotate those DMA addresses to better
>> specify in which domain they are applicable, e.g. whether they are PCIe
>> bus addresses or some inter-device bus addresses, etc.
>>
>> But moving the functionality to map the pages/PFNs to DMA addresses into the
>> importer is an absolutely clear NO-GO.
> Oh?
>
> Having the importer do the mapping is the correct way to operate the
> DMA API, and the new API that Leon has built to fix the scatterlist
> abuse in dmabuf relies on importer mapping as part of its
> construction.
That is exactly what I strongly disagree with.
DMA-buf works by providing DMA addresses the importer can work with,
*NOT* the underlying location of the buffer.
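That is the flow the existing kAPI already encodes. A minimal importer
sketch (program_device() is a placeholder for whatever the importer
does with the addresses):

    struct dma_buf_attachment *attach;
    struct sg_table *sgt;
    struct scatterlist *sg;
    int i;

    /* The importer only identifies itself via its struct device... */
    attach = dma_buf_attach(dmabuf, dev);
    if (IS_ERR(attach))
            return PTR_ERR(attach);

    /* ...and the *exporter* maps the buffer and hands back DMA
     * addresses; where the data actually lives never leaks out. */
    sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
    if (IS_ERR(sgt)) {
            dma_buf_detach(dmabuf, attach);
            return PTR_ERR(sgt);
    }

    for_each_sgtable_dma_sg(sgt, sg, i)
            program_device(dev, sg_dma_address(sg), sg_dma_len(sg));

    dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
    dma_buf_detach(dmabuf, attach);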
> Why on earth do you want the exporter to map?
Because the exporter owns the exported buffer and only the exporter
knows how to correctly access it.
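That knowledge lives in the exporter's map_dma_buf callback. A
simplified sketch of a page-backed exporter (struct my_buffer and its
fields are placeholders, error handling trimmed):

    static struct sg_table *
    my_map_dma_buf(struct dma_buf_attachment *attach,
                   enum dma_data_direction dir)
    {
            struct my_buffer *buf = attach->dmabuf->priv;
            struct sg_table *sgt;

            sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
            if (!sgt)
                    return ERR_PTR(-ENOMEM);

            /* Only the exporter knows the backing store; here it is
             * plain pages, but it could just as well be VRAM behind
             * a PCI BAR. */
            if (sg_alloc_table_from_pages(sgt, buf->pages,
                                          buf->nr_pages, 0, buf->size,
                                          GFP_KERNEL))
                    goto err_free;

            /* The exporter maps for the importer's device. */
            if (dma_map_sgtable(attach->dev, sgt, dir, 0))
                    goto err_table;

            return sgt;

    err_table:
            sg_free_table(sgt);
    err_free:
            kfree(sgt);
            return ERR_PTR(-ENOMEM);
    }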
> That is completely backwards and unworkable in many cases. The
> dysfunctional P2P support in dmabuf is like that principally because
> of this.
No, that is exactly what we need.
Using the scatterlist to transport the DMA addresses was clearly a
mistake, but having the exporter provide the DMA addresses has proved
many times to be the right approach.
Keep in mind that the exported buffer is not necessarily memory; it can
also be MMIO, or something that is only accessible through address
space windows, for which you can't create a PFN or a struct page.
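For the plain MMIO case the exporter can, for example, use
dma_map_resource(), which works on a raw physical address with no
struct page behind it (sketch; bar_phys and size stand in for e.g. a
PCI BAR region, and the fully CPU-invisible case has no usable
physical address at all):

    dma_addr_t dma_addr;

    /* No struct page and no kernel PFN handling involved here. */
    dma_addr = dma_map_resource(attach->dev, bar_phys, size,
                                DMA_BIDIRECTIONAL, 0);
    if (dma_mapping_error(attach->dev, dma_addr))
            return -ENOMEM;

    /* dma_addr is what DMA-buf then transports to the importer. */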
> That said, I don't think get_pfn() is an especially good interface,
> but we will need to come up with something that passes the physical
> pfn out.
No, a physical PFN is absolutely not a good way of passing the location
of data around, because it is limited to what the CPU sees as its
address space.
We have use cases where DMA-buf transports the location of CPU-invisible
data that only the involved devices can see.
Regards,
Christian.
>
> Jason