Message-ID: <20250115141458.GP5556@nvidia.com>
Date: Wed, 15 Jan 2025 10:14:58 -0400
From: Jason Gunthorpe <jgg@...dia.com>
To: Christian König <christian.koenig@....com>
Cc: Xu Yilun <yilun.xu@...ux.intel.com>, Christoph Hellwig <hch@....de>,
Leon Romanovsky <leonro@...dia.com>, kvm@...r.kernel.org,
dri-devel@...ts.freedesktop.org, linux-media@...r.kernel.org,
linaro-mm-sig@...ts.linaro.org, sumit.semwal@...aro.org,
pbonzini@...hat.com, seanjc@...gle.com, alex.williamson@...hat.com,
vivek.kasireddy@...el.com, dan.j.williams@...el.com, aik@....com,
yilun.xu@...el.com, linux-coco@...ts.linux.dev,
linux-kernel@...r.kernel.org, lukas@...ner.de, yan.y.zhao@...el.com,
leon@...nel.org, baolu.lu@...ux.intel.com, zhenzhong.duan@...el.com,
tao1.su@...el.com
Subject: Re: [RFC PATCH 01/12] dma-buf: Introduce dma_buf_get_pfn_unlocked() kAPI

On Wed, Jan 15, 2025 at 02:46:56PM +0100, Christian König wrote:
> Explicitly replying as text mail once more.
>
> I just love the AMD mail servers :(

:( This is hard

> > Yeah, but it's private to the exporter. And a very fundamental rule of
> > DMA-buf is that the exporter is the one in control of things.

As I've said a few times now, I don't think we can build the kind of
buffer sharing framework we need to solve all these problems with this
philosophy. It is also inefficient with the new DMA API.

I think it is backwards-looking and we need to move forward with
fixing the fundamental API issues which motivated that design.

> > So for example it is illegal for an importer to setup CPU mappings to a
> > buffer. That's why we have dma_buf_mmap() which redirects mmap()
> > requests from the importer to the exporter.

Like this: in a future no-scatterlist world I would want to make this
safe. The importer will have enough information to know whether CPU
mappings exist and under what conditions they are safe to use.

There is no reason the importer should not be able to CPU access
memory that is HW permitted to be CPU accessible.

If the importer needs CPU access and the exporter cannot provide it
then the attachment simply fails.

Saying CPU access is banned 100% of the time is not a helpful position
when we have use cases that need it.
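
To sketch the direction (every name below is invented, this is not
existing kAPI, just the shape I have in mind): the importer states up
front that it needs CPU access and the exporter either accepts that or
the attach fails:

#include <linux/dma-buf.h>
#include <linux/err.h>

/* Purely illustrative - none of this exists today. The importer
 * declares at attach time that it needs CPU access; an exporter that
 * cannot provide it fails the attach cleanly instead of the importer
 * finding out after the fact.
 */
struct dma_buf_attach_req {
	bool need_cpu_access;
};

/* Imaginary attach variant that carries the capability request */
struct dma_buf_attachment *
dma_buf_attach_negotiated(struct dma_buf *dmabuf, struct device *dev,
			  const struct dma_buf_attach_req *req);

static int importer_try_attach(struct dma_buf *dmabuf, struct device *dev)
{
	struct dma_buf_attach_req req = { .need_cpu_access = true };
	struct dma_buf_attachment *attach;

	attach = dma_buf_attach_negotiated(dmabuf, dev, &req);
	if (IS_ERR(attach))
		return PTR_ERR(attach); /* e.g. -EOPNOTSUPP from exporter */

	/* From here CPU access is known-safe for this attachment */
	return 0;
}

The exact shape doesn't matter; the point is the importer never has to
guess whether CPU access is permitted.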

> > As far as I can see that is really not a use case which fits DMA-buf in
> > any way.

I really don't want to make a dmabuf2 - everyone would have to
implement it, including all the GPU drivers if they want to work with
RDMA. I don't think this makes any sense compared to incrementally
evolving dmabuf with more optional capabilities.
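
Roughly what I mean by optional capabilities (bit names invented, only
to show the shape): the exporter advertises a feature mask, old
importers ignore it and keep working unchanged, and new importers opt
in to what is actually supported:

#include <linux/bits.h>
#include <linux/types.h>

/* Invented feature bits, for illustration only */
#define DMA_BUF_CAP_ADDR_LIST	BIT(0)	/* no-scatterlist address list */
#define DMA_BUF_CAP_CPU_ACCESS	BIT(1)	/* importer CPU mappings allowed */

/* An exporter that advertises nothing behaves exactly like today;
 * nobody is forced onto a dmabuf2.
 */
static u64 importer_usable_caps(u64 exporter_caps)
{
	const u64 wanted = DMA_BUF_CAP_ADDR_LIST | DMA_BUF_CAP_CPU_ACCESS;

	return exporter_caps & wanted;
}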

> > > > That sounds more like something for the TEE driver instead of anything
> > > > DMA-buf should be dealing with.
> > >
> > > Has nothing to do with TEE.
> >
> > Why?

The Linux TEE framework is not used as part of confidential compute;
CC already has guest memfd for holding its private CPU memory. This
is about confidential MMIO memory.

This is also not just about the KVM side; the VM side also has issues
with DMABUF and CC - only co-operating devices can interact with the
VM side "encrypted" memory, and there needs to be a negotiation as
part of all buffer setup about what the mutual capability is. :\
swiotlb hides some of this sometimes, but confidential P2P is
currently unsolved.
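
The negotiation I mean would look something like this (completely made
up, just to show the shape): both ends of the buffer setup report what
memory the device can actually reach, and setup only proceeds when the
two sides are compatible:

#include <linux/errno.h>

/* Illustrative only - invented names. A non-cooperating device never
 * gets wired up to the VM's private "encrypted" MMIO.
 */
enum cc_reach {
	CC_REACH_SHARED,	/* shared memory only, swiotlb bounced */
	CC_REACH_PRIVATE,	/* accepted into the TVM, T=1 traffic */
};

static int cc_negotiate(enum cc_reach exporter, enum cc_reach importer)
{
	if (exporter == CC_REACH_PRIVATE && importer != CC_REACH_PRIVATE)
		return -EPERM;	/* device cannot reach private memory */
	return 0;		/* mutually compatible */
}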

Jason