Message-ID: <20200619113934.GN6578@ziepe.ca>
Date: Fri, 19 Jun 2020 08:39:34 -0300
From: Jason Gunthorpe <jgg@...pe.ca>
To: Daniel Vetter <daniel@...ll.ch>
Cc: Thomas Hellström (Intel)
<thomas_os@...pmail.org>,
DRI Development <dri-devel@...ts.freedesktop.org>,
linux-rdma <linux-rdma@...r.kernel.org>,
Intel Graphics Development <intel-gfx@...ts.freedesktop.org>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
LKML <linux-kernel@...r.kernel.org>,
amd-gfx list <amd-gfx@...ts.freedesktop.org>,
"moderated list:DMA BUFFER SHARING FRAMEWORK"
<linaro-mm-sig@...ts.linaro.org>,
Thomas Hellstrom <thomas.hellstrom@...el.com>,
Daniel Vetter <daniel.vetter@...el.com>,
"open list:DMA BUFFER SHARING FRAMEWORK"
<linux-media@...r.kernel.org>,
Christian König <christian.koenig@....com>,
Mika Kuoppala <mika.kuoppala@...el.com>
Subject: Re: [Linaro-mm-sig] [PATCH 04/18] dma-fence: prime lockdep
annotations
On Fri, Jun 19, 2020 at 09:22:09AM +0200, Daniel Vetter wrote:
> > As I've understood GPU that means you need to show that the commands
> > associated with the buffer have completed. This is all local stuff
> > within the driver, right? Why use fence (other than it already exists)
>
> Because that's the end-of-dma thing. And it's cross-driver for the
> above reasons, e.g.
> - device A renders some stuff. Userspace gets dma_fence A out of that
> (well sync_file or one of the other uapi interfaces, but you get the
> idea)
> - userspace (across process or just different driver) issues more
> rendering for device B, which depends upon the rendering done on
> device A. So dma_fence A is a dependency and will block this dma
> operation. Userspace (and the kernel) gets dma_fence B out of this
> - because unfortunate reasons, the same rendering on device B also
> needs a userptr buffer, which means that dma_fence B is also the one
> that the mmu_range_notifier needs to wait on before it can tell core
> mm that it can go ahead and release those pages
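A minimal sketch of that last step, assuming a driver-side mmu_interval_notifier whose invalidate callback blocks on the buffer's reservation object until its fences signal (the struct and function names here are illustrative, not taken from any particular driver):

/*
 * Illustrative sketch only: the notifier waits for dma_fence B (and
 * anything else on the buffer's dma_resv) before core mm may reclaim
 * the userptr pages.
 */
static bool userptr_invalidate(struct mmu_interval_notifier *mni,
                               const struct mmu_notifier_range *range,
                               unsigned long cur_seq)
{
        struct userptr_bo *bo = container_of(mni, struct userptr_bo, notifier);
        long r;

        if (!mmu_notifier_range_blockable(range))
                return false;

        mutex_lock(&bo->lock);
        mmu_interval_set_seq(mni, cur_seq);

        /* Blocks until all fences, including dma_fence B, have signalled */
        r = dma_resv_wait_timeout_rcu(bo->resv, true, false,
                                      MAX_SCHEDULE_TIMEOUT);
        if (r <= 0)
                pr_err("fence wait in notifier failed: %ld\n", r);

        mutex_unlock(&bo->lock);
        return true;
}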
I was afraid you'd say this - this is complete madness for other DMA
devices to borrow the notifier hook of the first device!
What if the first device is a page-faulting device and doesn't use
dma_fence at all?
If you are going to treat things this way then the mmu notifier really
needs to be part of some core DMA-buf code, and not randomly sprinkled
across drivers.
But really this is what page pinning is supposed to be used for: the
MM's behavior when it blocks on a pinned page is less invasive than
when it stalls inside an mmu notifier.
You can mix the two: use mmu notifiers to keep track of whether the
buffer is still live, but when you want to trigger DMA, pin the pages
and keep them pinned until the DMA is done. The pin protects the pages
(well, fork is still a problem).
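A hedged sketch of that mix, assuming the notifier only marks the mapping stale while the DMA submission path pins the pages for the duration of the transfer (the helpers and the userptr_bo fields are hypothetical):

/* Illustrative sketch: pin userptr pages for the lifetime of one DMA. */
static int userptr_dma_begin(struct userptr_bo *bo)
{
        int pinned;

        /* FOLL_LONGTERM because the hardware may hold the pages a while */
        pinned = pin_user_pages_fast(bo->userptr, bo->npages,
                                     FOLL_WRITE | FOLL_LONGTERM, bo->pages);
        if (pinned != bo->npages) {
                if (pinned > 0)
                        unpin_user_pages(bo->pages, pinned);
                return -EFAULT;
        }
        return 0;
}

static void userptr_dma_end(struct userptr_bo *bo)
{
        /* Only after the device has signalled completion of the DMA */
        unpin_user_pages_dirty_lock(bo->pages, bo->npages, true);
}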
There is no need to wait on a dma_fence in notifiers.
Jason