Message-ID: <CAOFGe95Gx=kX=sxwhx1FYmXQuPtGAKwt2V5YodQBwJXujE3WwA@mail.gmail.com>
Date: Wed, 4 Mar 2020 10:27:51 -0600
From: Jason Ekstrand <jason@...kstrand.net>
To: Christian König <christian.koenig@....com>
Cc: Bas Nieuwenhuizen <bas@...nieuwenhuizen.nl>,
Dave Airlie <airlied@...hat.com>,
Jesse Hall <jessehall@...gle.com>,
James Jones <jajones@...dia.com>,
Daniel Stone <daniels@...labora.com>,
Kristian Høgsberg <hoegsberg@...gle.com>,
Sumit Semwal <sumit.semwal@...aro.org>,
Chenbo Feng <fengc@...gle.com>,
Greg Hackmann <ghackmann@...gle.com>,
linux-media@...r.kernel.org,
Mailing list - DRI developers
<dri-devel@...ts.freedesktop.org>, linaro-mm-sig@...ts.linaro.org,
LKML <linux-kernel@...r.kernel.org>,
Daniel Vetter <daniel.vetter@...ll.ch>
Subject: Re: [PATCH] RFC: dma-buf: Add an API for importing and exporting sync files
On Wed, Mar 4, 2020 at 2:34 AM Christian König <christian.koenig@....com> wrote:
>
> On 03.03.20 at 20:10, Jason Ekstrand wrote:
> > On Thu, Feb 27, 2020 at 2:28 AM Christian König
> > <christian.koenig@....com> wrote:
> >> [SNIP]
> >>> However, I'm not sure what the best way is to do garbage collection on
> >>> that so that we don't get an impossibly long list of fence arrays.
> >> Exactly yes. That's also the reason why the dma_fence_chain container I
> >> came up with for the sync timeline stuff has rather sophisticated
> >> garbage collection.
> >>
> >> When some of the included fences signal, you need to free up the
> >> array/chain and make sure that the memory for the container can be reused.
> > Currently (as of v2), I'm using dma_fence_array and being careful to
> > not bother constructing one if there's only one fence in play. Is
> > this insufficient? If so, maybe we should consider improving
> > dma_fence_array.
>
> That still won't work correctly in all cases. See, the problem is not
> only optimization, but also avoiding situations where userspace can
> abuse the interface to do nasty things.
>
> For example, if userspace just calls that function in a loop, you can
> create a long chain of dma_fence_array objects.
>
> If that chain is then suddenly released, the recursive dropping of
> references can overflow the kernel stack.
>
> For reference see what dance is necessary in the dma_fence_chain_release
> function to avoid that:
> > /* Manually unlink the chain as much as possible to avoid recursion
> >  * and potential stack overflow.
> >  */
> > while ((prev = rcu_dereference_protected(chain->prev, true))) {
> ....
>
> It took me quite a while to figure out how to do this without causing
> issues. But I don't see how this would be possible for dma_fence_array.
Ah, I see the issue now! It hadn't even occurred to me that userspace
could use this to build up an infinite recursion chain. That's nasty!
I'll give this some more thought and see if I can come up with
something clever.
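
Just to make sure I understand the dance you're describing: the
release callback iteratively steals the ->prev pointer of any link it
holds the last reference to, so that dropping a long chain never
recurses more than one level.  A stripped-down sketch of that pattern
as I read it (simplified types and names, locking/RCU omitted; this
is not the actual dma_fence_chain code):

#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/slab.h>

/* "link" stands in for dma_fence_chain here. */
struct link {
        struct kref refcount;
        struct link *prev;
};

static void link_release(struct kref *kref)
{
        struct link *link = container_of(kref, struct link, refcount);
        struct link *prev;

        /* Unlink as much of the chain as possible by hand so that
         * dropping the last reference never recurses down the chain.
         */
        while ((prev = link->prev)) {
                /* Someone else still holds prev, so the final put
                 * below cannot free it and cannot recurse; stop.
                 */
                if (kref_read(&prev->refcount) > 1)
                        break;

                /* We hold the last reference to prev: steal its
                 * ->prev pointer, clear it, and drop prev.  Its
                 * release then sees an empty ->prev and returns
                 * without walking any further.
                 */
                link->prev = prev->prev;
                prev->prev = NULL;
                kref_put(&prev->refcount, link_release);
        }

        /* Whatever is still linked is either shared with someone
         * else or NULL, so this put cannot recurse either.
         */
        if (link->prev)
                kref_put(&link->prev->refcount, link_release);

        kfree(link);
}
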
Here's one thought: We could make dma_fence_array automatically
collapse any arrays it references and instead directly reference their
fences. This way, no matter how much the client chains things, they
will never get more than one dma_fence_array. Of course, the
difficulty here (answering my own question) comes if they ping-pong
back-and-forth between something which constructs a dma_fence_array
and something which constructs a dma_fence_chain to get
array-of-chain-of-array-of-chain-of-... More thought needed.
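
Concretely, the collapse part could look something like the helper
below.  This is just a sketch of the idea, not code from the patch;
flatten_fences() is a made-up name, while to_dma_fence_array(),
dma_fence_array_create() and friends are the existing helpers:

#include <linux/dma-fence.h>
#include <linux/dma-fence-array.h>
#include <linux/slab.h>

/* Build a dma_fence_array from @in_fences, expanding any input that
 * is itself a dma_fence_array into its component fences so arrays
 * never end up nested inside arrays.
 */
static struct dma_fence *
flatten_fences(struct dma_fence **in_fences, unsigned int in_count)
{
        struct dma_fence_array *array;
        struct dma_fence **fences;
        unsigned int i, j, count = 0;

        if (!in_count)
                return NULL;

        /* First pass: count the leaf fences. */
        for (i = 0; i < in_count; i++) {
                array = to_dma_fence_array(in_fences[i]);
                count += array ? array->num_fences : 1;
        }

        fences = kmalloc_array(count, sizeof(*fences), GFP_KERNEL);
        if (!fences)
                return NULL;

        /* Second pass: copy the leaves, taking a reference on each. */
        for (i = 0, count = 0; i < in_count; i++) {
                array = to_dma_fence_array(in_fences[i]);
                if (array) {
                        for (j = 0; j < array->num_fences; j++)
                                fences[count++] =
                                        dma_fence_get(array->fences[j]);
                } else {
                        fences[count++] = dma_fence_get(in_fences[i]);
                }
        }

        /* On success the new array takes ownership of @fences. */
        array = dma_fence_array_create(count, fences,
                                       dma_fence_context_alloc(1), 1,
                                       false);
        if (!array) {
                while (count--)
                        dma_fence_put(fences[count]);
                kfree(fences);
                return NULL;
        }

        return &array->base;
}

If this were the only path that ever constructs arrays for the import
ioctl, one level of flattening would be enough; the array/chain
ping-pong above is exactly the case it doesn't solve.
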
> As far as I can see, the only real option to implement this would be to
> change the dma_resv object container so that you can add fences without
> overriding existing ones.
>
> For shared fences that can be done relatively easily, but I absolutely
> don't see how to do this for exclusive ones without a larger rework.
Fair enough. Thanks for taking the time to explain the issue. I'll
give this some more thought.
--Jason
> >>> (Note
> >>> that dma_resv has a lock that needs to be taken before adding an
> >>> exclusive fence; that might be useful.) Some code that does a thing like
> >>> this is __dma_resv_make_exclusive in
> >>> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> >> I've wanted to move that into dma_resv.c for quite a while since there are
> >> quite a few other cases where we need this.
> > I've roughly done that. The primary difference is that my version
> > takes an optional additional fence to add to the array. This makes it
> > a bit more complicated but I think I got it mostly right.
> >
> > I've also written userspace code which exercises this and it seems to
> > work. Hopefully, that will give a better idea of what I'm trying to
> > accomplish.
>
> Yes, that is indeed a really nice-to-have feature.
>
> Regards,
> Christian.
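
P.S.  For anyone trying to follow along without digging up the patch,
the helper mentioned above has roughly the same shape as the amdgpu
one: gather the fences currently on the dma_resv, optionally append
one extra fence, and install either the single fence or a
dma_fence_array as the new exclusive fence.  A simplified sketch
(names and error handling here are illustrative, not the exact code
in the patch):

#include <linux/dma-fence.h>
#include <linux/dma-fence-array.h>
#include <linux/dma-resv.h>
#include <linux/slab.h>

static int resv_make_exclusive(struct dma_resv *obj,
                               struct dma_fence *extra)
{
        struct dma_fence **fences;
        unsigned int count;
        int r;

        /* The dma_resv lock must be held to set the exclusive fence. */
        dma_resv_assert_held(obj);

        /* Collect the exclusive and all shared fences on @obj. */
        r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
        if (r)
                return r;

        if (extra) {
                struct dma_fence **tmp;

                tmp = krealloc(fences, (count + 1) * sizeof(*fences),
                               GFP_KERNEL);
                if (!tmp)
                        goto err_put;
                fences = tmp;
                fences[count++] = dma_fence_get(extra);
        }

        if (count == 0) {
                /* Nothing to do. */
                kfree(fences);
        } else if (count == 1) {
                dma_resv_add_excl_fence(obj, fences[0]);
                dma_fence_put(fences[0]);
                kfree(fences);
        } else {
                struct dma_fence_array *array;

                array = dma_fence_array_create(count, fences,
                                               dma_fence_context_alloc(1),
                                               1, false);
                if (!array)
                        goto err_put;

                /* The array took ownership of @fences. */
                dma_resv_add_excl_fence(obj, &array->base);
                dma_fence_put(&array->base);
        }

        return 0;

err_put:
        while (count--)
                dma_fence_put(fences[count]);
        kfree(fences);
        return -ENOMEM;
}
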