Message-ID: <96665cc5-01ab-4446-af37-e0f456bfe093@amd.com>
Date: Tue, 5 Dec 2023 16:58:15 +0100
From: Christian König <christian.koenig@....com>
To: Rob Clark <robdclark@...il.com>
Cc: dri-devel@...ts.freedesktop.org,
Rob Clark <robdclark@...omium.org>,
Luben Tuikov <luben.tuikov@....com>,
David Airlie <airlied@...il.com>,
Daniel Vetter <daniel@...ll.ch>,
Sumit Semwal <sumit.semwal@...aro.org>,
open list <linux-kernel@...r.kernel.org>,
"open list:DMA BUFFER SHARING FRAMEWORK"
<linux-media@...r.kernel.org>,
"moderated list:DMA BUFFER SHARING FRAMEWORK"
<linaro-mm-sig@...ts.linaro.org>
Subject: Re: [RFC] drm/scheduler: Unwrap job dependencies
Am 05.12.23 um 16:41 schrieb Rob Clark:
> On Mon, Dec 4, 2023 at 10:46 PM Christian König
> <christian.koenig@....com> wrote:
>> Am 04.12.23 um 22:54 schrieb Rob Clark:
>>> On Thu, Mar 23, 2023 at 2:30 PM Rob Clark <robdclark@...il.com> wrote:
>>>> [SNIP]
>>> So, this patch turns out to blow up spectacularly with dma_fence
>>> refcnt underflows when I enable DRIVER_SYNCOBJ_TIMELINE... I think
>>> that is because it starts unwrapping fence chains, possibly in
>>> parallel with fence signaling on the retire path. Is it supposed to
>>> be permissible to unwrap a fence chain concurrently?
>> The DMA-fence chain object and helper functions were designed so that
>> concurrent accesses to all elements are always possible.
>>
>> See dma_fence_chain_walk() and dma_fence_chain_get_prev() for example.
>> dma_fence_chain_walk() starts with a reference to the current fence (the
>> anchor of the walk) and tries to grab an up-to-date reference to the
>> previous fence in the chain. Only after that reference has been
>> successfully acquired do we drop the reference to the anchor where we
>> started.
>>
>> The same applies to dma_fence_array_first() and dma_fence_array_next():
>> here we hold a reference to the array, which in turn holds references to
>> each fence inside it until the array itself is destroyed.
>>
>> If this blows up, we must have mixed up the references somewhere.
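
For reference, the pattern described above is the one used by
dma_fence_chain_for_each(). The following is a minimal sketch (not code
from this thread; example_walk_chain() is a made-up name) of how a walk
holds references while the chain may be signaling concurrently:

#include <linux/dma-fence.h>
#include <linux/dma-fence-chain.h>
#include <linux/printk.h>

static void example_walk_chain(struct dma_fence *head)
{
        struct dma_fence *iter;

        /*
         * The iterator takes its own reference on @head, and
         * dma_fence_chain_walk() only drops the reference on the current
         * element after acquiring one on the previous element, so fences
         * cannot be freed out from under the walk by concurrent signaling.
         */
        dma_fence_chain_for_each(iter, head) {
                struct dma_fence_chain *chain = to_dma_fence_chain(iter);

                /* @chain is NULL for the final, non-chain element */
                if (chain)
                        pr_debug("chain point seqno %llu\n",
                                 chain->base.seqno);
        }
}
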
> That's what it looked like to me, but I wanted to make sure I wasn't
> overlooking something subtle. And in this case, the fence actually
> should be the syncobj timeline point fence, not the fence chain.
> Virtgpu has essentially the same logic (there we really do want to
> unwrap fences so we can pass host fences back to the host rather than
> waiting in the guest), but I'm not sure whether it would blow up in the
> same way.
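
(For context, the unwrapping discussed here, both in the RFC and in
virtgpu, roughly follows the dma_fence_unwrap helper pattern. This is a
rough sketch only, not the actual patch; example_add_unwrapped_deps() is
a made-up name, and ownership of the passed-in @fence is left with the
caller:)

#include <linux/dma-fence-unwrap.h>
#include <drm/gpu_scheduler.h>

static int example_add_unwrapped_deps(struct drm_sched_job *job,
                                      struct dma_fence *fence)
{
        struct dma_fence_unwrap iter;
        struct dma_fence *f;
        int ret;

        dma_fence_unwrap_for_each(f, &iter, fence) {
                /*
                 * drm_sched_job_add_dependency() consumes a reference, so
                 * take one for each unwrapped fence; the cursor keeps the
                 * chain/array containers alive while we iterate.
                 */
                ret = drm_sched_job_add_dependency(job, dma_fence_get(f));
                if (ret)
                        return ret;
        }

        return 0;
}
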
Well, do you have a backtrace of what exactly happens?

Maybe we have a _put() before a _get() or something like that.
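
(The kind of ordering bug being suggested usually amounts to dropping a
reference before taking one's own. A contrived sketch of the safe
pattern, with a made-up helper name:)

#include <linux/dma-fence.h>

static void example_replace_fence(struct dma_fence **slot,
                                  struct dma_fence *new_fence)
{
        struct dma_fence *old = *slot;

        /* take the new reference first ... */
        *slot = dma_fence_get(new_fence);
        /* ... and only then drop the old one */
        dma_fence_put(old);
}
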
Thanks,
Christian.
>
> BR,
> -R
>
>> Regards,
>> Christian.
>>
>>> BR,
>>> -R