Message-ID: <69d66b9e-5810-4844-a53f-08b7fd8eeccf@amd.com>
Date: Tue, 5 Dec 2023 07:46:07 +0100
From: Christian König <christian.koenig@....com>
To: Rob Clark <robdclark@...il.com>
Cc: dri-devel@...ts.freedesktop.org,
Rob Clark <robdclark@...omium.org>,
Luben Tuikov <luben.tuikov@....com>,
David Airlie <airlied@...il.com>,
Daniel Vetter <daniel@...ll.ch>,
Sumit Semwal <sumit.semwal@...aro.org>,
open list <linux-kernel@...r.kernel.org>,
"open list:DMA BUFFER SHARING FRAMEWORK"
<linux-media@...r.kernel.org>,
"moderated list:DMA BUFFER SHARING FRAMEWORK"
<linaro-mm-sig@...ts.linaro.org>
Subject: Re: [RFC] drm/scheduler: Unwrap job dependencies
On 04.12.23 at 22:54, Rob Clark wrote:
> On Thu, Mar 23, 2023 at 2:30 PM Rob Clark <robdclark@...il.com> wrote:
>> [SNIP]
> So, this patch turns out to blow up spectacularly with dma_fence
> refcnt underflows when I enable DRIVER_SYNCOBJ_TIMELINE .. I think,
> because it starts unwrapping fence chains, possibly in parallel with
> fence signaling on the retire path. Is it supposed to be permissible
> to unwrap a fence chain concurrently?
The DMA-fence chain object and helper functions were designed so that
concurrent accesses to all elements are always possible.
See dma_fence_chain_walk() and dma_fence_chain_get_prev() for example.
dma_fence_chain_walk() starts with a reference to the current fence (the
anchor of the walk) and tries to grab an up-to-date reference to the
previous fence in the chain. Only after that reference has been
successfully acquired do we drop the reference to the anchor where we
started.
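To illustrate the ordering, here is a minimal user-space model of that
grab-before-drop pattern. The types and helpers (struct node, node_get(),
node_put(), walk_step()) are hypothetical simplifications, not the kernel
API; freeing on the final put is elided:

```c
#include <assert.h>
#include <stddef.h>
#include <stdatomic.h>

/* Simplified model of the pattern dma_fence_chain_walk() follows:
 * each node holds a reference on its predecessor, and the walker
 * acquires a reference to the previous node BEFORE dropping the one
 * it holds on the current node, so nothing it touches can be freed
 * underneath it even with concurrent walkers. */
struct node {
    atomic_int refcount;
    struct node *prev;       /* reference held on the previous node */
};

static struct node *node_get(struct node *n)
{
    if (n)
        atomic_fetch_add(&n->refcount, 1);
    return n;
}

static void node_put(struct node *n)
{
    if (n)
        atomic_fetch_sub(&n->refcount, 1);  /* freeing elided in this model */
}

/* One step of the walk: return a new reference to n->prev, then drop
 * the caller's reference on n -- grab first, drop second. */
static struct node *walk_step(struct node *n)
{
    struct node *prev = node_get(n->prev);  /* grab the previous node ... */
    node_put(n);                            /* ... then drop the anchor */
    return prev;
}
```

Reversing the two lines in walk_step() would open exactly the kind of
window where the anchor's last reference goes away while the walker is
still dereferencing it.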
The same goes for dma_fence_array_first() and dma_fence_array_next().
Here we hold a reference to the array, which in turn holds references to
each fence inside it until the array itself is destroyed.
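The array rule can be sketched the same way. Again the types (struct
fence, struct fence_array, array_put()) are hypothetical simplifications
of the real dma_fence_array, just to show that holding the container's
reference pins every element for the duration of an iteration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdatomic.h>

/* Simplified model of the dma_fence_array rule: the array object owns
 * one reference on every fence it contains, so a reference on the
 * array keeps all elements alive while they are being iterated. */
struct fence {
    atomic_int refcount;
};

struct fence_array {
    atomic_int refcount;
    size_t num;
    struct fence **fences;   /* one reference held per entry */
};

static void array_put(struct fence_array *a)
{
    /* Only the final put drops the per-element references. */
    if (atomic_fetch_sub(&a->refcount, 1) == 1) {
        for (size_t i = 0; i < a->num; i++)
            atomic_fetch_sub(&a->fences[i]->refcount, 1);
    }
}
```

As long as an iterator holds its own reference on the array, no element's
refcount can hit zero mid-iteration, which is why concurrent access is
safe here too.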
When this blows up, we must have mixed up the references somewhere.
Regards,
Christian.
>
> BR,
> -R