Message-ID: <faf15d58-a076-49fb-c903-15acdf6f8ffe@gmail.com>
Date: Wed, 19 Dec 2018 20:53:45 +0300
From: Dmitry Osipenko <digetx@...il.com>
To: Eric Anholt <eric@...olt.net>, dri-devel@...ts.freedesktop.org
Cc: linux-kernel@...r.kernel.org, Chunming Zhou <david1.zhou@....com>,
Christian König <christian.koenig@....com>,
Daniel Vetter <daniel.vetter@...ll.ch>,
Jason Ekstrand <jason@...kstrand.net>
Subject: Re: [PATCH 2/2] drm: Revert syncobj timeline changes.
On 08.11.2018 19:04, Eric Anholt wrote:
> Daniel suggested I submit this, since we're still seeing regressions
> from it. This is a revert to before 48197bc564c7 ("drm: add syncobj
> timeline support v9") and its followon fixes.
>
> Fixes this on first V3D testcase execution:
>
> [ 48.767088] ============================================
> [ 48.772410] WARNING: possible recursive locking detected
> [ 48.777739] 4.19.0-rc6+ #489 Not tainted
> [ 48.781668] --------------------------------------------
> [ 48.786993] shader_runner/3284 is trying to acquire lock:
> [ 48.792408] ce309d7f (&(&array->lock)->rlock){....}, at: dma_fence_add_callback+0x30/0x23c
> [ 48.800714]
> [ 48.800714] but task is already holding lock:
> [ 48.806559] c5952bd3 (&(&array->lock)->rlock){....}, at: dma_fence_add_callback+0x30/0x23c
> [ 48.814862]
> [ 48.814862] other info that might help us debug this:
> [ 48.821410] Possible unsafe locking scenario:
> [ 48.821410]
> [ 48.827338] CPU0
> [ 48.829788] ----
> [ 48.832239] lock(&(&array->lock)->rlock);
> [ 48.836434] lock(&(&array->lock)->rlock);
> [ 48.840640]
> [ 48.840640] *** DEADLOCK ***
> [ 48.840640]
> [ 48.846582] May be due to missing lock nesting notation
> [ 130.763560] 1 lock held by cts-runner/3270:
> [ 130.767745] #0: 7834b793 (&(&array->lock)->rlock){-...}, at: dma_fence_add_callback+0x30/0x23c
> [ 130.776461]
> stack backtrace:
> [ 130.780825] CPU: 1 PID: 3270 Comm: cts-runner Not tainted 4.19.0-rc6+ #486
> [ 130.787706] Hardware name: Broadcom STB (Flattened Device Tree)
> [ 130.793645] [<c021269c>] (unwind_backtrace) from [<c020db1c>] (show_stack+0x10/0x14)
> [ 130.801404] [<c020db1c>] (show_stack) from [<c0c2c4b0>] (dump_stack+0xa8/0xd4)
> [ 130.808642] [<c0c2c4b0>] (dump_stack) from [<c0281a84>] (__lock_acquire+0x848/0x1a68)
> [ 130.816483] [<c0281a84>] (__lock_acquire) from [<c02835d8>] (lock_acquire+0xd8/0x22c)
> [ 130.824326] [<c02835d8>] (lock_acquire) from [<c0c49948>] (_raw_spin_lock_irqsave+0x54/0x68)
> [ 130.832777] [<c0c49948>] (_raw_spin_lock_irqsave) from [<c086bf54>] (dma_fence_add_callback+0x30/0x23c)
> [ 130.842183] [<c086bf54>] (dma_fence_add_callback) from [<c086d4c8>] (dma_fence_array_enable_signaling+0x58/0xec)
> [ 130.852371] [<c086d4c8>] (dma_fence_array_enable_signaling) from [<c086c00c>] (dma_fence_add_callback+0xe8/0x23c)
> [ 130.862647] [<c086c00c>] (dma_fence_add_callback) from [<c06d8774>] (drm_syncobj_wait_ioctl+0x518/0x614)
> [ 130.872143] [<c06d8774>] (drm_syncobj_wait_ioctl) from [<c06b8458>] (drm_ioctl_kernel+0xb0/0xf0)
> [ 130.880940] [<c06b8458>] (drm_ioctl_kernel) from [<c06b8818>] (drm_ioctl+0x1d8/0x390)
> [ 130.888782] [<c06b8818>] (drm_ioctl) from [<c03a4510>] (do_vfs_ioctl+0xb0/0x8ac)
> [ 130.896187] [<c03a4510>] (do_vfs_ioctl) from [<c03a4d40>] (ksys_ioctl+0x34/0x60)
> [ 130.903593] [<c03a4d40>] (ksys_ioctl) from [<c0201000>] (ret_fast_syscall+0x0/0x28)
>
> Cc: Chunming Zhou <david1.zhou@....com>
> Cc: Christian König <christian.koenig@....com>
> Cc: Daniel Vetter <daniel.vetter@...ll.ch>
> Signed-off-by: Eric Anholt <eric@...olt.net>
> ---
[snip]
> @@ -931,9 +718,6 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
>
> if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
> for (i = 0; i < count; ++i) {
> - if (entries[i].fence)
> - continue;
> -
> drm_syncobj_fence_get_or_add_callback(syncobjs[i],
> &entries[i].fence,
> &entries[i].syncobj_cb,
Hello,
The three lines removed above were added in commit 337fe9f5c1e7de ("drm/syncobj: Don't leak fences when WAIT_FOR_SUBMIT is set"), which fixed a memory leak. Removing them reintroduces the leak because the fence refcounting becomes unbalanced, and it looks like they were dropped unintentionally in this patch.
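
To illustrate the imbalance, here is a minimal user-space sketch. This is not kernel code: toy_fence, toy_get and toy_put are made-up stand-ins for dma_fence and its get/put helpers. Without the guard, the reference taken when the entry's fence was first fetched is followed by a second get on the WAIT_FOR_SUBMIT pass, while cleanup only does one put per entry:

/*
 * Toy model of the refcount imbalance (illustration only, not kernel code).
 */
#include <stdio.h>
#include <stdlib.h>

struct toy_fence {
	int refcount;
};

static struct toy_fence *toy_get(struct toy_fence *f)
{
	if (f)
		f->refcount++;
	return f;
}

static void toy_put(struct toy_fence *f)
{
	if (!f)
		return;
	if (--f->refcount == 0) {
		printf("fence freed\n");
		free(f);
	} else {
		printf("fence still has %d reference(s)\n", f->refcount);
	}
}

int main(void)
{
	struct toy_fence *syncobj_fence = calloc(1, sizeof(*syncobj_fence));
	struct toy_fence *entry_fence;

	syncobj_fence->refcount = 1;	/* reference held by the syncobj */

	/* First pass: the wait path takes its own reference. */
	entry_fence = toy_get(syncobj_fence);	/* refcount == 2 */

	/*
	 * WAIT_FOR_SUBMIT pass without the "if (entries[i].fence) continue;"
	 * guard: the callback path takes another reference and stores it in
	 * the entry, so the reference taken above is never put.
	 */
	entry_fence = toy_get(syncobj_fence);	/* refcount == 3 */

	/* Cleanup does a single put per entry ... */
	toy_put(entry_fence);			/* refcount == 2 */
	/* ... and the syncobj eventually drops its own reference. */
	toy_put(syncobj_fence);			/* refcount == 1: leaked */

	return 0;
}

With the guard in place, the second get is skipped and the two puts bring the count back to zero, so the fence is freed.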
--
Dmitry