Message-ID: <4e59c6f8-bc9b-4fd5-9b0f-511cce760ac2@igalia.com>
Date: Thu, 4 Dec 2025 09:27:24 +0000
From: Tvrtko Ursulin <tvrtko.ursulin@...lia.com>
To: Chia-I Wu <olvaffe@...il.com>,
Boris Brezillon <boris.brezillon@...labora.com>,
Steven Price <steven.price@....com>, Liviu Dudau <liviu.dudau@....com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Grant Likely <grant.likely@...aro.org>, Heiko Stuebner <heiko@...ech.de>,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] drm/panthor: fix for dma-fence safe access rules
On 04/12/2025 01:50, Chia-I Wu wrote:
> Commit 506aa8b02a8d6 ("dma-fence: Add safe access helpers and document
> the rules") details the dma-fence safe access rules. The most common
> culprit is that drm_sched_fence_get_timeline_name may race with
> group_free_queue.
>
> Fixes: d2624d90a0b77 ("drm/panthor: assign unique names to queues")
> Signed-off-by: Chia-I Wu <olvaffe@...il.com>
> ---
> drivers/gpu/drm/panthor/panthor_sched.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
> index 33b9ef537e359..a8b1347e4da71 100644
> --- a/drivers/gpu/drm/panthor/panthor_sched.c
> +++ b/drivers/gpu/drm/panthor/panthor_sched.c
> @@ -23,6 +23,7 @@
> #include <linux/module.h>
> #include <linux/platform_device.h>
> #include <linux/pm_runtime.h>
> +#include <linux/rcupdate.h>
>
> #include "panthor_devfreq.h"
> #include "panthor_device.h"
> @@ -923,6 +924,9 @@ static void group_release_work(struct work_struct *work)
> release_work);
> u32 i;
>
> + /* dma-fences may still be accessing group->queues under rcu lock. */
> + synchronize_rcu();
> +
> for (i = 0; i < group->queue_count; i++)
> group_free_queue(group, group->queues[i]);
>
This handles the shared queue->fence_ctx.lock as well (which is also
unsafe to access until Christian lands the inline lock etc. patch
series), so it looks good to me too.
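
For anyone following along, the pattern being relied on is roughly the
sketch below. This is only an illustration, not the actual panthor code;
queue->name stands in for whatever queue-owned memory the fence's
timeline name points at:

	/* Reader side: e.g. a dma_fence_ops callback that may run after
	 * the fence has signalled, protected only by RCU.
	 */
	rcu_read_lock();
	name = READ_ONCE(queue->name);	/* hypothetical queue-owned string */
	/* ... use name ... */
	rcu_read_unlock();

	/* Release side, as in the patch above: waiting out a grace period
	 * guarantees all such readers have finished before the queue (and
	 * the memory the name points to) is freed.
	 */
	synchronize_rcu();
	group_free_queue(group, group->queues[i]);
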
Just to mention, an alternative could be to simply switch release_work to
INIT_RCU_WORK/queue_rcu_work, but I am not sure whether that has any
advantage.
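
For reference, that alternative would look roughly like the sketch below
(untested; struct/field names follow the quoted patch, and system_wq is
just a stand-in for whichever workqueue release_work is queued on today):

	/* release_work becomes a struct rcu_work in struct panthor_group */
	INIT_RCU_WORK(&group->release_work, group_release_work);

	/* queue_rcu_work() only runs the handler after an RCU grace period
	 * has elapsed, so the explicit synchronize_rcu() in
	 * group_release_work() would no longer be needed.
	 */
	queue_rcu_work(system_wq, &group->release_work);

	/* and in the handler, container_of() goes through to_rcu_work(): */
	struct panthor_group *group =
		container_of(to_rcu_work(work), struct panthor_group,
			     release_work);
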
Regards,
Tvrtko