Message-ID: <20250124100607.473761a9@collabora.com>
Date: Fri, 24 Jan 2025 10:06:07 +0100
From: Boris Brezillon <boris.brezillon@...labora.com>
To: Adrián Larumbe <adrian.larumbe@...labora.com>
Cc: David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>,
Jonathan Corbet <corbet@....net>, Steven Price <steven.price@....com>,
Liviu Dudau <liviu.dudau@....com>, kernel@...labora.com,
Tvrtko Ursulin <tursulin@...ulin.net>, Tvrtko Ursulin <tvrtko.ursulin@...lia.com>,
dri-devel@...ts.freedesktop.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v9 5/5] drm/panthor: Fix race condition when gathering
fdinfo group samples
On Thu, 23 Jan 2025 22:53:02 +0000
Adrián Larumbe <adrian.larumbe@...labora.com> wrote:
> Commit e16635d88fa0 ("drm/panthor: add DRM fdinfo support") failed to
> protect the xa_for_each() walk over a file's group pool with the
> xarray's lock, so a group could be freed while its fdinfo samples
> were being gathered, leading to use-after-free errors.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@...labora.com>
> Fixes: e16635d88fa0 ("drm/panthor: add DRM fdinfo support")
Nice catch!
Reviewed-by: Boris Brezillon <boris.brezillon@...labora.com>
> ---
> drivers/gpu/drm/panthor/panthor_sched.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
> index e6c08a694e41..1d283b4bab86 100644
> --- a/drivers/gpu/drm/panthor/panthor_sched.c
> +++ b/drivers/gpu/drm/panthor/panthor_sched.c
> @@ -2865,6 +2865,7 @@ void panthor_fdinfo_gather_group_samples(struct panthor_file *pfile)
> if (IS_ERR_OR_NULL(gpool))
> return;
>
> + xa_lock(&gpool->xa);
> xa_for_each(&gpool->xa, i, group) {
> mutex_lock(&group->fdinfo.lock);
> pfile->stats.cycles += group->fdinfo.data.cycles;
> @@ -2873,6 +2874,7 @@ void panthor_fdinfo_gather_group_samples(struct panthor_file *pfile)
> group->fdinfo.data.time = 0;
> mutex_unlock(&group->fdinfo.lock);
> }
> + xa_unlock(&gpool->xa);
> }
>
> static void group_sync_upd_work(struct work_struct *work)
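
For anyone skimming the archive, here is a minimal, self-contained sketch
of the locking pattern the fix relies on. It is not the panthor code: the
"samples" xarray, struct sample, add_sample(), remove_sample() and
accumulate() are made-up names used purely for illustration; only the
xa_lock()/xa_for_each()/xa_unlock() pairing mirrors what the patch does
for the group pool.

/* Hypothetical example, not the panthor driver code. */
#include <linux/slab.h>
#include <linux/xarray.h>

struct sample {
	u64 cycles;
	u64 time;
};

static DEFINE_XARRAY_ALLOC(samples);

/* Writer side: xa_alloc()/xa_erase() take the xarray lock internally. */
static int add_sample(struct sample *s)
{
	u32 id;

	return xa_alloc(&samples, &id, s, xa_limit_32b, GFP_KERNEL);
}

static void remove_sample(unsigned long idx)
{
	struct sample *s = xa_erase(&samples, idx);

	kfree(s);
}

/*
 * Reader side: xa_for_each() on its own only keeps the xarray nodes
 * valid (via RCU); an entry can still be erased and freed between
 * iterations. Holding the xa lock across the walk keeps
 * remove_sample() out, so no entry is freed while we dereference it,
 * which is the kind of race the patch closes for the group pool.
 */
static void accumulate(u64 *total_cycles)
{
	struct sample *s;
	unsigned long idx;

	xa_lock(&samples);
	xa_for_each(&samples, idx, s)
		*total_cycles += s->cycles;
	xa_unlock(&samples);
}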