Message-ID: <ZtdguY2gELaMWuxk@phenom.ffwll.local>
Date: Tue, 3 Sep 2024 21:17:13 +0200
From: Simona Vetter <simona.vetter@...ll.ch>
To: Mihail Atanassov <mihail.atanassov@....com>
Cc: linux-kernel@...r.kernel.org,
Boris Brezillon <boris.brezillon@...labora.com>,
Liviu Dudau <liviu.dudau@....com>,
Steven Price <steven.price@....com>,
dri-devel@...ts.freedesktop.org, Daniel Vetter <daniel@...ll.ch>,
David Airlie <airlied@...il.com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
Alex Deucher <alexander.deucher@....com>,
Christian König <christian.koenig@....com>,
Xinhui Pan <Xinhui.Pan@....com>,
Shashank Sharma <shashank.sharma@....com>,
Ketil Johnsen <ketil.johnsen@....com>,
Akash Goel <akash.goel@....com>
Subject: Re: [PATCH 6/8] drm/panthor: Implement XGS queues
On Wed, Aug 28, 2024 at 06:26:02PM +0100, Mihail Atanassov wrote:
> +int panthor_xgs_queue_create(struct panthor_file *pfile, u32 vm_id,
> + int eventfd_sync_update, u32 *handle)
> +{
> + struct panthor_device *ptdev = pfile->ptdev;
> + struct panthor_xgs_queue_pool *xgs_queue_pool = pfile->xgs_queues;
> + struct panthor_xgs_queue *queue;
> + struct drm_gpu_scheduler *drm_sched;
> + int ret;
> + int qid;
> +
> + queue = kzalloc(sizeof(*queue), GFP_KERNEL);
> + if (!queue)
> + return -ENOMEM;
> +
> + kref_init(&queue->refcount);
> + INIT_LIST_HEAD(&queue->fence_ctx.in_flight_jobs);
> + INIT_WORK(&queue->release_work, xgs_queue_release_work);
> + queue->ptdev = ptdev;
> +
> + ret = drmm_mutex_init(&ptdev->base, &queue->lock);
This is guaranteed buggy, because you kzalloc the queue, with its own
refcount, but then tie the mutex cleanup to the entirely different
lifetime of the drm_device.
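A rough sketch of one way to untangle it, assuming
xgs_queue_release_work() is the final release path and is where the
queue gets freed (I haven't checked the rest of the series): init the
mutex plainly and destroy it when the last reference goes away, instead
of using the drmm_ variant:

	mutex_init(&queue->lock);

and then in the release work:

	static void xgs_queue_release_work(struct work_struct *work)
	{
		struct panthor_xgs_queue *queue =
			container_of(work, struct panthor_xgs_queue,
				     release_work);

		/* ... remaining teardown ... */

		mutex_destroy(&queue->lock);
		kfree(queue);	/* assuming this work frees the queue */
	}

That way the mutex lives exactly as long as the refcounted object it
protects.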
Just spotted this while reading around.
-Sima
--
Simona Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch