Message-Id: <20250225031225.44102-1-liuqianyi125@gmail.com>
Date: Tue, 25 Feb 2025 11:12:25 +0800
From: Qianyi Liu <liuqianyi125@...il.com>
To: phasta@...lbox.org
Cc: airlied@...il.com,
ckoenig.leichtzumerken@...il.com,
dakr@...nel.org,
daniel@...ll.ch,
dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org,
liuqianyi125@...il.com,
maarten.lankhorst@...ux.intel.com,
matthew.brost@...el.com,
mripard@...nel.org,
phasta@...nel.org,
tzimmermann@...e.de
Subject: Re: [PATCH] drm/scheduler: Fix mem leak when last_scheduled signaled
Hello Philipp,
Thank you for your patient reply. Let me first clarify the issue; I will send a
new patch if necessary.
As soon as drm_sched_entity_kill() is entered, the reference count of
entity->last_scheduled is incremented by one via dma_fence_get(). If there are
still jobs queued on the entity, we enter the while loop; assume only one job
is left. If entity->last_scheduled has already signaled,
dma_fence_add_callback() fails and drm_sched_entity_kill_jobs_cb() is called
directly, but because NULL is passed in as the fence, the reference count of
last_scheduled is not dropped there. By the time we reach the final
dma_fence_put(), prev has already been updated to &s_fence->finished, so that
put only drops the reference on s_fence->finished. The reference taken on
entity->last_scheduled is never released, causing a memory leak.
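
For reference, here is a simplified excerpt of the relevant loop in
drm_sched_entity_kill() as I read it (trimmed and annotated by me, so it may
differ slightly from the exact tree you are on):

        prev = dma_fence_get(entity->last_scheduled);
        while ((job = to_drm_sched_job(spsc_queue_pop(&entity->job_queue)))) {
                struct drm_sched_fence *s_fence = job->s_fence;

                dma_fence_get(&s_fence->finished);
                /*
                 * If prev has already signaled, dma_fence_add_callback()
                 * fails and the callback is invoked directly with a NULL
                 * fence, so the dma_fence_put(f) at the start of
                 * drm_sched_entity_kill_jobs_cb() is a no-op and the
                 * reference taken on prev above is never dropped.
                 */
                if (!prev || dma_fence_add_callback(prev, &job->finish_cb,
                                        drm_sched_entity_kill_jobs_cb))
                        drm_sched_entity_kill_jobs_cb(NULL, &job->finish_cb);

                prev = &s_fence->finished;
        }
        dma_fence_put(prev); /* only drops the last s_fence->finished ref */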
We should drop one reference on prev when dma_fence_add_callback() fails, so
that the get and put on it are balanced.
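
Concretely, something along the following lines is what I have in mind (an
untested sketch, only to illustrate the idea):

                dma_fence_get(&s_fence->finished);
                if (!prev || dma_fence_add_callback(prev, &job->finish_cb,
                                        drm_sched_entity_kill_jobs_cb)) {
                        /*
                         * prev was NULL or has already signaled; drop the
                         * reference we took on it ourselves, since the
                         * callback will only see a NULL fence here.
                         * dma_fence_put(NULL) is a no-op, so this is also
                         * safe when prev is NULL.
                         */
                        dma_fence_put(prev);
                        drm_sched_entity_kill_jobs_cb(NULL, &job->finish_cb);
                }

Alternatively, passing prev instead of NULL to drm_sched_entity_kill_jobs_cb()
would let the dma_fence_put(f) at the top of the callback drop the reference;
either way the get and put on prev are balanced.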
Best regards,
QianYi