Message-ID: <a1c9c680-2927-428c-95e9-2e79d14cec58@igalia.com>
Date: Thu, 8 May 2025 13:51:42 +0100
From: Tvrtko Ursulin <tvrtko.ursulin@...lia.com>
To: phasta@...nel.org, Lyude Paul <lyude@...hat.com>,
 Danilo Krummrich <dakr@...nel.org>, David Airlie <airlied@...il.com>,
 Simona Vetter <simona@...ll.ch>, Matthew Brost <matthew.brost@...el.com>,
 Christian König <ckoenig.leichtzumerken@...il.com>,
 Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
 Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>
Cc: dri-devel@...ts.freedesktop.org, nouveau@...ts.freedesktop.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 6/6] drm/sched: Port unit tests to new cleanup design


Hi Philipp,

On 08/05/2025 12:03, Philipp Stanner wrote:
> On Thu, 2025-04-24 at 11:55 +0200, Philipp Stanner wrote:
>> The unit tests so far took care manually of avoiding memory leaks that
>> might have occurred when calling drm_sched_fini().
>>
>> The scheduler now takes care by itself of avoiding memory leaks if the
>> driver provides the callback drm_sched_backend_ops.kill_fence_context().
>>
>> Implement that callback for the unit tests. Remove the manual cleanup
>> code.
> 
> @Tvrtko: On a scale from 1-10, how much do you love this patch? :)

Specific patch aside, it is the series as a whole where I would like to be 
sure there isn't a more elegant way to achieve the same end result.

Something like the sketch of a counter-proposal I sent, for the reasons 
listed with it. Those were, AFAIR, to avoid adding more state machine, 
to avoid mandating that drivers keep an internal list, and to align 
better with the existing prototypes in the sched ops table (where 
everything operates on jobs).
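
To make that last point concrete, a minimal sketch of the kind of per-job 
hook I mean could look something like the below (placeholder names only, 
not the actual counter-proposal; drm_sched_backend_ops has no such member 
today, and the mock job fields are the ones this patch already touches):

/*
 * Sketch only, assuming the mock scheduler context from
 * drivers/gpu/drm/scheduler/tests/mock_scheduler.c: cancel a single job
 * instead of killing the whole fence context. "cancel_job" is a
 * placeholder name, not an existing drm_sched_backend_ops member.
 */
static void mock_sched_cancel_job(struct drm_sched_job *sched_job)
{
	struct drm_mock_sched_job *job =
		container_of(sched_job, struct drm_mock_sched_job, base);
	unsigned long flags;

	spin_lock_irqsave(&job->lock, flags);
	if (!dma_fence_is_signaled_locked(&job->hw_fence)) {
		dma_fence_set_error(&job->hw_fence, -ECANCELED);
		dma_fence_signal_locked(&job->hw_fence);
	}
	complete(&job->done);
	spin_unlock_irqrestore(&job->lock, flags);
}

The scheduler would then walk the pending jobs it already tracks and 
invoke such a hook once per job, so drivers would need neither extra 
state machine nor an internal list, and the prototype would stay 
consistent with run_job/timedout_job/free_job.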

Regards,

Tvrtko

>> Signed-off-by: Philipp Stanner <phasta@...nel.org>
>> ---
>>   .../gpu/drm/scheduler/tests/mock_scheduler.c  | 34 ++++++++++++-------
>>   1 file changed, 21 insertions(+), 13 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/tests/mock_scheduler.c b/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
>> index f999c8859cf7..a72d26ca8262 100644
>> --- a/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
>> +++ b/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
>> @@ -228,10 +228,30 @@ static void mock_sched_free_job(struct drm_sched_job *sched_job)
>>   	/* Mock job itself is freed by the kunit framework. */
>>   }
>>   
>> +static void mock_sched_fence_context_kill(struct drm_gpu_scheduler *gpu_sched)
>> +{
>> +	struct drm_mock_scheduler *sched = drm_sched_to_mock_sched(gpu_sched);
>> +	struct drm_mock_sched_job *job;
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&sched->lock, flags);
>> +	list_for_each_entry(job, &sched->job_list, link) {
>> +		spin_lock(&job->lock);
>> +		if (!dma_fence_is_signaled_locked(&job->hw_fence)) {
>> +			dma_fence_set_error(&job->hw_fence, -ECANCELED);
>> +			dma_fence_signal_locked(&job->hw_fence);
>> +		}
>> +		complete(&job->done);
>> +		spin_unlock(&job->lock);
>> +	}
>> +	spin_unlock_irqrestore(&sched->lock, flags);
>> +}
>> +
>>   static const struct drm_sched_backend_ops drm_mock_scheduler_ops = {
>>   	.run_job = mock_sched_run_job,
>>   	.timedout_job = mock_sched_timedout_job,
>> -	.free_job = mock_sched_free_job
>> +	.free_job = mock_sched_free_job,
>> +	.kill_fence_context = mock_sched_fence_context_kill,
>>   };
>>   
>>   /**
>> @@ -300,18 +320,6 @@ void drm_mock_sched_fini(struct drm_mock_scheduler *sched)
>>   		drm_mock_sched_job_complete(job);
>>   	spin_unlock_irqrestore(&sched->lock, flags);
>>   
>> -	/*
>> -	 * Free completed jobs and jobs not yet processed by the DRM scheduler
>> -	 * free worker.
>> -	 */
>> -	spin_lock_irqsave(&sched->lock, flags);
>> -	list_for_each_entry_safe(job, next, &sched->done_list, link)
>> -		list_move_tail(&job->link, &list);
>> -	spin_unlock_irqrestore(&sched->lock, flags);
>> -
>> -	list_for_each_entry_safe(job, next, &list, link)
>> -		mock_sched_free_job(&job->base);
>> -
>>   	drm_sched_fini(&sched->base);
>>   }
>>   
> 

