Message-ID: <29124381-6949-4828-9b57-dc2dc0f55107@igalia.com>
Date: Wed, 21 May 2025 11:24:13 +0100
From: Tvrtko Ursulin <tvrtko.ursulin@...lia.com>
To: Philipp Stanner <phasta@...nel.org>,
Matthew Brost <matthew.brost@...el.com>, Danilo Krummrich <dakr@...nel.org>,
Christian König <ckoenig.leichtzumerken@...il.com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Sumit Semwal <sumit.semwal@...aro.org>
Cc: dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
linux-media@...r.kernel.org
Subject: Re: [PATCH] drm/sched/tests: Use one lock for fence context
On 21/05/2025 11:04, Philipp Stanner wrote:
> When the unit tests were implemented, each scheduler job got its own,
> distinct lock. This is not how dma_fence locking is intended to work:
> all jobs belonging to the same fence context (in this case: the
> scheduler) should share one lock for their dma_fences. This is needed
> to comply with various dma_fence rules, e.g., ensuring that only one
> fence gets signaled at a time.
>
> Use the fence context (scheduler) lock for the jobs.
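
For reference, what the patch amounts to is initialising every fence on
the mock "hardware" timeline with the same spinlock instead of a
per-job one. A minimal sketch of that arrangement, with made-up names
that are not from the patch:

	#include <linux/dma-fence.h>
	#include <linux/spinlock.h>

	/* Illustrative only: one lock shared by all fences on a timeline. */
	struct example_timeline {
		spinlock_t lock;	/* shared by every fence on the context */
		u64 context;		/* e.g. from dma_fence_context_alloc(1) */
		atomic_t next_seqno;
	};

	static void example_fence_init(struct example_timeline *tl,
				       struct dma_fence *fence,
				       const struct dma_fence_ops *ops)
	{
		dma_fence_init(fence, ops, &tl->lock, tl->context,
			       atomic_inc_return(&tl->next_seqno));
	}

That is what the hunk in mock_sched_run_job() quoted below now does
with &sched->lock.
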
I think for the mock scheduler it works to share the lock, but I don't
see that the commit message is correct. Where do you see the
requirement to share the lock? AFAIK fence->lock is the fence's lock,
nothing more than that semantically.
And what does "ensuring that only one fence gets signaled at a time"
mean? Do you mean signalling in seqno order? Even that is not
guaranteed by the contract, due to opportunistic signalling.
Regards,
Tvrtko
> Signed-off-by: Philipp Stanner <phasta@...nel.org>
> ---
> drivers/gpu/drm/scheduler/tests/mock_scheduler.c | 5 ++---
> drivers/gpu/drm/scheduler/tests/sched_tests.h | 1 -
> 2 files changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/tests/mock_scheduler.c b/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
> index f999c8859cf7..17023276f4b0 100644
> --- a/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
> +++ b/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
> @@ -64,7 +64,7 @@ static void drm_mock_sched_job_complete(struct drm_mock_sched_job *job)
>
> job->flags |= DRM_MOCK_SCHED_JOB_DONE;
> list_move_tail(&job->link, &sched->done_list);
> - dma_fence_signal(&job->hw_fence);
> + dma_fence_signal_locked(&job->hw_fence);
> complete(&job->done);
> }
>
> @@ -123,7 +123,6 @@ drm_mock_sched_job_new(struct kunit *test,
> job->test = test;
>
> init_completion(&job->done);
> - spin_lock_init(&job->lock);
> INIT_LIST_HEAD(&job->link);
> hrtimer_setup(&job->timer, drm_mock_sched_job_signal_timer,
> CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
> @@ -169,7 +168,7 @@ static struct dma_fence *mock_sched_run_job(struct drm_sched_job *sched_job)
>
> dma_fence_init(&job->hw_fence,
> &drm_mock_sched_hw_fence_ops,
> - &job->lock,
> + &sched->lock,
> sched->hw_timeline.context,
> atomic_inc_return(&sched->hw_timeline.next_seqno));
>
> diff --git a/drivers/gpu/drm/scheduler/tests/sched_tests.h b/drivers/gpu/drm/scheduler/tests/sched_tests.h
> index 27caf8285fb7..fbba38137f0c 100644
> --- a/drivers/gpu/drm/scheduler/tests/sched_tests.h
> +++ b/drivers/gpu/drm/scheduler/tests/sched_tests.h
> @@ -106,7 +106,6 @@ struct drm_mock_sched_job {
> unsigned int duration_us;
> ktime_t finish_at;
>
> - spinlock_t lock;
> struct dma_fence hw_fence;
>
> struct kunit *test;