Message-ID: <d4ef3a3b-eed1-4c15-827b-4a34a8a47dc1@ursulin.net>
Date: Thu, 20 Mar 2025 10:47:28 +0000
From: Tvrtko Ursulin <tursulin@...ulin.net>
To: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@....com>,
Matthew Brost <matthew.brost@...el.com>, Danilo Krummrich <dakr@...nel.org>,
Philipp Stanner <phasta@...nel.org>,
Christian König <ckoenig.leichtzumerken@...il.com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Sumit Semwal <sumit.semwal@...aro.org>
Cc: dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
linux-media@...r.kernel.org, linaro-mm-sig@...ts.linaro.org
Subject: Re: [PATCH v8 05/10] drm/sched: trace dependencies for gpu jobs
On 20/03/2025 09:58, Pierre-Eric Pelloux-Prayer wrote:
> We can't trace dependencies from drm_sched_job_add_dependency
> because, when it's called, the job's finished fence is not
> available yet.
>
> So instead, each dependency is traced individually when
> drm_sched_entity_push_job is used.
>
> Tracing the dependencies allows tools to analyze the dependencies
> between jobs (previously this was only possible for the fences
> traced by drm_sched_job_wait_dep).
>
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@....com>
> ---
> .../gpu/drm/scheduler/gpu_scheduler_trace.h | 24 ++++++++++++++++++-
> drivers/gpu/drm/scheduler/sched_entity.c | 8 +++++++
> 2 files changed, 31 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h b/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h
> index 21a85ee59066..5d9992ad47d3 100644
> --- a/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h
> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h
> @@ -54,7 +54,6 @@ DECLARE_EVENT_CLASS(drm_sched_job,
> __assign_str(dev);
> __entry->fence_context = sched_job->s_fence->finished.context;
> __entry->fence_seqno = sched_job->s_fence->finished.seqno;
> -
> ),
> TP_printk("dev=%s, id=%llu, fence=%llu:%llu, ring=%s, job count:%u, hw job count:%d",
> __get_str(dev), __entry->id,
> @@ -88,6 +87,29 @@ TRACE_EVENT(drm_sched_process_job,
> __entry->fence_context, __entry->fence_seqno)
> );
>
> +TRACE_EVENT(drm_sched_job_add_dep,
> + TP_PROTO(struct drm_sched_job *sched_job, struct dma_fence *fence),
> + TP_ARGS(sched_job, fence),
> + TP_STRUCT__entry(
> + __field(u64, fence_context)
> + __field(u64, fence_seqno)
> + __field(u64, id)
> + __field(u64, ctx)
> + __field(u64, seqno)
> + ),
> +
> + TP_fast_assign(
> + __entry->fence_context = sched_job->s_fence->finished.context;
> + __entry->fence_seqno = sched_job->s_fence->finished.seqno;
> + __entry->id = sched_job->id;
> + __entry->ctx = fence->context;
> + __entry->seqno = fence->seqno;
> + ),
> + TP_printk("fence=%llu:%llu, id=%llu depends on fence=%llu:%llu",
> + __entry->fence_context, __entry->fence_seqno, __entry->id,
> + __entry->ctx, __entry->seqno)
> +);
> +
> TRACE_EVENT(drm_sched_job_wait_dep,
> TP_PROTO(struct drm_sched_job *sched_job, struct dma_fence *fence),
> TP_ARGS(sched_job, fence),
> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
> index a6d2a4722d82..047e42cfb129 100644
> --- a/drivers/gpu/drm/scheduler/sched_entity.c
> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
> @@ -580,6 +580,14 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
> ktime_t submit_ts;
>
> trace_drm_sched_job(sched_job, entity);
> +
> + if (trace_drm_sched_job_add_dep_enabled()) {
> + struct dma_fence *entry;
> + unsigned long index;
> +
> + xa_for_each(&sched_job->dependencies, index, entry)
> + trace_drm_sched_job_add_dep(sched_job, entry);
> + }
> atomic_inc(entity->rq->sched->score);
> WRITE_ONCE(entity->last_user, current->group_leader);
>
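FWIW, with the TP_printk format above a tool can rebuild the job
dependency graph from the text trace output alone. A rough sketch of
what such a consumer might do (the sample lines below are hypothetical,
but the regex follows the "fence=%llu:%llu, id=%llu depends on
fence=%llu:%llu" format string in the patch):

```python
import re

# Hypothetical lines as they would appear in the trace buffer,
# produced by the new drm_sched_job_add_dep tracepoint.
lines = [
    "fence=5:100, id=42 depends on fence=3:88",
    "fence=5:100, id=42 depends on fence=4:17",
]

# Mirrors the TP_printk format string from the patch.
pat = re.compile(r"fence=(\d+):(\d+), id=(\d+) depends on fence=(\d+):(\d+)")

# Map each job's finished fence (context, seqno) to the list of
# fences it depends on.
deps = {}
for line in lines:
    m = pat.search(line)
    if not m:
        continue
    job = (int(m.group(1)), int(m.group(2)))  # finished fence ctx:seqno
    dep = (int(m.group(4)), int(m.group(5)))  # dependency fence ctx:seqno
    deps.setdefault(job, []).append(dep)

print(deps)
```

The (context, seqno) pairs can then be matched against the existing
drm_sched_job / drm_sched_process_job events to follow the full chain.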
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@...lia.com>
Regards,
Tvrtko