Message-ID: <2336a1282aa6a44f23a9100d2553b8032f44f3bd.camel@redhat.com>
Date: Mon, 27 Oct 2025 12:03:33 +0100
From: Philipp Stanner <pstanner@...hat.com>
To: Matthew Brost <matthew.brost@...el.com>, intel-xe@...ts.freedesktop.org,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Cc: jiangshanlai@...il.com, tj@...nel.org, simona.vetter@...ll.ch,
christian.koenig@....com, dakr@...nel.org
Subject: Re: [RFC PATCH 2/3] drm/sched: Taint workqueues with reclaim
On Tue, 2025-10-21 at 14:39 -0700, Matthew Brost wrote:
> Multiple drivers seemingly do not understand the role of DMA fences in
> the reclaim path. As a result,
>
result of what? The "role of DMA fences"?
> DRM scheduler workqueues, which are part
> of the fence signaling path, must not allocate memory.
>
This should be phrased differently. The actual rule here is: "The GPU
scheduler's workqueues must be able to make progress during memory
reclaim, because reclaim can depend on the fences they signal. Because
of that, work items on these queues must not allocate memory (at least
not with flags like GFP_KERNEL that can enter reclaim)."
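
To make that concrete, here is a purely illustrative, hypothetical
driver work item (not taken from any real driver) showing what the
rule forbids and what is still fine:

#include <linux/slab.h>
#include <linux/workqueue.h>

/* Hypothetical work item running on the scheduler's submit_wq or
 * timeout_wq, i.e. on the DMA fence signaling path. Under memory
 * pressure, a GFP_KERNEL allocation here can enter direct reclaim,
 * and reclaim can end up waiting on the very fence this work item is
 * supposed to signal -> deadlock.
 */
static void driver_run_job_work(struct work_struct *work)
{
	void *tmp;

	tmp = kmalloc(64, GFP_KERNEL);	/* BAD: may block in reclaim */
	kfree(tmp);

	tmp = kmalloc(64, GFP_NOWAIT);	/* OK: never enters reclaim */
	kfree(tmp);
}
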
--
In general, I often read about this or that "rule" in commits or
discussions, especially "DMA fence rules", but those rules are rarely
spelled out in much detail.
P.
> This patch
> teaches lockdep to recognize these rules in order to catch driver-side
> bugs.
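
For comparison (my own illustration, not something from this series):
drivers can already get this class of lockdep coverage per call site
with the existing dma-fence signaling annotations. A minimal sketch,
assuming CONFIG_PROVE_LOCKING is enabled:

#include <linux/dma-fence.h>
#include <linux/workqueue.h>

static void driver_timeout_work(struct work_struct *work)
{
	/* Everything between begin/end is a fence signaling critical
	 * section; lockdep will then flag reclaim-unsafe operations in
	 * here, such as GFP_KERNEL allocations.
	 */
	bool cookie = dma_fence_begin_signalling();

	/* ... timeout/reset handling, must not allocate ... */

	dma_fence_end_signalling(cookie);
}

Tainting the workqueues in drm_sched_init() presumably extends that
kind of coverage to every work item on them without per-driver
annotations.
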
>
> Cc: Christian König <christian.koenig@....com>
> Cc: Danilo Krummrich <dakr@...nel.org>
> Cc: Matthew Brost <matthew.brost@...el.com>
> Cc: Philipp Stanner <phasta@...nel.org>
> Cc: dri-devel@...ts.freedesktop.org
> Signed-off-by: Matthew Brost <matthew.brost@...el.com>
> ---
> drivers/gpu/drm/scheduler/sched_main.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index c39f0245e3a9..676484dd3ea3 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -1368,6 +1368,9 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_
>  	atomic64_set(&sched->job_id_count, 0);
>  	sched->pause_submit = false;
> 
> +	taint_reclaim_workqueue(sched->submit_wq, GFP_KERNEL);
> +	taint_reclaim_workqueue(sched->timeout_wq, GFP_KERNEL);
> +
>  	sched->ready = true;
>  	return 0;
>  Out_unroll: