Message-ID: <ddaf4984-6f5a-404c-df9d-537245e99420@arm.com>
Date: Wed, 19 Apr 2023 10:39:31 +0100
From: Steven Price <steven.price@....com>
To: Danilo Krummrich <dakr@...hat.com>, luben.tuikov@....com,
airlied@...il.com, daniel@...ll.ch, l.stach@...gutronix.de,
christian.koenig@....com
Cc: linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH v2] drm/scheduler: set entity to NULL in
drm_sched_entity_pop_job()
On 18/04/2023 11:04, Danilo Krummrich wrote:
> It has already happened a few times that patches slipped through which
> accessed an entity through a job that had already been removed from
> the entity's queue. Since jobs and entities might have different
> lifecycles, this can potentially cause UAF bugs.
>
> In order to make it obvious that a job's entity pointer must not be
> accessed after drm_sched_entity_pop_job() was called successfully, set
> the job's entity pointer to NULL once the job is removed from the
> entity's queue.
>
> Moreover, debugging a potential NULL pointer dereference is way easier
> than debugging memory potentially corrupted by a UAF.
>
> Signed-off-by: Danilo Krummrich <dakr@...hat.com>
This triggers a splat for me (with the Panfrost driver); the cause is
the following code in drm_sched_get_cleanup_job():
	if (job) {
		job->entity->elapsed_ns += ktime_to_ns(
			ktime_sub(job->s_fence->finished.timestamp,
				  job->s_fence->scheduled.timestamp));
	}
which indeed accesses the entity after the job has been returned from
drm_sched_entity_pop_job(). And obviously job->entity is a NULL pointer
with this patch.
I'm afraid I don't fully understand the lifecycle, so I'm not sure
whether this is simply exposing an existing bug in
drm_sched_get_cleanup_job() or whether this commit needs to be
reverted.
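
For what it's worth, simply guarding the access would presumably avoid
the crash, but since every job reaching cleanup has by then been
through drm_sched_entity_pop_job(), such a guard would effectively
disable the elapsed_ns accounting altogether. So the below is only a
sketch of a stop-gap, not a tested fix:

	/* Hypothetical guard: skip the elapsed_ns accounting once
	 * drm_sched_entity_pop_job() has cleared job->entity. This
	 * avoids the NULL dereference, but with this patch applied it
	 * skips the accounting for every job, since each one is popped
	 * from its entity before it ever reaches the pending list.
	 */
	if (job && job->entity) {
		job->entity->elapsed_ns += ktime_to_ns(
			ktime_sub(job->s_fence->finished.timestamp,
				  job->s_fence->scheduled.timestamp));
	}

Presumably the accounting either needs to happen somewhere the entity
is still known to be valid, or the entity needs to be guaranteed to
outlive its jobs; as I said, I don't know the intended lifecycle well
enough to say which.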
Thanks,
Steve
> ---
>  drivers/gpu/drm/scheduler/sched_entity.c | 6 ++++++
>  drivers/gpu/drm/scheduler/sched_main.c   | 4 ++++
>  2 files changed, 10 insertions(+)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
> index 15d04a0ec623..a9c6118e534b 100644
> --- a/drivers/gpu/drm/scheduler/sched_entity.c
> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
> @@ -448,6 +448,12 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
>  			drm_sched_rq_update_fifo(entity, next->submit_ts);
>  	}
>  
> +	/* Jobs and entities might have different lifecycles. Since we're
> +	 * removing the job from the entity's queue, set the job's entity pointer
> +	 * to NULL to prevent any future access of the entity through this job.
> +	 */
> +	sched_job->entity = NULL;
> +
>  	return sched_job;
>  }
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 9b16480686f6..e89a3e469cd5 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -42,6 +42,10 @@
>   * the hardware.
>   *
>   * The jobs in an entity are always scheduled in the order that they were pushed.
> + *
> + * Note that once a job has been taken from the entity's queue and pushed to the
> + * hardware, i.e. the pending queue, the entity must not be referenced anymore
> + * through the job's entity pointer.
>   */
>  
>  #include <linux/kthread.h>