Message-ID: <36c9f669-c2d2-8a63-db96-31165caeeffb@codeaurora.org>
Date: Mon, 15 Nov 2021 20:13:47 +0530
From: Akhil P Oommen <akhilpo@...eaurora.org>
To: Rob Clark <robdclark@...il.com>, dri-devel@...ts.freedesktop.org
Cc: Rob Clark <robdclark@...omium.org>,
David Airlie <airlied@...ux.ie>, linux-arm-msm@...r.kernel.org,
Christian König <christian.koenig@....com>,
"moderated list:DMA BUFFER SHARING FRAMEWORK"
<linaro-mm-sig@...ts.linaro.org>, Sean Paul <sean@...rly.run>,
freedreno@...ts.freedesktop.org,
open list <linux-kernel@...r.kernel.org>,
"open list:DMA BUFFER SHARING FRAMEWORK"
<linux-media@...r.kernel.org>
Subject: Re: [PATCH 2/2] drm/msm: Restore error return on invalid fence
On 11/12/2021 12:54 AM, Rob Clark wrote:
> From: Rob Clark <robdclark@...omium.org>
>
> When converting to use an idr to map userspace fence seqno values back
> to a dma_fence, we lost the error return when userspace passes seqno
> that is larger than the last submitted fence. Restore this check.
>
> Reported-by: Akhil P Oommen <akhilpo@...eaurora.org>
> Fixes: a61acbbe9cf8 ("drm/msm: Track "seqno" fences by idr")
> Signed-off-by: Rob Clark <robdclark@...omium.org>
> ---
> Note: I will rebase "drm/msm: Handle fence rollover" on top of this,
> to simplify backporting this patch to stable kernels
>
> drivers/gpu/drm/msm/msm_drv.c | 6 ++++++
> drivers/gpu/drm/msm/msm_gem_submit.c | 1 +
> drivers/gpu/drm/msm/msm_gpu.h | 3 +++
> 3 files changed, 10 insertions(+)
>
> diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
> index cb14d997c174..56500eb5219e 100644
> --- a/drivers/gpu/drm/msm/msm_drv.c
> +++ b/drivers/gpu/drm/msm/msm_drv.c
> @@ -967,6 +967,12 @@ static int wait_fence(struct msm_gpu_submitqueue *queue, uint32_t fence_id,
> struct dma_fence *fence;
> int ret;
>
> + if (fence_id > queue->last_fence) {
But fence_id can wrap around, and then this check is no longer valid.
-Akhil.
> + DRM_ERROR_RATELIMITED("waiting on invalid fence: %u (of %u)\n",
> + fence_id, queue->last_fence);
> + return -EINVAL;
> + }
> +
> /*
> * Map submitqueue scoped "seqno" (which is actually an idr key)
> * back to underlying dma-fence
> diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
> index 151d19e4453c..a38f23be497d 100644
> --- a/drivers/gpu/drm/msm/msm_gem_submit.c
> +++ b/drivers/gpu/drm/msm/msm_gem_submit.c
> @@ -911,6 +911,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
> drm_sched_entity_push_job(&submit->base, queue->entity);
>
> args->fence = submit->fence_id;
> + queue->last_fence = submit->fence_id;
>
> msm_reset_syncobjs(syncobjs_to_reset, args->nr_in_syncobjs);
> msm_process_post_deps(post_deps, args->nr_out_syncobjs,
> diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
> index bd4e0024033e..e73a5bb03544 100644
> --- a/drivers/gpu/drm/msm/msm_gpu.h
> +++ b/drivers/gpu/drm/msm/msm_gpu.h
> @@ -376,6 +376,8 @@ static inline int msm_gpu_convert_priority(struct msm_gpu *gpu, int prio,
> * @ring_nr: the ringbuffer used by this submitqueue, which is determined
> * by the submitqueue's priority
> * @faults: the number of GPU hangs associated with this submitqueue
> + * @last_fence: the sequence number of the last allocated fence (for error
> + * checking)
> * @ctx: the per-drm_file context associated with the submitqueue (ie.
> * which set of pgtables do submits jobs associated with the
> * submitqueue use)
> @@ -391,6 +393,7 @@ struct msm_gpu_submitqueue {
> u32 flags;
> u32 ring_nr;
> int faults;
> + uint32_t last_fence;
> struct msm_file_private *ctx;
> struct list_head node;
> struct idr fence_idr;
>