Message-ID: <9637309c-af25-4117-be4f-b8cbdc087d60@gmail.com>
Date: Mon, 11 Dec 2023 11:47:06 +0100
From: Christian König <ckoenig.leichtzumerken@...il.com>
To: Rob Clark <robdclark@...il.com>, dri-devel@...ts.freedesktop.org
Cc: linux-arm-msm@...r.kernel.org, freedreno@...ts.freedesktop.org,
Christian König <christian.koenig@....com>,
Rob Clark <robdclark@...omium.org>,
Luben Tuikov <ltuikov89@...il.com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
Daniel Vetter <daniel@...ll.ch>,
Sumit Semwal <sumit.semwal@...aro.org>,
open list <linux-kernel@...r.kernel.org>,
"open list:DMA BUFFER SHARING FRAMEWORK:Keyword:bdma_?:buf|fence|resvb"
<linux-media@...r.kernel.org>,
"moderated list:DMA BUFFER SHARING
FRAMEWORK:Keyword:bdma_?:buf|fence|resvb"
<linaro-mm-sig@...ts.linaro.org>
Subject: Re: [Linaro-mm-sig] [PATCH] drm/scheduler: Unwrap job dependencies
On 05.12.23 at 20:02, Rob Clark wrote:
> From: Rob Clark <robdclark@...omium.org>
>
> Container fences have burner contexts, which makes the trick of storing
> at most one fence per context somewhat useless if we don't unwrap array
> or chain fences.
>
> Signed-off-by: Rob Clark <robdclark@...omium.org>
Reviewed-by: Christian König <christian.koenig@....com>
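
For readers following along: the "burner contexts" are the one-off fence
contexts that dma_fence_array and dma_fence_chain containers allocate for
themselves. A typical way such a container reaches the scheduler is via
dma_resv; a minimal sketch (the "obj" and "job" variables here are made
up for illustration, not from the patch):

        /* Needs <linux/dma-resv.h>; sketch only. */
        struct dma_fence *fence;
        int ret;

        /* May hand back a dma_fence_array with a throwaway context
         * when several writers are present.
         */
        ret = dma_resv_get_singleton(obj->resv, DMA_RESV_USAGE_WRITE,
                                     &fence);
        if (ret)
                return ret;

        /* Before this patch, the array's burner context never matched
         * anything already stored in job->dependencies, so the
         * per-context deduplication below could not trigger.
         */
        return drm_sched_job_add_dependency(job, fence);
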
> ---
> drivers/gpu/drm/scheduler/sched_main.c | 47 ++++++++++++++++++--------
> 1 file changed, 32 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 9762464e3f99..16b550949c57 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -52,6 +52,7 @@
> #include <linux/wait.h>
> #include <linux/sched.h>
> #include <linux/completion.h>
> +#include <linux/dma-fence-unwrap.h>
> #include <linux/dma-resv.h>
> #include <uapi/linux/sched/types.h>
>
> @@ -684,27 +685,14 @@ void drm_sched_job_arm(struct drm_sched_job *job)
> }
> EXPORT_SYMBOL(drm_sched_job_arm);
>
> -/**
> - * drm_sched_job_add_dependency - adds the fence as a job dependency
> - * @job: scheduler job to add the dependencies to
> - * @fence: the dma_fence to add to the list of dependencies.
> - *
> - * Note that @fence is consumed in both the success and error cases.
> - *
> - * Returns:
> - * 0 on success, or an error on failing to expand the array.
> - */
> -int drm_sched_job_add_dependency(struct drm_sched_job *job,
> - struct dma_fence *fence)
> +static int drm_sched_job_add_single_dependency(struct drm_sched_job *job,
> + struct dma_fence *fence)
> {
> struct dma_fence *entry;
> unsigned long index;
> u32 id = 0;
> int ret;
>
> - if (!fence)
> - return 0;
> -
> /* Deduplicate if we already depend on a fence from the same context.
> * This lets the size of the array of deps scale with the number of
> * engines involved, rather than the number of BOs.
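
(Side note for anyone reading without the tree handy: the deduplication
body this comment refers to is elided by the diff context. From memory it
is roughly the following xarray walk; treat it as a sketch rather than the
literal file contents:

        xa_for_each(&job->dependencies, index, entry) {
                if (entry->context != fence->context)
                        continue;

                if (dma_fence_is_later(fence, entry)) {
                        /* Same context, later fence: replace the entry. */
                        dma_fence_put(entry);
                        xa_store(&job->dependencies, index, fence,
                                 GFP_KERNEL);
                } else {
                        /* Already covered by a later fence. */
                        dma_fence_put(fence);
                }
                return 0;
        }

        /* No fence from this context yet, add it to the array. */
        ret = xa_alloc(&job->dependencies, &id, fence, xa_limit_32b,
                       GFP_KERNEL);
        if (ret != 0)
                dma_fence_put(fence);

        return ret;

The point of the patch is that this walk keys on fence->context, which
for a container fence is a throwaway value unless we unwrap first.)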
> @@ -728,6 +716,35 @@ int drm_sched_job_add_dependency(struct drm_sched_job *job,
>
> return ret;
> }
> +
> +/**
> + * drm_sched_job_add_dependency - adds the fence as a job dependency
> + * @job: scheduler job to add the dependencies to
> + * @fence: the dma_fence to add to the list of dependencies.
> + *
> + * Note that @fence is consumed in both the success and error cases.
> + *
> + * Returns:
> + * 0 on success, or an error on failing to expand the array.
> + */
> +int drm_sched_job_add_dependency(struct drm_sched_job *job,
> + struct dma_fence *fence)
> +{
> + struct dma_fence_unwrap iter;
> + struct dma_fence *f;
> + int ret = 0;
> +
> + dma_fence_unwrap_for_each (f, &iter, fence) {
> + dma_fence_get(f);
> + ret = drm_sched_job_add_single_dependency(job, f);
> + if (ret)
> + break;
> + }
> +
> + dma_fence_put(fence);
> +
> + return ret;
> +}
> EXPORT_SYMBOL(drm_sched_job_add_dependency);
>
> /**
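
For reference, dma_fence_unwrap_for_each() walks the leaves of nested
dma_fence_array/dma_fence_chain containers; for a plain fence the loop
body runs exactly once. A hedged illustration (not from the patch):

        struct dma_fence_unwrap iter;
        struct dma_fence *f;

        /* Visits each member of an array, each link of a chain, or
         * just the fence itself when it is not a container.
         */
        dma_fence_unwrap_for_each(f, &iter, fence)
                pr_info("dep: context %llu seqno %llu\n",
                        f->context, f->seqno);

Note the patch takes an extra reference on each unwrapped fence before
handing it to drm_sched_job_add_single_dependency(), since that helper
consumes a reference while the iterator itself does not.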