Message-ID: <8c9e6931-e445-4df6-90e5-b648a3fa1b3b@linux.intel.com>
Date: Thu, 30 Mar 2023 16:10:37 +0100
From: Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>
To: Andrzej Hajda <andrzej.hajda@...el.com>,
Jani Nikula <jani.nikula@...ux.intel.com>,
Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
Rodrigo Vivi <rodrigo.vivi@...el.com>,
David Airlie <airlied@...il.com>,
Daniel Vetter <daniel@...ll.ch>
Cc: intel-gfx@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org,
Chris Wilson <chris.p.wilson@...ux.intel.com>,
Andi Shyti <andi.shyti@...ux.intel.com>,
Chris Wilson <chris@...is-wilson.co.uk>
Subject: Re: [PATCH] drm/i915/gt: Hold a wakeref for the active VM
On 30/03/2023 15:35, Andrzej Hajda wrote:
> From: Chris Wilson <chris@...is-wilson.co.uk>
>
> There may be a disconnect between the GT used by the engine and the GT
> used for the VM, requiring us to hold a wakeref on both while the GPU is
> active with this request.
Presumably this is for MTL? "drm/i915/mtl: ..."?
> Signed-off-by: Chris Wilson <chris@...is-wilson.co.uk>
> [ahajda: removed not-yet-upstreamed wakeref tracking bits]
> Signed-off-by: Andrzej Hajda <andrzej.hajda@...el.com>
> ---
> drivers/gpu/drm/i915/gt/intel_context.h | 15 +++++++++++----
> drivers/gpu/drm/i915/gt/intel_engine_pm.c | 3 +++
> 2 files changed, 14 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h
> index 0a8d553da3f439..48f888c3da083b 100644
> --- a/drivers/gpu/drm/i915/gt/intel_context.h
> +++ b/drivers/gpu/drm/i915/gt/intel_context.h
> @@ -14,6 +14,7 @@
> #include "i915_drv.h"
> #include "intel_context_types.h"
> #include "intel_engine_types.h"
> +#include "intel_gt_pm.h"
> #include "intel_ring_types.h"
> #include "intel_timeline_types.h"
> #include "i915_trace.h"
> @@ -207,8 +208,11 @@ void intel_context_exit_engine(struct intel_context *ce);
> static inline void intel_context_enter(struct intel_context *ce)
> {
> lockdep_assert_held(&ce->timeline->mutex);
> - if (!ce->active_count++)
> - ce->ops->enter(ce);
> + if (ce->active_count++)
> + return;
> +
> + ce->ops->enter(ce);
> + intel_gt_pm_get(ce->vm->gt);
> }
>
> static inline void intel_context_mark_active(struct intel_context *ce)
> @@ -222,8 +226,11 @@ static inline void intel_context_exit(struct intel_context *ce)
> {
> lockdep_assert_held(&ce->timeline->mutex);
> GEM_BUG_ON(!ce->active_count);
> - if (!--ce->active_count)
> - ce->ops->exit(ce);
> + if (--ce->active_count)
> + return;
> +
> + intel_gt_pm_put_async(ce->vm->gt);
> + ce->ops->exit(ce);
The above two are balanced and plausible - media tile engine vs root tile
VM - although at the moment it escapes me what exactly would go bad. What
gets powered down that is then accessed by the executing request, and how
does it fail?
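
To spell out what I mean by balanced, here is a standalone toy model of
the enter/exit pairing as I read it from this patch - made-up names,
just counting wakerefs per GT, not the actual i915 code:

/*
 * Toy model of the context enter/exit wakeref pairing as I read it
 * from this patch; names are hypothetical, this is not i915 code.
 */
#include <assert.h>
#include <stdio.h>

struct toy_gt { int wakeref; };

struct toy_context {
	int active_count;
	struct toy_gt *engine_gt;	/* e.g. the media tile */
	struct toy_gt *vm_gt;		/* e.g. the root tile backing the VM */
};

static void toy_context_enter(struct toy_context *ce)
{
	if (ce->active_count++)
		return;

	ce->engine_gt->wakeref++;	/* roughly what ce->ops->enter() does */
	ce->vm_gt->wakeref++;		/* the new intel_gt_pm_get(ce->vm->gt) */
}

static void toy_context_exit(struct toy_context *ce)
{
	assert(ce->active_count);
	if (--ce->active_count)
		return;

	ce->vm_gt->wakeref--;		/* intel_gt_pm_put_async(ce->vm->gt) */
	ce->engine_gt->wakeref--;	/* roughly what ce->ops->exit() does */
}

int main(void)
{
	struct toy_gt media = { 0 }, root = { 0 };
	struct toy_context ce = { 0, &media, &root };

	toy_context_enter(&ce);
	toy_context_enter(&ce);		/* nested enter takes no extra ref */
	toy_context_exit(&ce);
	toy_context_exit(&ce);

	printf("media=%d root=%d\n", media.wakeref, root.wakeref);	/* 0 0 */
	return 0;
}

So for a media engine whose context uses a VM backed by the root tile,
both GTs end up holding a wakeref while the context is active, and both
drop back to zero on the last exit.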
> }
>
> static inline struct intel_context *intel_context_get(struct intel_context *ce)
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> index e971b153fda976..ac0566c5e99e17 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> @@ -114,6 +114,9 @@ __queue_and_release_pm(struct i915_request *rq,
>
> ENGINE_TRACE(engine, "parking\n");
>
> + GEM_BUG_ON(rq->context->active_count != 1);
> + __intel_gt_pm_get(engine->gt);
> +
But this one I don't immediately get. Above we have a get and a put which
are balanced - fine - but where is this extra get coming from, and who
will put it, and where?
Regards,
Tvrtko
> /*
> * We have to serialise all potential retirement paths with our
> * submission, as we don't want to underflow either the
>
> ---
> base-commit: 3385d6482cd60f2a0bbb0fa97b70ae7dbba4f95c
> change-id: 20230330-hold_wakeref_for_active_vm-7f013a449ef3
>
> Best regards,