Message-ID: <20201012143555.GA438822@phenom.ffwll.local>
Date: Mon, 12 Oct 2020 16:35:55 +0200
From: Daniel Vetter <daniel@...ll.ch>
To: Rob Clark <robdclark@...il.com>
Cc: dri-devel@...ts.freedesktop.org, Daniel Vetter <daniel@...ll.ch>,
Rob Clark <robdclark@...omium.org>,
Sean Paul <sean@...rly.run>, David Airlie <airlied@...ux.ie>,
"open list:DRM DRIVER FOR MSM ADRENO GPU"
<linux-arm-msm@...r.kernel.org>,
"open list:DRM DRIVER FOR MSM ADRENO GPU"
<freedreno@...ts.freedesktop.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 07/22] drm/msm: Do rpm get sooner in the submit path
On Sun, Oct 11, 2020 at 07:09:34PM -0700, Rob Clark wrote:
> From: Rob Clark <robdclark@...omium.org>
>
> Unfortunately, due to a dev_pm_opp locking interaction with
> mm->mmap_sem, we need to do the pm get before acquiring obj locks,
> otherwise we can anger lockdep with the chain:
tbh this sounds like a bug in that subsystem, since it means we cannot use
said subsystem in mmap handlers either.
So if you have some remapping unit, or need to wake up your gpu to blt the
buffer into system memory first, we're toast. That doesn't sound right. So
maybe Cc: the pm folks and figure out how to fix this properly long-term?
Imo not a good reason to hold up this patch set, since unwrangling
mmap_sem tends to be real work ...
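
To make the cycle concrete, here's a standalone toy (userspace pthreads,
entirely hypothetical, the mutex names only mirror the kernel locks
involved) that encodes the same three-edge ordering lockdep is
complaining about:

/* Toy reproduction of the lock-order cycle; each path is fine on its
 * own, the combination A->B, B->C, C->A is what lockdep flags. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t opp_table_lock   = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t mmap_sem         = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t reservation_lock = PTHREAD_MUTEX_INITIALIZER;

/* resume path: opp_table_lock, then (indirectly) mmap_sem */
static void *resume_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&opp_table_lock);
	pthread_mutex_lock(&mmap_sem);
	pthread_mutex_unlock(&mmap_sem);
	pthread_mutex_unlock(&opp_table_lock);
	return NULL;
}

/* fault path: mmap_sem held, then the buffer's reservation lock */
static void *fault_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&mmap_sem);
	pthread_mutex_lock(&reservation_lock);
	pthread_mutex_unlock(&reservation_lock);
	pthread_mutex_unlock(&mmap_sem);
	return NULL;
}

/* submit path before this patch: reservation lock held, then the rpm
 * get resumes the gpu and ends up in opp_table_lock, closing the cycle */
static void *submit_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&reservation_lock);
	pthread_mutex_lock(&opp_table_lock);
	pthread_mutex_unlock(&opp_table_lock);
	pthread_mutex_unlock(&reservation_lock);
	return NULL;
}

int main(void)
{
	pthread_t t[3];
	pthread_create(&t[0], NULL, resume_path, NULL);
	pthread_create(&t[1], NULL, fault_path, NULL);
	pthread_create(&t[2], NULL, submit_path, NULL);
	for (int i = 0; i < 3; i++)
		pthread_join(t[i], NULL);
	return 0;
}

Run it under contention often enough and the three threads can wedge;
breaking any one edge, like this patch does by moving the rpm get before
the obj locks, makes the ordering acyclic again.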
-Daniel
>
> opp_table_lock --> &mm->mmap_sem --> reservation_ww_class_mutex
>
> For an explicit fencing userspace, the impact should be minimal
> as we do all the fence waits before this point. It could result
> in some needless resumes in error cases, etc.
>
> Signed-off-by: Rob Clark <robdclark@...omium.org>
> ---
> drivers/gpu/drm/msm/msm_gem_submit.c | 15 +++++++++++++--
> 1 file changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
> index 002130d826aa..a9422d043bfe 100644
> --- a/drivers/gpu/drm/msm/msm_gem_submit.c
> +++ b/drivers/gpu/drm/msm/msm_gem_submit.c
> @@ -744,11 +744,20 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
>
> ret = submit_lookup_objects(submit, args, file);
> if (ret)
> - goto out;
> + goto out_pre_pm;
>
> ret = submit_lookup_cmds(submit, args, file);
> if (ret)
> - goto out;
> + goto out_pre_pm;
> +
> + /*
> + * Thanks to the dev_pm_opp opp_table_lock interactions with mm->mmap_sem
> + * in the resume path, we need to do the rpm get before we lock objs.
> + * Which unfortunately might involve powering up the GPU sooner than
> + * is necessary. But at least in the explicit fencing case, we will
> + * have already done all the fence waiting.
> + */
> + pm_runtime_get_sync(&gpu->pdev->dev);
>
> /* copy_*_user while holding a ww ticket upsets lockdep */
> ww_acquire_init(&submit->ticket, &reservation_ww_class);
> @@ -825,6 +834,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
>
>
> out:
> + pm_runtime_put(&gpu->pdev->dev);
> +out_pre_pm:
> submit_cleanup(submit);
> if (has_ww_ticket)
> ww_acquire_fini(&submit->ticket);
> --
> 2.26.2
>
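
For readers skimming the diff: the net effect is the split into two
cleanup labels, so failures before the rpm get skip the put. A
compilable standalone skeleton of that control flow (stubs only,
nothing here is the real driver code):

#include <stdio.h>

/* stubs standing in for the real msm functions */
static int submit_lookup_objects(void) { return 0; }
static int submit_lookup_cmds(void) { return 0; }
static int do_submit(void) { return 0; }	/* obj locking + submission */
static void pm_runtime_get_sync(void) { puts("rpm get"); }
static void pm_runtime_put(void) { puts("rpm put"); }
static void submit_cleanup(void) { puts("cleanup"); }

static int submit_ioctl(void)
{
	int ret;

	ret = submit_lookup_objects();
	if (ret)
		goto out_pre_pm;	/* no get yet, so no put */

	ret = submit_lookup_cmds();
	if (ret)
		goto out_pre_pm;

	/* get before taking any obj locks, per the lockdep chain above */
	pm_runtime_get_sync();

	ret = do_submit();
	if (ret)
		goto out;

out:
	pm_runtime_put();	/* balances the get, and only the get */
out_pre_pm:
	submit_cleanup();
	return ret;
}

int main(void)
{
	return submit_ioctl();
}

The label ordering is what makes it work: "out:" falls through into
"out_pre_pm:", so the put runs exactly once, and only on paths where the
get actually happened.
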
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch