Message-ID: <20201005140203.GS438822@phenom.ffwll.local>
Date: Mon, 5 Oct 2020 16:02:03 +0200
From: Daniel Vetter <daniel@...ll.ch>
To: Hillf Danton <hdanton@...a.com>
Cc: Rob Clark <robdclark@...il.com>, dri-devel@...ts.freedesktop.org,
Rob Clark <robdclark@...omium.org>,
Sean Paul <sean@...rly.run>, David Airlie <airlied@...ux.ie>,
Daniel Vetter <daniel@...ll.ch>, linux-arm-msm@...r.kernel.org,
freedreno@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 13/14] drm/msm: Drop struct_mutex in shrinker path
On Mon, Oct 05, 2020 at 05:24:19PM +0800, Hillf Danton wrote:
>
> On Sun, 4 Oct 2020 12:21:45
> > From: Rob Clark <robdclark@...omium.org>
> >
> > Now that the inactive_list is protected by mm_lock, and everything
> > else on per-obj basis is protected by obj->lock, we no longer depend
> > on struct_mutex.
> >
> > Signed-off-by: Rob Clark <robdclark@...omium.org>
> > ---
> > drivers/gpu/drm/msm/msm_gem.c | 1 -
> > drivers/gpu/drm/msm/msm_gem_shrinker.c | 54 --------------------------
> > 2 files changed, 55 deletions(-)
> >
> [...]
>
> > @@ -71,13 +33,8 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
> > {
> > struct msm_drm_private *priv =
> > container_of(shrinker, struct msm_drm_private, shrinker);
> > - struct drm_device *dev = priv->dev;
> > struct msm_gem_object *msm_obj;
> > unsigned long freed = 0;
> > - bool unlock;
> > -
> > - if (!msm_gem_shrinker_lock(dev, &unlock))
> > - return SHRINK_STOP;
> >
> > mutex_lock(&priv->mm_lock);
>
> It would be better to document this change in behavior, namely that
> returning SHRINK_STOP is no longer needed.
btw I read through this and noticed you have your own obj lock, plus
mutex_lock_nested. I strongly recommend just cutting over to dma_resv_lock
for all object locking needs (soc drivers have been terrible with this,
unfortunately), and in the shrinker just using dma_resv_trylock instead of
trying to play clever games to outsmart lockdep.
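
Roughly, the shrinker side I have in mind looks something like this
(untested sketch, not the actual msm code; assumes the usual
drm_gem_object with its embedded dma_resv):

	#include <linux/dma-resv.h>
	#include <drm/drm_gem.h>

	static bool example_shrink_one(struct drm_gem_object *obj)
	{
		/* Never block in shrinker context, skip contended objects. */
		if (!dma_resv_trylock(obj->resv))
			return false;

		/* ... purge the object's backing storage here ... */

		dma_resv_unlock(obj->resv);
		return true;
	}

Objects whose trylock fails are simply skipped and get revisited on the
next scan, so there is no need for any lock-recursion trickery at all.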
I recently wrote an entire blog length rant on why I think
mutex_lock_nested is too dangerous to be useful:
https://blog.ffwll.ch/2020/08/lockdep-false-positives.html
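
To illustrate the kind of pattern I mean (hypothetical example, not
code from this series):

	/*
	 * Two objects share the same lock class. The _nested
	 * annotation puts the second acquisition into a separate
	 * lockdep subclass, so a real ABBA deadlock between two
	 * tasks locking a and b in opposite orders goes unreported.
	 */
	mutex_lock(&a->lock);
	mutex_lock_nested(&b->lock, SINGLE_DEPTH_NESTING);
	/* ... */
	mutex_unlock(&b->lock);
	mutex_unlock(&a->lock);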
Nothing specific to this patch here, just a general comment. The problem
extends to the shmem helpers and the like, which also have their own
locks for everything.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch