Message-ID: <20230903170736.513347-6-dmitry.osipenko@collabora.com>
Date: Sun, 3 Sep 2023 20:07:21 +0300
From: Dmitry Osipenko <dmitry.osipenko@...labora.com>
To: David Airlie <airlied@...il.com>,
Gerd Hoffmann <kraxel@...hat.com>,
Gurchetan Singh <gurchetansingh@...omium.org>,
Chia-I Wu <olvaffe@...il.com>, Daniel Vetter <daniel@...ll.ch>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
Christian König <christian.koenig@....com>,
Qiang Yu <yuq825@...il.com>,
Steven Price <steven.price@....com>,
Boris Brezillon <boris.brezillon@...labora.com>,
Emma Anholt <emma@...olt.net>, Melissa Wen <mwen@...lia.com>
Cc: dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
kernel@...labora.com, virtualization@...ts.linux-foundation.org
Subject: [PATCH v16 05/20] drm/v3d: Replace open-coded drm_gem_shmem_free() with drm_gem_object_put()
drm_gem_shmem_free() doesn't put the GEM's kref to zero, which becomes
important with the addition of shrinker support to drm-shmem: the shrinker
will rely on kref=0 to avoid taking the lock while a GEM is being freed,
preventing a spurious lockdep warning about lock ordering against the
fs_reclaim code paths.

Replace the open-coded drm_gem_shmem_free() with drm_gem_object_put(),
which drops the kref to zero before freeing the GEM.
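For context (not part of this patch), a minimal sketch of the resulting
error-path pattern; example_create() and example_setup() are hypothetical
stand-ins for the driver-specific work that may fail after the shmem
object exists:

	static struct drm_gem_shmem_object *
	example_create(struct drm_device *dev, size_t size)
	{
		struct drm_gem_shmem_object *shmem;
		int ret;

		shmem = drm_gem_shmem_create(dev, size);
		if (IS_ERR(shmem))
			return shmem;

		ret = example_setup(shmem); /* hypothetical driver-specific step */
		if (ret) {
			/* put the kref to zero; freeing happens via obj->funcs->free() */
			drm_gem_object_put(&shmem->base);
			return ERR_PTR(ret);
		}

		return shmem;
	}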
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@...labora.com>
---
drivers/gpu/drm/v3d/v3d_bo.c | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/v3d/v3d_bo.c b/drivers/gpu/drm/v3d/v3d_bo.c
index 8b3229a37c6d..70c1095d6eec 100644
--- a/drivers/gpu/drm/v3d/v3d_bo.c
+++ b/drivers/gpu/drm/v3d/v3d_bo.c
@@ -33,16 +33,18 @@ void v3d_free_object(struct drm_gem_object *obj)
 	struct v3d_dev *v3d = to_v3d_dev(obj->dev);
 	struct v3d_bo *bo = to_v3d_bo(obj);
 
-	v3d_mmu_remove_ptes(bo);
+	if (drm_mm_node_allocated(&bo->node)) {
+		v3d_mmu_remove_ptes(bo);
 
-	mutex_lock(&v3d->bo_lock);
-	v3d->bo_stats.num_allocated--;
-	v3d->bo_stats.pages_allocated -= obj->size >> PAGE_SHIFT;
-	mutex_unlock(&v3d->bo_lock);
+		mutex_lock(&v3d->bo_lock);
+		v3d->bo_stats.num_allocated--;
+		v3d->bo_stats.pages_allocated -= obj->size >> PAGE_SHIFT;
+		mutex_unlock(&v3d->bo_lock);
 
-	spin_lock(&v3d->mm_lock);
-	drm_mm_remove_node(&bo->node);
-	spin_unlock(&v3d->mm_lock);
+		spin_lock(&v3d->mm_lock);
+		drm_mm_remove_node(&bo->node);
+		spin_unlock(&v3d->mm_lock);
+	}
 
 	/* GPU execution may have dirtied any pages in the BO. */
 	bo->base.pages_mark_dirty_on_put = true;
@@ -142,7 +144,7 @@ struct v3d_bo *v3d_bo_create(struct drm_device *dev, struct drm_file *file_priv,
 	return bo;
 
 free_obj:
-	drm_gem_shmem_free(shmem_obj);
+	drm_gem_object_put(&shmem_obj->base);
 	return ERR_PTR(ret);
 }
@@ -160,7 +162,7 @@ v3d_prime_import_sg_table(struct drm_device *dev,
 
 	ret = v3d_bo_create_finish(obj);
 	if (ret) {
-		drm_gem_shmem_free(&to_v3d_bo(obj)->base);
+		drm_gem_object_put(obj);
 		return ERR_PTR(ret);
 	}
--
2.41.0