Message-ID: <20231110155858.4c92f05d@collabora.com>
Date: Fri, 10 Nov 2023 15:58:58 +0100
From: Boris Brezillon <boris.brezillon@...labora.com>
To: Dmitry Osipenko <dmitry.osipenko@...labora.com>
Cc: David Airlie <airlied@...il.com>,
Gerd Hoffmann <kraxel@...hat.com>,
Gurchetan Singh <gurchetansingh@...omium.org>,
Chia-I Wu <olvaffe@...il.com>, Daniel Vetter <daniel@...ll.ch>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
Christian König <christian.koenig@....com>,
Qiang Yu <yuq825@...il.com>,
Steven Price <steven.price@....com>,
Emma Anholt <emma@...olt.net>, Melissa Wen <mwen@...lia.com>,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
kernel@...labora.com, virtualization@...ts.linux-foundation.org
Subject: Re: [PATCH v18 19/26] drm/shmem-helper: Add common memory shrinker
On Mon, 30 Oct 2023 02:01:58 +0300
Dmitry Osipenko <dmitry.osipenko@...labora.com> wrote:
> @@ -238,6 +308,20 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
> if (refcount_dec_not_one(&shmem->pages_use_count))
> return;
>
> + /*
> + * Destroying the object is a special case because acquiring
> + * the obj lock can cause a locking order inversion between
> + * reservation_ww_class_mutex and fs_reclaim.
> + *
> + * This deadlock is not actually possible, because no one should
> + * be already holding the lock when GEM is released. Unfortunately
> + * lockdep is not aware of this detail. So when the refcount drops
> + * to zero, we pretend it is already locked.
> + */
> + if (!kref_read(&shmem->base.refcount) &&
> + refcount_dec_and_test(&shmem->pages_use_count))
> + return drm_gem_shmem_free_pages(shmem);
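
For reference, a minimal sketch (not taken from this patch, foo_* name
invented) of what a regular drm_gem_shmem_put_pages() caller looks like;
it is expected to hold the reservation lock, which is exactly the lock
the destruction path above has to pretend is already held:

#include <drm/drm_gem_shmem_helper.h>

/* Regular (non-destruction) path: the caller takes the resv lock before
 * dropping its pages reference. On the final GEM put there is no such
 * caller left, hence the special case in the hunk above. */
static void foo_put_pages(struct drm_gem_shmem_object *shmem)
{
	dma_resv_lock(shmem->base.resv, NULL);
	drm_gem_shmem_put_pages(shmem);
	dma_resv_unlock(shmem->base.resv);
}
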
Uh, with get/put_pages() being moved to the create/free_gem()
hooks, we're back to a situation where pages_use_count > 0 when we
reach gem->refcount == 0, which is not nice. We really need to patch
drivers so they dissociate GEM creation from the backing storage
allocation/reservation + mapping of the BO in GPU VM space.
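
To make that concrete, here is a rough sketch of the split I have in
mind (hypothetical driver code, all foo_* names invented, locking
simplified): the create ioctl only instantiates the GEM object, and the
pages are only acquired when the BO actually gets bound into the GPU VM.

#include <drm/drm_gem_shmem_helper.h>

static int foo_gem_create(struct drm_device *dev, struct drm_file *file,
			  size_t size, u32 *handle)
{
	struct drm_gem_shmem_object *shmem;
	int ret;

	/* Create: only instantiate the GEM object, no backing pages yet,
	 * so pages_use_count stays at 0 until the BO is actually used. */
	shmem = drm_gem_shmem_create(dev, size);
	if (IS_ERR(shmem))
		return PTR_ERR(shmem);

	ret = drm_gem_handle_create(file, &shmem->base, handle);
	/* Drop the allocation reference, the handle now owns the BO. */
	drm_gem_object_put(&shmem->base);

	return ret;
}

static int foo_gem_bind(struct drm_gem_shmem_object *shmem)
{
	int ret;

	/* Bind: acquire the backing storage and map the BO into the GPU
	 * VM, so the pages refcount is tied to the mapping, not to the
	 * GEM object's lifetime. */
	dma_resv_lock(shmem->base.resv, NULL);
	ret = drm_gem_shmem_get_pages(shmem);
	dma_resv_unlock(shmem->base.resv);
	if (ret)
		return ret;

	/* ... insert the BO into the GPU VM here ... */

	return 0;
}

static void foo_gem_unbind(struct drm_gem_shmem_object *shmem)
{
	/* ... remove the BO from the GPU VM here ... */

	/* Unbind: drop the pages with the mapping, so pages_use_count is
	 * back to 0 by the time the last GEM reference goes away. */
	dma_resv_lock(shmem->base.resv, NULL);
	drm_gem_shmem_put_pages(shmem);
	dma_resv_unlock(shmem->base.resv);
}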