Message-ID: <20230904095217.06eb80f0@collabora.com>
Date: Mon, 4 Sep 2023 09:52:17 +0200
From: Boris Brezillon <boris.brezillon@...labora.com>
To: Dmitry Osipenko <dmitry.osipenko@...labora.com>
Cc: David Airlie <airlied@...il.com>,
Gerd Hoffmann <kraxel@...hat.com>,
Gurchetan Singh <gurchetansingh@...omium.org>,
Chia-I Wu <olvaffe@...il.com>, Daniel Vetter <daniel@...ll.ch>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
Christian König <christian.koenig@....com>,
Qiang Yu <yuq825@...il.com>,
Steven Price <steven.price@....com>,
Emma Anholt <emma@...olt.net>, Melissa Wen <mwen@...lia.com>,
Will Deacon <will@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Boqun Feng <boqun.feng@...il.com>,
Mark Rutland <mark.rutland@....com>,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
kernel@...labora.com, virtualization@...ts.linux-foundation.org,
intel-gfx@...ts.freedesktop.org
Subject: Re: [PATCH v15 02/23] drm/shmem-helper: Use flag for tracking page
count bumped by get_pages_sgt()
On Sat, 2 Sep 2023 21:28:21 +0300
Dmitry Osipenko <dmitry.osipenko@...labora.com> wrote:
> On 8/28/23 13:55, Boris Brezillon wrote:
> > On Sun, 27 Aug 2023 20:54:28 +0300
> > Dmitry Osipenko <dmitry.osipenko@...labora.com> wrote:
> >
> >> Use a separate flag for tracking the page count bumped by shmem->sgt to
> >> avoid an imbalanced page counter at drm_gem_shmem_free() time. It's
> >> fragile to assume that populated shmem->pages at freeing time means the
> >> count was bumped by drm_gem_shmem_get_pages_sgt(); using a flag removes
> >> the ambiguity.
> >>
> >> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@...labora.com>
> >> ---
> >> drivers/gpu/drm/drm_gem_shmem_helper.c | 3 ++-
> >> drivers/gpu/drm/lima/lima_gem.c | 1 +
> >> include/drm/drm_gem_shmem_helper.h | 7 +++++++
> >> 3 files changed, 10 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> >> index 78d9cf2355a5..db20b9123891 100644
> >> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> >> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> >> @@ -152,7 +152,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> >> sg_free_table(shmem->sgt);
> >> kfree(shmem->sgt);
> >> }
> >> - if (shmem->pages)
> >> + if (shmem->got_sgt)
> >> drm_gem_shmem_put_pages(shmem);
> >
> > Can't we just move this drm_gem_shmem_put_pages() call in the
> > if (shmem->sgt) block?
>
> As you've seen in patch #1, the shmem->sgt may belong to an imported
> dmabuf, and the pages aren't referenced in that case.
Unless I'm wrong, you're already in the if (!import_attach) branch
here, so shmem->sgt should not be a dmabuf sgt.
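Something like this is what I had in mind (rough, untested sketch of the
non-imported branch in drm_gem_shmem_free(); surrounding code elided):

	if (!obj->import_attach) {
		...
		if (shmem->sgt) {
			dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
					  DMA_BIDIRECTIONAL, 0);
			sg_free_table(shmem->sgt);
			kfree(shmem->sgt);

			/* Pages were implicitly acquired by
			 * drm_gem_shmem_get_pages_sgt(), drop that
			 * reference here.
			 */
			drm_gem_shmem_put_pages(shmem);
		}
		...
	}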
>
> I agree that the freeing code is confusing. The flags make it better,
> though not ideal. Still, the flags+comments solution is good enough for me.
But what's the point of adding a flag when you can just do an
if (!shmem->import_attach && shmem->sgt) check? At best, it just
confuses people as to what these fields mean/are used for (especially
when the field has such a generic name, while what you actually want is
something like ->got_sgt_for_non_imported_object). But the most
problematic aspect is that it adds fields to maintain, and those might
end up being inconsistent with the object state because
new/driver-specific code forgets to update them.
> Please let me know if you have more suggestions; otherwise I'll add a
> comment to the code and keep this patch for v16.
I'd definitely prefer adding the following helper:

static bool has_implicit_pages_ref(struct drm_gem_shmem_object *shmem)
{
	return !shmem->import_attach && shmem->sgt;
}

which provides the same logic without adding a new field/flag.
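The call sites in drm_gem_shmem_free() (and in lima's error path) would
then boil down to something like (again, an untested sketch):

	if (has_implicit_pages_ref(shmem))
		drm_gem_shmem_put_pages(shmem);

so the decision is always derived from ->import_attach and ->sgt instead
of a separate flag that has to be kept in sync by hand.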
>
> BTW, I realized that the new flag wasn't placed properly in the Lima
> driver, causing an unbalanced page count in the error path. Will correct
> it in v16.
See, that's the sort of subtle bug I'm talking about. If the state is
inferred from other fields, that can't happen.
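To make it concrete, a flag forces every error path to remember to undo
it (hypothetical shape, not the actual lima code):

	shmem->got_sgt = true;

	ret = do_something(shmem);
	if (ret) {
		/* Easy to forget, and nothing warns you if you do. */
		shmem->got_sgt = false;
		goto err_put_pages;
	}

whereas has_implicit_pages_ref() can't go stale, because it only looks at
->import_attach and ->sgt.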