Message-ID: <20230802110626.5035c228@collabora.com>
Date: Wed, 2 Aug 2023 11:06:26 +0200
From: Boris Brezillon <boris.brezillon@...labora.com>
To: Danilo Krummrich <dakr@...hat.com>
Cc: Dmitry Osipenko <dmitry.osipenko@...labora.com>,
David Airlie <airlied@...il.com>,
Gerd Hoffmann <kraxel@...hat.com>,
Gurchetan Singh <gurchetansingh@...omium.org>,
Chia-I Wu <olvaffe@...il.com>, Daniel Vetter <daniel@...ll.ch>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
Christian König <christian.koenig@....com>,
Qiang Yu <yuq825@...il.com>,
Steven Price <steven.price@....com>,
Emma Anholt <emma@...olt.net>, Melissa Wen <mwen@...lia.com>,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
kernel@...labora.com, virtualization@...ts.linux-foundation.org
Subject: Re: [PATCH v14 02/12] drm/shmem-helper: Add pages_pin_count field
On Wed, 2 Aug 2023 04:31:52 +0200
Danilo Krummrich <dakr@...hat.com> wrote:
> On 7/31/23 15:35, Boris Brezillon wrote:
> > +Danilo, to confirm my understanding of the gpuva remap operation is
> > correct.
>
> Your understanding is correct.
>
> Unfortunately, re-mapping things has such implications.
>
> I'm currently working on tracking external GEM objects in the GPUVA
> manager, where, ideally, you'd want to add the extobj to the VM when the
> first mapping backed by this GEM is created and remove it when the
> last mapping backed by this GEM is removed. Hence, extobjs need to
> be ref-counted based on how many mappings they back.
Uh, right. I went for a much simpler (but also less efficient) approach
where I basically track things at the mapping level (my panthor_vma
object, which inherits from drm_gpuva, has a list node so it can be
inserted in a shared_bos list tracked at the VM level), instead of at
the GEM level. So we'd basically end up acquiring resv locks and
reserving slots multiple times if the same shared GEM is mapped
multiple times. With the IGNORE_DUPLICATES flag passed to drm_exec,
that works, but it might not be ideal if we expect shared BOs to be
mapped multiple times in the same VM.
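
For the record, here is roughly what this mapping-level tracking looks
like. Completely untested sketch: panthor_vma is real, but panthor_vm
and panthor_vm_lock_shared_bos are simplified placeholders, and I'm
assuming the VM struct embeds a drm_gpuva_manager plus a shared_bos
list.

#include <drm/drm_exec.h>
#include <drm/drm_gem.h>
#include <drm/drm_gpuva_mgr.h>

struct panthor_vm {
	struct drm_gpuva_manager base;

	/* One entry per mapping whose backing GEM is shared, so the
	 * same GEM can appear several times in this list. */
	struct list_head shared_bos;
};

struct panthor_vma {
	struct drm_gpuva base;

	/* Inserted in the VM's shared_bos list when the backing GEM
	 * is shared. */
	struct list_head node;
};

/* Caller is expected to call drm_exec_fini() in all cases. */
static int panthor_vm_lock_shared_bos(struct panthor_vm *vm,
				      struct drm_exec *exec)
{
	struct panthor_vma *vma;
	int ret;

	/* A GEM mapped N times shows up N times in the list, hence
	 * DRM_EXEC_IGNORE_DUPLICATES. */
	drm_exec_init(exec, DRM_EXEC_INTERRUPTIBLE_WAIT |
			    DRM_EXEC_IGNORE_DUPLICATES);
	drm_exec_until_all_locked(exec) {
		list_for_each_entry(vma, &vm->shared_bos, node) {
			ret = drm_exec_prepare_obj(exec, vma->base.gem.obj, 1);
			drm_exec_retry_on_contention(exec);
			if (ret)
				return ret;
		}
	}

	return 0;
}

Tracking at the GEM level, like you're doing, would shrink the list and
get rid of the duplicate handling, which is where your extobj
refcounting comes in.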
>
> However, when re-mapping such a mapping, the reference counter might
> temporarily drop to 0, in which case the slot of the data structure
> tracking the extobj is cleaned up and needs to be re-allocated. Sure,
> we could just keep the reference count elevated while re-mapping, or
> for the whole transaction (job), but this would make the API kinda
> bulky.
With things happening in the dma-signaling path, we'd need to
pre-allocate this shared-bo container object anyway, because we can't
assume there will be one available by the time we get to run the VM
operation. So I think it's safe to assume that, even if the unmap part
of the remap operation drops the last ref of this container object, when
you get to map the same BO again, you'll have another container to play
with. It's just a matter of pre-allocating one more thing when
bo_is_shared==true && op==map, I think.
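
In code, I'm thinking of something along these lines. Again, just a
sketch to illustrate the idea: panthor_extobj, panthor_vm_op and the
bo_is_shared/preallocated_extobj fields are all made up for the
example.

#include <linux/refcount.h>
#include <linux/slab.h>
#include <drm/drm_gem.h>
#include <drm/drm_gpuva_mgr.h>

/* Hypothetical extobj container: one ref per mapping backed by the
 * GEM, as you described. */
struct panthor_extobj {
	struct list_head node;
	struct drm_gem_object *obj;
	refcount_t refcnt;
};

struct panthor_vm_op {
	enum drm_gpuva_op_type type;
	bool bo_is_shared;

	/* Consumed (or freed) when the op runs in the dma-signaling
	 * path, where we can't allocate. */
	struct panthor_extobj *preallocated_extobj;
};

/* Called at job submission time, where allocation is still allowed. */
static int panthor_vm_op_prealloc(struct panthor_vm_op *op)
{
	/* Only a map of a shared BO can need a fresh container: the
	 * unmap part of a remap might drop the last ref and free the
	 * existing one. */
	if (op->type != DRM_GPUVA_OP_MAP || !op->bo_is_shared)
		return 0;

	op->preallocated_extobj = kzalloc(sizeof(*op->preallocated_extobj),
					  GFP_KERNEL);
	return op->preallocated_extobj ? 0 : -ENOMEM;
}

If the map ends up taking a ref on an existing container instead, the
pre-allocated one is simply freed once the op is done.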