Message-ID: <a8ed4b8b-5116-4ac2-bfce-21b2751f7377@suse.de>
Date: Thu, 3 Apr 2025 09:20:00 +0200
From: Thomas Zimmermann <tzimmermann@...e.de>
To: Boris Brezillon <boris.brezillon@...labora.com>,
Dmitry Osipenko <dmitry.osipenko@...labora.com>
Cc: David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Christian König <christian.koenig@....com>,
Gerd Hoffmann <kraxel@...hat.com>, Qiang Yu <yuq825@...il.com>,
Steven Price <steven.price@....com>, Frank Binns <frank.binns@...tec.com>,
Matt Coster <matt.coster@...tec.com>, dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org, kernel@...labora.com
Subject: Re: [PATCH v20 09/10] drm/shmem-helper: Switch
drm_gem_shmem_vmap/vunmap to use pin/unpin
Hi
Am 02.04.25 um 15:21 schrieb Boris Brezillon:
> On Wed, 2 Apr 2025 15:58:55 +0300
> Dmitry Osipenko <dmitry.osipenko@...labora.com> wrote:
>
>> On 4/2/25 15:47, Thomas Zimmermann wrote:
>>> Hi
>>>
>>> Am 22.03.25 um 22:26 schrieb Dmitry Osipenko:
>>>> The vmapped pages shall be pinned in memory and previously
>>>> get/put_pages() were implicitly hard-pinning/unpinning the pages.
>>>> This will no longer be the case with addition of the memory shrinker
>>>> because pages_use_count > 0 won't determine anymore whether pages
>>>> are hard-pinned (they will be soft-pinned), while the new
>>>> pages_pin_count will do the hard-pinning. Switch the vmap/vunmap()
>>>> to use pin/unpin() functions in preparation for adding memory
>>>> shrinker support to drm-shmem.
>>> I've meanwhile rediscovered this patch and I'm sure this is not correct.
>>> Vmap should not pin AFAIK. It is possible to vmap if the buffer has been
>>> pinned, but that's not automatic. For other vmaps it is necessary to
>>> hold the reservation lock to prevent the buffer from moving.
> Hm, is this problematic though? If you want to vmap() inside a section
> that's protected by the resv lock, you can
>
> - drm_gem_shmem_vmap_locked()
> - do whatever you need to do with the vaddr,
> - drm_gem_shmem_vunmap_locked()
>
> and the {pin,page_use}_count will be back to their original values.
> Those are just ref counters, and I doubt the overhead of
> incrementing/decrementing them makes a difference compared to the heavy
> page-allocation/vmap operations...
I once tried to add pin as part of vmap, so that pages stay in place.
Christian was very clear about not doing this. I found this made a lot
of sense: vmap means "make the memory available to the CPU". The memory
location doesn't matter much here. Pin means something like "make the
memory available to the GPU". But which GPU depends on the caller: calls
via GEM refer to the local GPU, calls via dma-buf usually refer to the
importer's GPU. That GPU uncertainty makes pin problematic already.
In your case, vmap and pin both intend to hold the shmem pages in memory.
They might be built on top of the same implementation, but one should
not be implemented in terms of the other because of their different meanings.
More generally speaking, I've meanwhile come to the conclusion that pin
should not even exist in the GEM interface. It's an internal operation
of TTM and reveals too much about what happens within the
implementation. Instead GEM should be free to move buffers around.
Dma-buf importers should only tell exporters to make buffers available
to them, but not how to do this. AFAIK that's what dma-buf's
attach/detach is for.
Best regards
Thomas
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)