Message-ID: <41aecd10-9956-0752-2838-34c97834f0c8@amd.com>
Date: Thu, 22 Jun 2023 15:54:51 +0200
From: Christian König <christian.koenig@....com>
To: Danilo Krummrich <dakr@...hat.com>, airlied@...il.com,
daniel@...ll.ch, tzimmermann@...e.de, mripard@...nel.org,
corbet@....net, bskeggs@...hat.com, Liam.Howlett@...cle.com,
matthew.brost@...el.com, boris.brezillon@...labora.com,
alexdeucher@...il.com, ogabbay@...nel.org, bagasdotme@...il.com,
willy@...radead.org, jason@...kstrand.net
Cc: dri-devel@...ts.freedesktop.org, nouveau@...ts.freedesktop.org,
linux-doc@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Donald Robson <donald.robson@...tec.com>,
Dave Airlie <airlied@...hat.com>
Subject: Re: [PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA
mappings
On 6/20/23 14:23, Danilo Krummrich wrote:
> Hi Christian,
>
> On 6/20/23 08:45, Christian König wrote:
>> Hi Danilo,
>>
>> sorry for the delayed reply. I've been trying to dig myself out of a
>> hole at the moment.
>
> No worries, thank you for taking a look anyway!
>
>>
>> On 6/20/23 02:42, Danilo Krummrich wrote:
>>> [SNIP]
>>> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
>>> index bbc721870c13..5ec8148a30ee 100644
>>> --- a/include/drm/drm_gem.h
>>> +++ b/include/drm/drm_gem.h
>>> @@ -36,6 +36,8 @@
>>> #include <linux/kref.h>
>>> #include <linux/dma-resv.h>
>>> +#include <linux/list.h>
>>> +#include <linux/mutex.h>
>>> #include <drm/drm_vma_manager.h>
>>> @@ -379,6 +381,18 @@ struct drm_gem_object {
>>> */
>>> struct dma_resv _resv;
>>> + /**
>>> + * @gpuva:
>>> + *
>>> + * Provides the list of GPU VAs attached to this GEM object.
>>> + *
>>> + * Drivers should lock list accesses with the GEMs &dma_resv lock
>>> + * (&drm_gem_object.resv).
>>> + */
>>> + struct {
>>> + struct list_head list;
>>> + } gpuva;
>>> +
>>> /**
>>> * @funcs:
>>> *
>>
>> I'm pretty sure that it's not a good idea to attach this directly to
>> the GEM object.
>
> Why do you think so? IMHO having a common way to connect mappings to
> their backing buffers is a good thing, since every driver needs this
> connection anyway.
>
> E.g. when a BO gets evicted, drivers can just iterate the list of
> mappings and, as the circumstances require, invalidate the
> corresponding mappings or unmap all existing mappings of a given
> buffer.
>
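For reference, the walk described above would presumably look something
like this; the header name, the gem.entry list member and the
invalidate hook are assumptions for illustration, not part of this
patch:

#include <linux/dma-resv.h>
#include <linux/list.h>

#include <drm/drm_gem.h>
#include <drm/drm_gpuva_mgr.h>

/* hypothetical per-driver hook, e.g. marking page tables stale */
static void driver_gpuva_invalidate(struct drm_gpuva *va);

static void driver_gem_evict(struct drm_gem_object *obj)
{
        struct drm_gpuva *va;

        /* the GPU VA list is protected by the GEM's dma_resv lock */
        dma_resv_assert_held(obj->resv);

        list_for_each_entry(va, &obj->gpuva.list, gem.entry)
                driver_gpuva_invalidate(va);
}
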
> What would be the advantage to let every driver implement a driver
> specific way of keeping this connection?
Flexibility. For example, on amdgpu the mappings of a BO are grouped
by VM address space.
E.g. the BO points to multiple bo_vm structures which in turn have lists
of their mappings.
In addition to that, there is a state machine associated with the
mappings, e.g. whether the page tables are up to date or need to be
updated, etc.
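
Roughly like this; all names below are made up for illustration, this
is not the actual amdgpu code:

#include <linux/list.h>
#include <linux/types.h>

#include <drm/drm_gem.h>

/* one instance per combination of BO and VM */
struct driver_bo_vm {
        struct drm_gem_object *obj;     /* the backing BO */
        struct driver_vm *vm;           /* the VM address space */
        struct list_head bo_entry;      /* link in the BO's list of VMs */

        /* state machine: every mapping sits on exactly one of these */
        struct list_head valids;        /* page tables up to date */
        struct list_head invalids;      /* page tables need updating */
};

/* one mapping of the BO inside that VM */
struct driver_vm_mapping {
        struct driver_bo_vm *bo_vm;
        struct list_head entry;         /* on valids or invalids */
        u64 addr;                       /* GPU virtual address */
        u64 size;
};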
> Do you see cases where this kind of connection between mappings and
> backing buffers wouldn't be good enough? If so, which cases do you
> have in mind? Maybe we can cover them in a common way as well?
Yeah, we have tons of cases like that. But I have no idea how to
generalize them.
>
>>
>> As you wrote in the commit message, it's highly driver-specific what
>> to map and where to map it.
>
> In the end, the common case should be that in a VA space at least
> every mapping that is backed by a BO is represented by a struct
> drm_gpuva.
Oh, no! We also have mappings that are not backed by a BO at all, for
example for partially resident textures or data routing to internal hw,
etc. You can't assume that a mapping is backed by a BO at all.
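
Completely made-up example, just to illustrate: such a mapping is only
a VA range plus driver state, with no object pointer anywhere:

#include <linux/types.h>

/* driver-internal mapping with no BO behind it, e.g. PRT/sparse */
struct driver_prt_mapping {
        u64 addr;       /* GPU virtual address */
        u64 size;
        u32 flags;      /* e.g. sparse semantics, internal routing */
        /* note: no drm_gem_object pointer here at all */
};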
>
>>
>> Instead I suggest having a separate structure for mappings in a VA
>> space which drivers can then add to their GEM objects or whatever
>> else they want to map into their VMs.
>
> Which kind of separate structure for mappings? Another one analogous
> to struct drm_gpuva?
Well, similar to what amdgpu uses: BO -> one structure for each
combination of BO and VM -> mappings inside that VM.
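
With the made-up structures from the sketch further up, invalidating a
BO then becomes a per-VM state transition instead of a flat list walk:

#include <linux/list.h>

static void driver_bo_vm_invalidate(struct driver_bo_vm *bo_vm)
{
        struct driver_vm_mapping *m, *tmp;

        /* move all up-to-date mappings over to the "needs update" list */
        list_for_each_entry_safe(m, tmp, &bo_vm->valids, entry)
                list_move(&m->entry, &bo_vm->invalids);
}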
Regards,
Christian.
>
> - Danilo
>
>>
>> Regards,
>> Christian.
>>
>>
>