Message-ID: <a8edf75b-e0f7-a6c7-7d29-f0d39923549b@redhat.com>
Date:   Tue, 20 Jun 2023 14:23:25 +0200
From:   Danilo Krummrich <dakr@...hat.com>
To:     Christian König <christian.koenig@....com>,
        airlied@...il.com, daniel@...ll.ch, tzimmermann@...e.de,
        mripard@...nel.org, corbet@....net, bskeggs@...hat.com,
        Liam.Howlett@...cle.com, matthew.brost@...el.com,
        boris.brezillon@...labora.com, alexdeucher@...il.com,
        ogabbay@...nel.org, bagasdotme@...il.com, willy@...radead.org,
        jason@...kstrand.net
Cc:     dri-devel@...ts.freedesktop.org, nouveau@...ts.freedesktop.org,
        linux-doc@...r.kernel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org,
        Donald Robson <donald.robson@...tec.com>,
        Dave Airlie <airlied@...hat.com>
Subject: Re: [PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA
 mappings

Hi Christian,

On 6/20/23 08:45, Christian König wrote:
> Hi Danilo,
> 
> sorry for the delayed reply. I'm trying to dig myself out of a hole 
> at the moment.

No worries, thank you for taking a look anyway!

> 
> Am 20.06.23 um 02:42 schrieb Danilo Krummrich:
>> [SNIP]
>> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
>> index bbc721870c13..5ec8148a30ee 100644
>> --- a/include/drm/drm_gem.h
>> +++ b/include/drm/drm_gem.h
>> @@ -36,6 +36,8 @@
>>   #include <linux/kref.h>
>>   #include <linux/dma-resv.h>
>> +#include <linux/list.h>
>> +#include <linux/mutex.h>
>>   #include <drm/drm_vma_manager.h>
>> @@ -379,6 +381,18 @@ struct drm_gem_object {
>>        */
>>       struct dma_resv _resv;
>> +    /**
>> +     * @gpuva:
>> +     *
>> +     * Provides the list of GPU VAs attached to this GEM object.
>> +     *
>> +     * Drivers should lock list accesses with the GEMs &dma_resv lock
>> +     * (&drm_gem_object.resv).
>> +     */
>> +    struct {
>> +        struct list_head list;
>> +    } gpuva;
>> +
>>       /**
>>        * @funcs:
>>        *
> 
> I'm pretty sure that it's not a good idea to attach this directly to the 
> GEM object.

Why do you think so? IMHO having a common way to connect mappings to 
their backing buffers is a good thing, since every driver needs this 
connection anyway.

E.g. when a BO gets evicted, drivers can simply iterate the list of 
mappings and, depending on the circumstances, either invalidate the 
corresponding mappings or unmap all existing mappings of the given 
buffer.
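
To make that a bit more concrete, here is a minimal sketch of such an 
eviction path. The drm_gpuva member names (gem.entry) and 
driver_gpuva_invalidate() are assumptions for illustration only, not 
something this patch defines:

  /* Hypothetical driver eviction handler: walk all VA mappings of a
   * GEM object under its dma_resv lock and invalidate them. */
  static void driver_gem_evict_mappings(struct drm_gem_object *obj)
  {
          struct drm_gpuva *va;

          dma_resv_assert_held(obj->resv);

          /* obj->gpuva.list as added by this patch; gem.entry is the
           * assumed list node within struct drm_gpuva. */
          list_for_each_entry(va, &obj->gpuva.list, gem.entry)
                  driver_gpuva_invalidate(va); /* driver-specific */
  }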

What would be the advantage of letting every driver implement a 
driver-specific way of keeping this connection? Do you see cases where 
this kind of connection between mappings and backing buffers wouldn't 
be good enough? If so, which cases do you have in mind? Maybe we can 
cover them in a common way as well?

> 
> As you wrote in the commit message it's highly driver specific what to 
> map and where to map it.

In the end, the common case should be that, within a VA space, at 
least every mapping backed by a BO is represented by a struct 
drm_gpuva.
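
For reference, the connection between such a mapping and its backing 
BO would then roughly look like this on the struct drm_gpuva side (a 
sketch only; the field names are illustrative of this series, not 
final):

  struct drm_gpuva {
          /* ... */

          /* the mapping's address and size within the VA space */
          struct {
                  u64 addr;
                  u64 range;
          } va;

          /* the mapping's backing GEM object, if any */
          struct {
                  struct drm_gem_object *obj;  /* backing BO */
                  u64 offset;                  /* offset into the BO */
                  struct list_head entry;      /* link in obj->gpuva.list */
          } gem;

          /* ... */
  };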

> 
> Instead I suggest having a separate structure for mappings in a VA 
> space which drivers can then add to their GEM objects or whatever 
> they want to map into their VMs.

What kind of separate structure for mappings do you have in mind? 
Another one analogous to struct drm_gpuva?

- Danilo

> 
> Regards,
> Christian.
> 
> 
