Message-ID: <4a52ac7c-19ba-8906-5902-fbf75673bf59@amd.com>
Date:   Fri, 23 Jun 2023 09:16:14 +0200
From:   Christian König <christian.koenig@....com>
To:     Danilo Krummrich <dakr@...hat.com>, airlied@...il.com,
        daniel@...ll.ch, tzimmermann@...e.de, mripard@...nel.org,
        corbet@....net, bskeggs@...hat.com, Liam.Howlett@...cle.com,
        matthew.brost@...el.com, boris.brezillon@...labora.com,
        alexdeucher@...il.com, ogabbay@...nel.org, bagasdotme@...il.com,
        willy@...radead.org, jason@...kstrand.net
Cc:     dri-devel@...ts.freedesktop.org, nouveau@...ts.freedesktop.org,
        linux-doc@...r.kernel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org,
        Donald Robson <donald.robson@...tec.com>,
        Dave Airlie <airlied@...hat.com>
Subject: Re: [PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA mappings

On 6/22/23 17:07, Danilo Krummrich wrote:
> On 6/22/23 17:04, Danilo Krummrich wrote:
>> On 6/22/23 16:42, Christian König wrote:
>>> On 6/22/23 16:22, Danilo Krummrich wrote:
>>>> On 6/22/23 15:54, Christian König wrote:
>>>>> On 6/20/23 14:23, Danilo Krummrich wrote:
>>>>>> Hi Christian,
>>>>>>
>>>>>> On 6/20/23 08:45, Christian König wrote:
>>>>>>> Hi Danilo,
>>>>>>>
>>>>>>> sorry for the delayed reply. I've been trying to dig myself out of
>>>>>>> a hole at the moment.
>>>>>>
>>>>>> No worries, thank you for taking a look anyway!
>>>>>>
>>>>>>>
>>>>>>> On 6/20/23 02:42, Danilo Krummrich wrote:
>>>>>>>> [SNIP]
>>>>>>>> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
>>>>>>>> index bbc721870c13..5ec8148a30ee 100644
>>>>>>>> --- a/include/drm/drm_gem.h
>>>>>>>> +++ b/include/drm/drm_gem.h
>>>>>>>> @@ -36,6 +36,8 @@
>>>>>>>>   #include <linux/kref.h>
>>>>>>>>   #include <linux/dma-resv.h>
>>>>>>>> +#include <linux/list.h>
>>>>>>>> +#include <linux/mutex.h>
>>>>>>>>   #include <drm/drm_vma_manager.h>
>>>>>>>> @@ -379,6 +381,18 @@ struct drm_gem_object {
>>>>>>>>        */
>>>>>>>>       struct dma_resv _resv;
>>>>>>>> +    /**
>>>>>>>> +     * @gpuva:
>>>>>>>> +     *
>>>>>>>> +     * Provides the list of GPU VAs attached to this GEM object.
>>>>>>>> +     *
>>>>>>>> +     * Drivers should lock list accesses with the GEMs &dma_resv lock
>>>>>>>> +     * (&drm_gem_object.resv).
>>>>>>>> +     */
>>>>>>>> +    struct {
>>>>>>>> +        struct list_head list;
>>>>>>>> +    } gpuva;
>>>>>>>> +
>>>>>>>>       /**
>>>>>>>>        * @funcs:
>>>>>>>>        *
>>>>>>>
>>>>>>> I'm pretty sure that it's not a good idea to attach this 
>>>>>>> directly to the GEM object.
>>>>>>
>>>>>> Why do you think so? IMHO having a common way to connect mappings 
>>>>>> to their backing buffers is a good thing, since every driver 
>>>>>> needs this connection anyway.
>>>>>>
>>>>>> E.g. when a BO gets evicted, drivers can just iterate the list of
>>>>>> mappings and, as the circumstances require, invalidate the
>>>>>> corresponding mappings or unmap all existing mappings of a
>>>>>> given buffer.
>>>>>>
>>>>>> What would be the advantage of letting every driver implement a
>>>>>> driver-specific way of keeping this connection?
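For illustration, the eviction path described above is essentially a walk
over the per-GEM list added by this patch. A minimal sketch, assuming a
gem.entry list node on struct drm_gpuva (adjust to the actual member name
in the series) and a purely hypothetical driver hook
my_driver_invalidate_mapping():

	/* Caller holds the GEM's dma_resv lock, as documented above. */
	static void my_driver_evict(struct drm_gem_object *obj)
	{
		struct drm_gpuva *va;

		list_for_each_entry(va, &obj->gpuva.list, gem.entry)
			my_driver_invalidate_mapping(va);
	}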
>>>>>
>>>>> Flexibility. For example, on amdgpu the mappings of a BO are grouped
>>>>> by VM address spaces.
>>>>>
>>>>> E.g. the BO points to multiple bo_vm structures which in turn have 
>>>>> lists of their mappings.
>>>>
>>>> Isn't this (almost) the same relationship I introduce with the 
>>>> GPUVA manager?
>>>>
>>>> If you switched over to the GPUVA manager right now, every GEM would
>>>> have a list of its mappings (the gpuva list). The 
>>>> mapping is represented by struct drm_gpuva (of course embedded in 
>>>> driver specific structure(s)) which has a pointer to the VM address 
>>>> space it is part of, namely the GPUVA manager instance. And the 
>>>> GPUVA manager keeps a maple tree of its mappings as well.
>>>>
>>>> If you'd still like to *directly* keep a list of GPUVA managers (VM
>>>> address spaces) per GEM (indirectly you already have that
>>>> relationship), you could still do that in a driver-specific way.
>>>>
>>>> Am I missing something?
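To make the relationship above concrete, a minimal sketch (the driver
structure name is made up; I'm using gem.obj, gem.entry and mgr for the
drm_gpuva members, adjust to the actual names in the series):

	struct my_driver_va {
		struct drm_gpuva base;	/* the generic mapping */
		/* ... driver-specific state ... */
	};

	/*
	 * base.gem.obj / base.gem.entry: the backing GEM object and the
	 * node linking this mapping into that GEM's gpuva list.
	 *
	 * base.mgr: the VM, i.e. the GPUVA manager instance, which in turn
	 * tracks all of its mappings in a maple tree.
	 */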
>>>
>>> How do you efficiently find only the mappings of a BO in one VM?
>>
>> Actually, I think this case should even be more efficient than with a 
>> BO having a list of GPUVAs (or mappings):
>
> *than with a BO having a list of VMs:
>
>>
>> Having a list of GPUVAs per GEM, each GPUVA has a pointer to its VM.
>> Hence, you'd only need to iterate the list of mappings for a given BO
>> and check the mapping's VM pointer.

Yeah, and that is extremely time consuming if you have tons of mappings 
in different VMs.
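For reference, that lookup is roughly the following (hypothetical driver
function; gem.entry and mgr are my guesses for the drm_gpuva member names),
and it has to visit every mapping of the BO no matter which VM it belongs
to:

	static void my_driver_handle_bo_in_vm(struct drm_gem_object *obj,
					      struct drm_gpuva_manager *mgr)
	{
		struct drm_gpuva *va;

		list_for_each_entry(va, &obj->gpuva.list, gem.entry) {
			if (va->mgr != mgr)
				continue;	/* mapping lives in another VM */

			/* ... handle this BO+VM mapping ... */
		}
	}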

>>
>> Having a list of VMs per BO, you'd have to iterate the whole VM to 
>> find the mappings having a pointer to the given BO, right?

No, you don't seem to understand what I'm suggesting.

Currently you have a list of mappings attached to the BO, so when you
need to make sure that a specific BO is up to date in a specific VM, you
either need to iterate over the VM's mappings or over the BO's mappings.
Neither of those is a good idea.

What you need is a representation of the data used for each BO+VM
combination. In other words, another indirection which allows you to
handle all the mappings of a BO inside a VM at once.
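A rough sketch of that extra level, loosely modeled on how amdgpu groups
mappings (all names here are hypothetical):

	/* One instance per BO+VM combination. */
	struct my_bo_vm {
		struct drm_gem_object *obj;	/* the BO */
		struct my_vm *vm;		/* the VM / address space */
		struct list_head bo_entry;	/* node in the BO's short list */
		struct list_head mappings;	/* all mappings of obj in vm */
	};

With that, evicting a BO means walking its short list of my_bo_vm entries,
and bringing a BO up to date in one specific VM means going directly to the
single my_bo_vm for that combination and handling its mappings list.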

>>
>> I'd think that a single VM potentially has more mapping entries than
>> a single BO has mappings across multiple VMs.
>>
>> Another case to consider is the one I originally had in mind when
>> choosing this relationship: finding all mappings for a given BO,
>> which I guess all drivers need to do in order to invalidate mappings
>> on BO eviction.
>>
>> Having a list of VMs per BO, wouldn't you need to iterate all of the 
>> VMs entirely?

No, see how amdgpu works.

Regards,
Christian.
