Message-ID: <45ba5373-6454-4687-b21c-46748a3a6e5d@redhat.com>
Date:   Mon, 13 Nov 2023 13:57:20 +0100
From:   Danilo Krummrich <dakr@...hat.com>
To:     Christian König <christian.koenig@....com>,
        Thomas Hellström 
        <thomas.hellstrom@...ux.intel.com>
Cc:     airlied@...il.com, daniel@...ll.ch, matthew.brost@...el.com,
        sarah.walker@...tec.com, donald.robson@...tec.com,
        boris.brezillon@...labora.com, faith@...strand.net,
        dri-devel@...ts.freedesktop.org, nouveau@...ts.freedesktop.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH drm-misc-next v8 09/12] drm/gpuvm: reference count
 drm_gpuvm structures

On 11/13/23 08:22, Christian König wrote:
> Am 10.11.23 um 17:57 schrieb Danilo Krummrich:
>> On 11/10/23 09:50, Christian König wrote:
>>> [SNIP]
>>>>>
>>>>>>
>>>>>> Another issue Christian brought up is that something intended to be embeddable (a base class) shouldn't really have its own refcount. I think that's a valid point. If at some point you need to derive from multiple such structs, each having its own refcount, things will start to get weird. One way to resolve that would be to have the driver's subclass provide get() and put() ops and export a destructor for the base class, rather than have the base class provide the refcount and destructor ops.
>>>>
>>>> GPUVM simply follows the same pattern we have with drm_gem_objects. And I think it makes
>>>> sense. Why would we want to embed two struct drm_gpuvm in a single driver structure?
>>>
>>> Because you need one drm_gpuvm structure for each application using the driver? Or am I missing something?
>>
>> Right, *one*, but not more than one. Wasn't that the concern? Maybe I misunderstood something. :)
> 
> Well, there is the use case of native context with XEN/KVM. In that situation QEMU opens tons of driver file descriptors on behalf of the virtual environment's clients.
> 
> In this use case you have many drm_gpuvm instances for a single application. So you can't assume that you only have one VM per PID/TGID or something like that.

Well, that's fine. I think Xe can have multiple VMs per PID as well. In that case you'd keep creating driver VM structures, each with a single GPUVM as its base class, but not multiple GPUVMs serving as base classes for a single driver structure, which I thought was the concern here. For the latter I can't see a use case.

> 
> AMD already made that mistake with KFD and I strongly suggest not to repeat it :)
> 
> Regards,
> Christian.
> 
