Message-ID: <1333e15b-f229-460a-8965-01ff3e778a4d@amd.com>
Date:   Thu, 12 Oct 2023 14:35:15 +0200
From:   Christian König <christian.koenig@....com>
To:     Dave Airlie <airlied@...il.com>
Cc:     Thomas Hellström (Intel) 
        <thomas_os@...pmail.org>, Danilo Krummrich <dakr@...hat.com>,
        daniel@...ll.ch, matthew.brost@...el.com,
        thomas.hellstrom@...ux.intel.com, sarah.walker@...tec.com,
        donald.robson@...tec.com, boris.brezillon@...labora.com,
        faith.ekstrand@...labora.com, bskeggs@...hat.com,
        Liam.Howlett@...cle.com, nouveau@...ts.freedesktop.org,
        linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH drm-misc-next 2/3] drm/gpuva_mgr: generalize
 dma_resv/extobj handling and GEM validation

Am 12.10.23 um 12:33 schrieb Dave Airlie:
> On Wed, 11 Oct 2023 at 17:07, Christian König <christian.koenig@....com> wrote:
>> Am 10.10.23 um 22:23 schrieb Dave Airlie:
>>>> I think we're then optimizing for different scenarios. Our compute
>>>> driver will use mostly external objects only, and if shared, I don't
>>>> foresee them bound to many VMs. What saves us currently here is that in
>>>> compute mode we only really traverse the extobj list after a preempt
>>>> fence wait, or when a vm is using a new context for the first time. So
>>>> the vm's extobj list is pretty large. Each bo's vma list will typically be
>>>> pretty small.
>>> Can I ask why we are optimising for this userspace, this seems
>>> incredibly broken.
>>>
>>> We've had this sort of problem in the past with Intel letting the tail
>>> wag the horse, does anyone remember optimising relocations for a
>>> userspace that didn't actually need to use relocations?
>>>
>>> We need to ask why this userspace is doing this, can we get some
>>> pointers to it? compute driver should have no reason to use mostly
>>> external objects, the OpenCL and Level Zero APIs should be good enough to
>>> figure this out.
>> Well, that is a pretty normal use case, AMD works the same way.
>>
>> In a multi-GPU compute stack, most of the data is shared between
>> different hardware devices.
>>
>> As I said before, looking at just the Vulkan use case is not a good idea
>> at all.
>>
> It's okay, I don't think anyone is doing that, some of these
> use-cases are buried in server land and you guys don't communicate
> them very well.

Yeah, well everybody is trying very hard to get away from those 
approaches :)

But so far there hasn't been any breakthrough.
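
For reference, this is roughly what that sharing looks like from userspace in 
a BO-based stack: a buffer is allocated on one device, exported as a dma-buf 
and imported on every other device that needs it, and each import then shows 
up as an external object in that device's VM. A minimal sketch using the 
libdrm PRIME helpers (the function name and error handling are illustrative 
only; the fds and handle are assumed to exist already):

/*
 * Illustrative only: share one buffer between two GPUs via dma-buf
 * (DRM PRIME). Assumes gpu0_fd and gpu1_fd are already-opened render
 * nodes and bo_handle is an existing GEM handle on gpu0.
 */
#include <stdint.h>
#include <stdio.h>
#include <xf86drm.h>

int share_bo_between_gpus(int gpu0_fd, int gpu1_fd, uint32_t bo_handle)
{
	uint32_t imported_handle;
	int prime_fd, ret;

	/* Export the GEM object from the first device as a dma-buf fd. */
	ret = drmPrimeHandleToFD(gpu0_fd, bo_handle, DRM_CLOEXEC, &prime_fd);
	if (ret) {
		fprintf(stderr, "export failed: %d\n", ret);
		return ret;
	}

	/* Import the same backing storage on the second device. */
	ret = drmPrimeFDToHandle(gpu1_fd, prime_fd, &imported_handle);
	if (ret) {
		fprintf(stderr, "import failed: %d\n", ret);
		return ret;
	}

	/*
	 * From the second device's VM point of view the imported object
	 * uses a foreign dma_resv, so it ends up on that VM's extobj list.
	 */
	return 0;
}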

>
> multi-gpu compute would I'd hope be moving towards HMM/SVM type
> solutions though?

Unfortunately not in the foreseeable future. HMM seems more and more 
like a dead end, at least for AMD.

AMD still has hardware support in all of their MI* products, but for 
Navi the features necessary for implementing HMM have been dropped. And 
it looks more and more like they are not going to come back.

In addition to that, from the software side Felix recently summarized it 
quite well in the HMM peer2peer discussion thread: a buffer object based 
approach is not only simpler to handle, but performance-wise also multiple 
orders of magnitude faster.
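
To make the contrast concrete, this is roughly what the explicit, BO-based 
path looks like with libdrm_amdgpu: memory is allocated up front and mapped 
at a known GPU virtual address, instead of being faulted in and migrated on 
first GPU access as with HMM/SVM. A rough sketch, with the wrapper name and 
parameters purely illustrative:

/*
 * Illustrative only: explicit BO allocation plus VA mapping via
 * libdrm_amdgpu. Assumes 'dev' is an initialized amdgpu_device_handle
 * and 'gpu_va' is a GPU virtual address range reserved by the caller.
 */
#include <string.h>
#include <amdgpu.h>
#include <amdgpu_drm.h>

int alloc_and_map_vram(amdgpu_device_handle dev, uint64_t gpu_va,
		       uint64_t size, amdgpu_bo_handle *out_bo)
{
	struct amdgpu_bo_alloc_request req;
	int ret;

	memset(&req, 0, sizeof(req));
	req.alloc_size = size;
	req.phys_alignment = 4096;
	req.preferred_heap = AMDGPU_GEM_DOMAIN_VRAM;

	/* Backing storage is created eagerly, not on first GPU access. */
	ret = amdgpu_bo_alloc(dev, &req, out_bo);
	if (ret)
		return ret;

	/* The mapping is set up once; no page faults in the hot path. */
	return amdgpu_bo_va_op(*out_bo, 0, size, gpu_va,
			       AMDGPU_VM_PAGE_READABLE |
			       AMDGPU_VM_PAGE_WRITEABLE,
			       AMDGPU_VA_OP_MAP);
}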

> I'm also not into looking at use-cases that used to be important but
> might not be as important going forward.

Well, multimedia applications and OpenGL are still around, but they're not 
the main focus any more.

Christian.

>
> Dave.
>
>
>> Christian.
>>
>>> Dave.
