Message-ID: <CAPM=9twSHGRoSoXxG+hz1T8iBX2VgPFvFsNCDnK_nHW9WJYBtw@mail.gmail.com>
Date: Thu, 12 Oct 2023 20:33:13 +1000
From: Dave Airlie <airlied@...il.com>
To: Christian König <christian.koenig@....com>
Cc: Thomas Hellström (Intel)
<thomas_os@...pmail.org>, Danilo Krummrich <dakr@...hat.com>,
daniel@...ll.ch, matthew.brost@...el.com,
thomas.hellstrom@...ux.intel.com, sarah.walker@...tec.com,
donald.robson@...tec.com, boris.brezillon@...labora.com,
faith.ekstrand@...labora.com, bskeggs@...hat.com,
Liam.Howlett@...cle.com, nouveau@...ts.freedesktop.org,
linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH drm-misc-next 2/3] drm/gpuva_mgr: generalize
dma_resv/extobj handling and GEM validation
On Wed, 11 Oct 2023 at 17:07, Christian König <christian.koenig@....com> wrote:
>
> On 10.10.23 22:23, Dave Airlie wrote:
> >> I think we're then optimizing for different scenarios. Our compute
> >> driver will use mostly external objects only, and if shared, I don't
> >> foresee them bound to many VMs. What saves us currently here is that in
> >> compute mode we only really traverse the extobj list after a preempt
> >> fence wait, or when a vm is using a new context for the first time. So
> >> vm's extobj list is pretty large. Each bo's vma list will typically be
> >> pretty small.
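
Just so I'm reading the above right, the pattern being described is
roughly the sketch below (names are purely illustrative, not real
driver code, and it assumes a per-VM external-object list along the
lines of what this series adds):

	/*
	 * Illustrative only (needs <linux/list.h>): in long-running/compute
	 * mode the VM-wide external-object list is only walked after a
	 * preempt fence wait, or on first use of a new context, not on
	 * every submission.
	 */
	static int my_vm_revalidate_extobjs(struct my_vm *vm)
	{
		struct my_extobj *ext;
		int ret;

		/* One pass over the VM's extobj list per wakeup. */
		list_for_each_entry(ext, &vm->extobj_list, vm_link) {
			/* Hypothetical helper: relock and revalidate the BO. */
			ret = my_bo_revalidate(ext->bo);
			if (ret)
				return ret;
		}

		/* Re-arm preempt fences / resume the queues afterwards. */
		return 0;
	}

i.e. the per-wakeup cost scales with the size of the VM's extobj list,
which is why its length matters here.
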
> > Can I ask why we are optimising for this userspace, this seems
> > incredibly broken.
> >
> > We've had this sort of problem in the past with Intel letting the tail
> > wag the horse; does anyone remember optimising relocations for a
> > userspace that didn't actually need to use relocations?
> >
> > We need to ask why this userspace is doing this, can we get some
> > pointers to it? A compute driver should have no reason to use mostly
> > external objects; the OpenCL and Level Zero APIs should be good enough
> > to figure this out.
>
> Well that is a pretty normal use case; AMD works the same way.
>
> In a multi-GPU compute stack you have most of the data shared between
> different hardware devices.
>
> As I said before, looking at just the Vulkan use case is not a good idea
> at all.
>
It's okay, I don't think anyone is doing that; some of these use-cases
are buried in server land and you guys don't communicate them very
well.

I'd hope multi-GPU compute would be moving towards HMM/SVM type
solutions though?

I'm also not into looking at use-cases that used to be important but
might not be as important going forward.

Dave.
> Christian.
>
> >
> > Dave.
>