Message-ID: <CAPM=9tyf4m6gtUQ0BCraf0gB06_pxXV8gpQQsvWjeJnczmJkQQ@mail.gmail.com>
Date: Wed, 13 Sep 2023 17:05:42 +1000
From: Dave Airlie <airlied@...il.com>
To: Boris Brezillon <boris.brezillon@...labora.com>
Cc: Thomas Hellström
<thomas.hellstrom@...ux.intel.com>,
Danilo Krummrich <dakr@...hat.com>, daniel@...ll.ch,
matthew.brost@...el.com, sarah.walker@...tec.com,
donald.robson@...tec.com, christian.koenig@....com,
faith.ekstrand@...labora.com, dri-devel@...ts.freedesktop.org,
nouveau@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize
dma_resv/extobj handling and GEM validation
On Wed, 13 Sept 2023 at 17:03, Boris Brezillon
<boris.brezillon@...labora.com> wrote:
>
> On Tue, 12 Sep 2023 18:20:32 +0200
> Thomas Hellström <thomas.hellstrom@...ux.intel.com> wrote:
>
> > > +/**
> > > + * get_next_vm_bo_from_list() - get the next vm_bo element
> > > + * @__gpuvm: The GPU VM
> > > + * @__list_name: The name of the list we're iterating on
> > > + * @__local_list: A pointer to the local list used to store already iterated items
> > > + * @__prev_vm_bo: The previous element we got from get_next_vm_bo_from_list()
> > > + *
> > > + * This helper is here to provide lockless list iteration. Lockless as in, the
> > > + * iterator releases the lock immediately after picking the first element from
> > > + * the list, so list insertion and deletion can happen concurrently.
> >
> > Are the list spinlocks needed for that async state update from within
> > the dma-fence critical section we've discussed previously?
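For context, the pattern that kernel-doc describes boils down to
roughly this (a simplified sketch with illustrative names, not the
actual gpuvm code):

/*
 * Pop one element under the spinlock and park it on a local list so
 * it isn't visited twice, then drop the lock before processing it.
 * Concurrent insertions/deletions on @list can happen in between.
 */
static struct vm_bo *get_next(struct list_head *list, spinlock_t *lock,
                              struct list_head *local_list)
{
        struct vm_bo *vm_bo = NULL;

        spin_lock(lock);
        if (!list_empty(list)) {
                vm_bo = list_first_entry(list, struct vm_bo, entry);
                list_move_tail(&vm_bo->entry, local_list);
        }
        spin_unlock(lock);

        return vm_bo; /* NULL terminates the iteration */
}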
>
> Any driver calling _[un]link() from its drm_gpu_scheduler::run_job()
> hook will be in this situation (Panthor at the moment, PowerVR soon). I
> get that Xe and Nouveau don't need that because they update the VM
> state early (in the ioctl path), but I keep thinking this will hurt us
> if we don't think it through from the beginning. Once you've made this
> logic depend only on resv locks, it will be pretty hard to get back to
> a solution which lets synchronous VM_BINDs take precedence over
> asynchronous requests, and, with vkQueueBindSparse() passing external
> deps (plus the fact that the VM_BIND queue might be pretty deep), it
> can take a long time to get your synchronous VM_BIND executed...
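To make that concrete, the run_job() situation looks roughly like this
(a sketch with made-up names, not actual Panthor/PowerVR code):

/*
 * run_job() executes in the dma-fence signalling path, where taking
 * dma_resv locks (which can sleep) is not allowed. So the vm_bo list
 * updates done by the _[un]link() helpers called from here have to be
 * protected by spinlocks rather than the resv lock.
 */
static struct dma_fence *bind_run_job(struct drm_sched_job *sched_job)
{
        struct bind_job *job = container_of(sched_job, struct bind_job, base);

        vm_bo_link(job->vm_bo);    /* hypothetical _link() wrapper */

        return apply_mapping(job); /* fence signalled when the bind lands */
}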
Btw, what is the use case for this? Do we have actual Vulkan
applications that we know will have problems here?

It feels like a bit of premature optimisation, but maybe we have use
cases.

Dave.