Message-ID: <53CF8693.1040006@canonical.com>
Date: Wed, 23 Jul 2014 11:55:31 +0200
From: Maarten Lankhorst <maarten.lankhorst@...onical.com>
To: Christian König <deathsimple@...afone.de>,
Daniel Vetter <daniel.vetter@...ll.ch>,
Christian König <christian.koenig@....com>
CC: Thomas Hellstrom <thellstrom@...are.com>,
nouveau <nouveau@...ts.freedesktop.org>,
LKML <linux-kernel@...r.kernel.org>,
dri-devel <dri-devel@...ts.freedesktop.org>,
Ben Skeggs <bskeggs@...hat.com>,
"Deucher, Alexander" <alexander.deucher@....com>
Subject: Re: [Nouveau] [PATCH 09/17] drm/radeon: use common fence implementation
for fences
On 23-07-14 11:47, Christian König wrote:
> On 23.07.2014 11:44, Daniel Vetter wrote:
>> On Wed, Jul 23, 2014 at 11:39 AM, Daniel Vetter <daniel.vetter@...ll.ch> wrote:
>>> The scheduler needs to keep track of a lot of fences, so I think we'll
>>> have to register callbacks, not use a simple wait function. We must keep
>>> track of all the non-i915 fences for all outstanding batches. Also, the
>>> scheduler doesn't eliminate the hw queue, it only keeps it much shorter
>>> so that we can sneak in higher-priority work.
>>>
>>> Really, whether we have a scheduler or not is orthogonal.
>> Also see my other comment about interactions between wait_fence and
>> the i915 reset logic. We can't actually use it from within the
>> scheduler code since that would deadlock.
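[Inline note for illustration: with the proposed struct fence API, the
scheduler would register a callback per dependency rather than block.
This is only a rough sketch; sched_dep, sched_job and
sched_job_dependency_done() are made-up names, not anything from this
series.]

#include <linux/fence.h>

/* hypothetical per-dependency bookkeeping */
struct sched_dep {
	struct fence_cb cb;	/* node handed to fence_add_callback() */
	struct sched_job *job;	/* hypothetical batch gated by this fence */
};

static void sched_dep_func(struct fence *f, struct fence_cb *cb)
{
	struct sched_dep *dep = container_of(cb, struct sched_dep, cb);

	/* runs in whatever context signals the fence, possibly irq */
	sched_job_dependency_done(dep->job);	/* hypothetical helper */
}

/* register instead of blocking; -ENOENT means it already signaled */
static int sched_track_fence(struct fence *f, struct sched_dep *dep)
{
	int ret = fence_add_callback(f, &dep->cb, sched_dep_func);

	if (ret == -ENOENT) {
		sched_job_dependency_done(dep->job);
		ret = 0;
	}
	return ret;
}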
>
> Yeah, I see. You would need some way to abort the waiting on other devices' fences in case of a lockup.
>
> What about a userspace thread to offload waiting and command submission to?
You would still need enable_signaling, otherwise polling on the dma-buf wouldn't work. ;-)
You can't wait synchronously on multiple shared fences; you have to poll for that.
And the dma-buf would still hold fences belonging to both drivers, and those fences would still be signaled and waited on from outside the driver.
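
To illustrate the enable_signaling point, a driver's fence_ops would
look roughly like this. Again just a sketch; all mydrv_* names are
made up:

#include <linux/fence.h>

/*
 * Until .enable_signaling returns true, the core may never get an
 * interrupt for this fence, so callbacks added with
 * fence_add_callback(), which poll on a dma-buf relies on, would
 * never fire.
 */
static bool mydrv_fence_enable_signaling(struct fence *f)
{
	struct mydrv_fence *mf = container_of(f, struct mydrv_fence, base);

	if (mydrv_fence_completed(mf))	/* hypothetical hw check */
		return false;		/* already signaled, no irq needed */

	mydrv_arm_completion_irq(mf);	/* hypothetical: enable the irq */
	return true;	/* irq handler must now call fence_signal() */
}

static const struct fence_ops mydrv_fence_ops = {
	.get_driver_name   = mydrv_get_driver_name,
	.get_timeline_name = mydrv_get_timeline_name,
	.enable_signaling  = mydrv_fence_enable_signaling,
	.wait              = fence_default_wait,
};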
~Maarten