Message-ID: <53CF6746.7010905@amd.com>
Date: Wed, 23 Jul 2014 09:41:58 +0200
From: Christian König <christian.koenig@....com>
To: Maarten Lankhorst <maarten.lankhorst@...onical.com>,
Daniel Vetter <daniel.vetter@...ll.ch>
CC: Christian König <deathsimple@...afone.de>,
"Dave Airlie" <airlied@...il.com>,
Thomas Hellstrom <thellstrom@...are.com>,
nouveau <nouveau@...ts.freedesktop.org>,
LKML <linux-kernel@...r.kernel.org>,
dri-devel <dri-devel@...ts.freedesktop.org>,
Ben Skeggs <bskeggs@...hat.com>,
"Deucher, Alexander" <alexander.deucher@....com>
Subject: Re: [Nouveau] [PATCH 09/17] drm/radeon: use common fence implementation for fences
On 23.07.2014 09:32, Maarten Lankhorst wrote:
> On 23-07-14 09:15, Christian König wrote:
>> On 23.07.2014 09:09, Daniel Vetter wrote:
>>> On Wed, Jul 23, 2014 at 9:06 AM, Maarten Lankhorst
>>> <maarten.lankhorst@...onical.com> wrote:
>>>>> Can we somehow avoid the need to call fence_signal() at all? The interrupts at least on radeon are way too unreliable for such a thing. Can enable_signalling fail? What's the reason for fence_signaled() in the first place?
>>>> It doesn't need to be completely reliable, or finish immediately.
>>>>
>>>> And any time wake_up_all(&rdev->fence_queue) is called, all the fences that had signaling enabled will be rechecked.
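
For reference, a rough sketch of how such a recheck could look as a
wait queue callback (simplified, locking elided; fence_wake and base
are members a series like this would have to add to radeon_fence, so
treat all names as illustrative, not as the actual patch):

static int radeon_fence_check_signaled(wait_queue_t *wait, unsigned mode,
                                       int flags, void *key)
{
        struct radeon_fence *fence;
        u64 seq;

        fence = container_of(wait, struct radeon_fence, fence_wake);

        /* Woken by wake_up_all(&rdev->fence_queue); recheck the
         * actual seqno instead of trusting whatever caused the wake. */
        seq = atomic64_read(&fence->rdev->fence_drv[fence->ring].last_seq);
        if (seq >= fence->seq) {
                fence_signal(&fence->base);
                __remove_wait_queue(&fence->rdev->fence_queue, wait);
        }
        return 0;
}
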
>>> I raised this already somewhere else, but should we have some common
>>> infrastructure in the core fence code to recheck fences periodically?
>>> radeon doesn't seem to be the only hw where this isn't reliable
>>> enough. Of course timer-based rechecking would only work if the driver
>>> provides the fence->signalled callback to recheck actual fence state.
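
Core-side rechecking could maybe look something like this (everything
here is invented for illustration; only the ->signaled callback,
fence_signal() and fence_put() are from the proposed common fence
code, and this assumes the fence_recheck struct holds a reference):

#include <linux/fence.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct fence_recheck {
        struct delayed_work work;
        struct fence *fence;    /* holds a reference */
};

static void fence_recheck_func(struct work_struct *work)
{
        struct fence_recheck *r =
                container_of(work, struct fence_recheck, work.work);
        struct fence *f = r->fence;

        /* Poll the driver in case the completion irq was lost;
         * without a ->signaled callback we can't poll at all. */
        if (f->ops->signaled && f->ops->signaled(f)) {
                fence_signal(f);
                fence_put(f);
                kfree(r);
                return;
        }
        schedule_delayed_work(&r->work, msecs_to_jiffies(100));
}
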
>> Yeah, agree. The proposal won't work reliably at all with radeon.
>>
>> Interrupts are accumulated before being sent to the CPU, e.g. you can get a single interrupt for multiple finished fences. If it's the interrupt for the last submitted fence that gets lost, you are completely screwed, because you won't get another interrupt.
>>
>> I had that problem multiple times while working on UVD support, resulting in the driver thinking that it couldn't submit more jobs because none of the interrupts for the already submitted fences came through.
> Yeah, but all the fences that have .enable_signaling called will get signaled from a single interrupt, or when any waiter calls radeon_fence_process.
You still need to check whether the fence is really signaled, because
radeon_fence_process might wake up the wait queue for something
completely different.
Apart from that, you once again rely on somebody else calling
radeon_fence_process. This will probably work most of the time, but
it's not 100% reliable.
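
In other words, the wait side has to re-evaluate the condition on
every wakeup, roughly like the existing radeon wait loop does (sketch
only; radeon_fence_seq_signaled and RADEON_FENCE_JIFFIES_TIMEOUT as in
radeon_fence.c, assuming rdev/seq/ring are in scope):

        long r;

        /* A spurious wake_up_all() is harmless here: the condition
         * is re-evaluated before we go back to sleep. */
        r = wait_event_timeout(rdev->fence_queue,
                               radeon_fence_seq_signaled(rdev, seq, ring),
                               RADEON_FENCE_JIFFIES_TIMEOUT);
        if (r > 0)
                return 0;       /* really signaled */
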
>
>> Apart from that, interrupts on Macs usually don't work at all, so we really need a solution where calling fence_signaled() is completely optional.
> I haven't had a problem with interrupts on my mbp after d1f9809ed1315c4cdc5760cf2f59626fd3276952, but it should be trivial to start a timer that periodically calls wake_up_all and gets its timeout reset by each call to radeon_fence_process. It could either be added as a work item or as a normal timer (disabled during gpu lockup recovery to prevent the check from fiddling with things it shouldn't).
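
Something like this, presumably (hypothetical field and function
names, untested, timer set up elsewhere with setup_timer()):

static void radeon_fence_watchdog(unsigned long data)
{
        struct radeon_device *rdev = (struct radeon_device *)data;

        /* Kick all waiters; each of them rechecks its own fence,
         * so a lost irq only costs us one watchdog period. */
        wake_up_all(&rdev->fence_queue);
        mod_timer(&rdev->fence_watchdog_timer, jiffies + HZ / 2);
}

/* ... and radeon_fence_process() would push the deadline out again
 * every time it actually runs:
 *
 *      mod_timer(&rdev->fence_watchdog_timer, jiffies + HZ / 2);
 */
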
That will probably work, but once again it sounds like we're forcing
the driver to fit the fence implementation instead of the other way
around.
Why is that fence_signaled() call needed in the first place?
Christian.
>
> ~Maarten
>