Message-ID: <53CE56ED.4040109@vodafone.de>
Date: Tue, 22 Jul 2014 14:19:57 +0200
From: Christian König <deathsimple@...afone.de>
To: Christian König <christian.koenig@....com>,
Dave Airlie <airlied@...il.com>,
Maarten Lankhorst <maarten.lankhorst@...onical.com>,
Thomas Hellstrom <thellstrom@...are.com>,
nouveau <nouveau@...ts.freedesktop.org>,
LKML <linux-kernel@...r.kernel.org>,
dri-devel <dri-devel@...ts.freedesktop.org>,
Ben Skeggs <bskeggs@...hat.com>,
"Deucher, Alexander" <alexander.deucher@....com>
Subject: Re: [PATCH 09/17] drm/radeon: use common fence implementation for
fences
On 22.07.2014 13:57, Daniel Vetter wrote:
> On Tue, Jul 22, 2014 at 01:46:07PM +0200, Daniel Vetter wrote:
>> On Tue, Jul 22, 2014 at 10:43:13AM +0200, Christian König wrote:
>>> On 22.07.2014 06:05, Dave Airlie wrote:
>>>> On 9 July 2014 22:29, Maarten Lankhorst <maarten.lankhorst@...onical.com> wrote:
>>>>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@...onical.com>
>>>>> ---
>>>>> drivers/gpu/drm/radeon/radeon.h | 15 +-
>>>>> drivers/gpu/drm/radeon/radeon_device.c | 60 ++++++++-
>>>>> drivers/gpu/drm/radeon/radeon_fence.c | 223 ++++++++++++++++++++++++++------
>>>>> 3 files changed, 248 insertions(+), 50 deletions(-)
>>>>>
>>>> From what I can see, this is still suffering from the problem that we
>>>> need to find a proper solution to.
>>>>
>>>> My summary of the issues after talking to Jerome and Ben and
>>>> re-reading things is:
>>>>
>>>> We really need to work out a better interface into the drivers to be
>>>> able to avoid random atomic entry points.
>>> Which is exactly what I criticized from the very beginning. Good to
>>> know that I'm not the only one thinking that this isn't such a good idea.
>> I guess I've lost context a bit, but which atomic entry point are we
>> talking about? Afaics the only one that's mandatory is the
>> fence->signaled callback to check whether a fence really has been
>> signalled. It's used internally by the fence code to avoid spurious
>> wakeups. Afaik that should be doable already on any hardware. If that's
>> not the case then we can always track the signalled state in software and
>> double-check in a worker thread before updating the sw state. And wrap
>> this all up into a special fence class if there's more than one driver
>> needing this.
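(For the record, the software-tracked variant described above would look
roughly like the sketch below. This assumes the fence API from this series
(fence_signal(), a driver-provided .signaled hook); all radeon_sw_* names,
the last_hw_seq field and the radeon_sw_read_hw_seq() helper are made up
here purely for illustration:

#include <linux/fence.h>
#include <linux/workqueue.h>

struct radeon_sw_fence {
        struct fence base;
        struct radeon_device *rdev;
        u64 seq;                        /* hw sequence number this fence waits for */
        struct delayed_work check_work; /* periodically double-checks the hw */
};

/* .signaled callback: only compares against a value the driver itself has
 * cached (last_hw_seq is a made-up field), never touches the hardware, so
 * it is cheap and safe to call from any context. */
static bool radeon_sw_fence_signaled(struct fence *f)
{
        struct radeon_sw_fence *fence =
                container_of(f, struct radeon_sw_fence, base);

        return (u64)atomic64_read(&fence->rdev->last_hw_seq) >= fence->seq;
}

/* Worker: the only place that actually reads the hardware fence value. */
static void radeon_sw_fence_check(struct work_struct *work)
{
        struct radeon_sw_fence *fence =
                container_of(work, struct radeon_sw_fence, check_work.work);
        u64 hw_seq = radeon_sw_read_hw_seq(fence->rdev);  /* made-up helper */

        atomic64_set(&fence->rdev->last_hw_seq, hw_seq);
        if (hw_seq >= fence->seq)
                fence_signal(&fence->base);   /* runs any registered callbacks */
        else
                schedule_delayed_work(&fence->check_work, msecs_to_jiffies(10));
}

The point being that .signaled never leaves the software state; only the
worker, or the driver's own interrupt handler, touches the hardware and
calls fence_signal().)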
> One thing I've forgotten: The i915 scheduler that's floating around runs
> its bottom half from irq context. So I really want to be able to check
> fence state from irq context and I also want to make it possible
> (possible! not mandatory) to register callbacks which are run from any
> context asap after the fence is signalled.
NAK, that's just the bad design I've talked about. Checking fence state
inside the same driver from interrupt context is OK, because it's the
driver's own interrupt that we are talking about here.

Checking fence status from another driver's interrupt context is what
really concerns me here, because your driver doesn't have the slightest
idea whether the called driver is really capable of checking the fence
right now.
> If the radeon hw/driver doesn't want to cope with that complexity, we can
> fully isolate it with the sw tracked fence state if you don't like
> Maarten's radeon implementation. But forcing everyone to forgo this just
> because you don't like it and don't want to use it in radeon doesn't sound
> right.
While it's clearly a hack, Maarten's solution for radeon would indeed
work, but that's not really the point here.

It's just that I think leaking interrupt context from one driver into
another driver is a really bad idea from a design point of view.

And calling into a driver while in atomic context to check whether a fence
has been signaled doesn't sound like a good idea either, because it limits
far too much what the called driver can do to check the status of a fence.
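(To make that last point concrete: a .signaled implementation that can be
called in atomic context is limited to roughly the following; the my_fence
names below are made up:

#include <linux/fence.h>

struct my_fence {
        struct fence base;
        struct device *dev;
        struct mutex hw_lock;
        volatile u64 *cpu_seq_addr;     /* written back by the hardware */
        u64 seq;
};

/* Called by the fence core, potentially under a spinlock or from irq
 * context, so nothing in here may sleep. */
static bool my_fence_signaled(struct fence *f)
{
        struct my_fence *fence = container_of(f, struct my_fence, base);

        /* NOT allowed here, because both can sleep:
         *   pm_runtime_get_sync(fence->dev);   -- waking the hardware up
         *   mutex_lock(&fence->hw_lock);       -- taking a sleeping lock
         */

        /* Only cheap, non-sleeping work is safe, like reading a fence
         * value the hardware writes back to system memory. */
        return ACCESS_ONCE(*fence->cpu_seq_addr) >= fence->seq;
}

Anything more involved, like powering the block back up to read a register,
simply isn't possible from there.)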
Christian.