Message-ID: <537A08BC.7060501@canonical.com>
Date: Mon, 19 May 2014 15:35:56 +0200
From: Maarten Lankhorst <maarten.lankhorst@...onical.com>
To: Christian König <deathsimple@...afone.de>,
airlied@...ux.ie
CC: nouveau@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
dri-devel@...ts.freedesktop.org
Subject: Re: [RFC PATCH v1 08/16] drm/radeon: use common fence implementation
for fences
On 19-05-14 14:30, Christian König wrote:
> On 19.05.2014 12:10, Maarten Lankhorst wrote:
>> On 19-05-14 10:27, Christian König wrote:
>>> On 19.05.2014 10:00, Maarten Lankhorst wrote:
>>> [SNIP]
>>> The problem here is that the whole approach conflicts, from a conceptual point of view, with the way we do reset handling. Every IOCTL or other call chain into the driver is protected by the read side of the exclusive_lock semaphore, so in the case of a GPU lockup we can take the write side of the semaphore and make sure that nobody else is accessing the hardware or the internal driver structures that are otherwise only changed at init time.
>>>
>>> Leaking a driver's IRQ context into another driver, as well as calling into a driver from atomic context, is quite an uncommon approach and should be considered very carefully.
>>>
>>> I would rather vote for a completely synchronous interface that only allows blocking waits and checking whether a fence is signaled from non-atomic context.
>>>
>>> If a driver needs to avoid blocking it should just use a workqueue, and checking a fence outside your own driver is probably better done in a bottom half handler anyway.
>>
>> Except that you might want to call something like fence_is_signaled() in
>> another driver to check whether you need to defer, or can submit the batch
>> buffer immediately, saving a bunch of context switches. Requiring
>> is_signaled to be atomic is really useful here because it means you can't
>> do too many scary things in your is_signaled handler.
>
> This is indeed a nice optimization, but nothing more. If you want to provide an is_signaled interface for atomic context then it should be optional, not mandatory.
See below.
>> In the case of enable_signaling it was the only sane solution, because
>> fence_signal can be called from irq context, and any calls after that to
>> fence_add_callback and fence_wait aren't allowed to do anything, so
>> fence_enable_sw_signaling and the default wait implementation must be
>> atomic. fence_wait itself doesn't have to be, so it's easy to grab
>> exclusive_lock there.
>
> I don't think you understood my point here: completely drop enable_signaling; it's unnecessary and only complicates the interface.
>
> We purposely avoided exactly this paradigm in the past and I haven't seen any good argument to start with it now.
In the common case a lot more fences will be emitted than will ever be waited on.
That means it makes sense to delay signaling a fence with fence_signal for as
long as possible. But when a fence user wants to work with a fence, some way is
needed to guarantee that the fence will actually complete. That is the idea
behind .enable_signaling: it tells the fence driver to call fence_signal on the
fence 'soon', because there are now waiters for it.
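Roughly, the driver side could look like the sketch below. This is only an
illustration, not radeon code: the my_* types and helpers are made up, and the
exact ops signature is whatever this series ends up with.

#include <linux/kernel.h>
#include <linux/fence.h>

struct my_ring;						/* hypothetical driver type */
bool my_ring_seq_passed(struct my_ring *ring, u32 seq);	/* hypothetical */
void my_ring_arm_completion_irq(struct my_ring *ring);		/* hypothetical */

struct my_fence {
	struct fence base;
	struct my_ring *ring;
	u32 seq;
};

/* Called by the core once someone actually starts waiting on the fence. */
static bool my_fence_enable_signaling(struct fence *f)
{
	struct my_fence *mf = container_of(f, struct my_fence, base);

	if (my_ring_seq_passed(mf->ring, mf->seq))
		return false;	/* already completed, nothing to arm */

	/*
	 * Arm the completion interrupt; its handler is responsible for
	 * calling fence_signal() on this fence 'soon'.
	 */
	my_ring_arm_completion_irq(mf->ring);
	return true;
}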
The atomic .signaled is optional, and can be set to NULL, but there is
no guarantee that fence_is_signaled will ever return true in that case,
unless fence_enable_sw_signaling is called (which calls .enable_signaling).
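In other words, a consumer that wants to poll with fence_is_signaled() but
cannot assume .signaled is implemented first has to enable signaling
explicitly. Hypothetical caller, busy-looping only for illustration:

#include <linux/fence.h>

void my_do_other_work(void);	/* hypothetical */

static void my_poll_until_done(struct fence *f)
{
	/* ask the driver to make fence_signal() happen eventually */
	fence_enable_sw_signaling(f);

	/* only after that is fence_is_signaled() guaranteed to become true */
	while (!fence_is_signaled(f))
		my_do_other_work();
}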
Providing a custom wait function is also optional in the interface; if the
default wait function is used, all waiters are woken up when fence_signal is called.
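Put together, an ops table that fills in only what this mail is about could
look roughly like this; other hooks are elided and the field names are
approximate, following my reading of the header in this series:

#include <linux/fence.h>

static bool my_fence_enable_signaling(struct fence *f);	/* see sketch above */

static const struct fence_ops my_fence_ops = {
	/* debugging/name hooks elided */
	.enable_signaling = my_fence_enable_signaling,	/* mandatory */
	.signaled	  = NULL,			/* optional atomic fast path */
	.wait		  = fence_default_wait,	/* waiters woken by fence_signal() */
};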
Removing enable_signaling would only make sense if fence_signal were removed too,
but that would mean that fence_is_signaled could no longer exist in the core fence
code, and would mean completely rewriting the interface.
~Maarten