Message-ID: <53CF8010.9060809@amd.com>
Date: Wed, 23 Jul 2014 11:27:44 +0200
From: Christian König <christian.koenig@....com>
To: Daniel Vetter <daniel.vetter@...ll.ch>,
Christian König
<deathsimple@...afone.de>
CC: Maarten Lankhorst <maarten.lankhorst@...onical.com>,
Thomas Hellstrom <thellstrom@...are.com>,
nouveau <nouveau@...ts.freedesktop.org>,
LKML <linux-kernel@...r.kernel.org>,
dri-devel <dri-devel@...ts.freedesktop.org>,
Ben Skeggs <bskeggs@...hat.com>,
"Deucher, Alexander" <alexander.deucher@....com>
Subject: Re: [Nouveau] [PATCH 09/17] drm/radeon: use common fence implementation
for fences
On 23.07.2014 10:54, Daniel Vetter wrote:
> On Wed, Jul 23, 2014 at 10:46 AM, Christian König
> <deathsimple@...afone.de> wrote:
>> On 23.07.2014 10:42, Daniel Vetter wrote:
>>
>>> On Wed, Jul 23, 2014 at 10:25 AM, Maarten Lankhorst
>>> <maarten.lankhorst@...onical.com> wrote:
>>>> In this case if the sync was to i915 the i915 lockup procedure would take
>>>> care of itself. It wouldn't fix radeon, but it would at least unblock your
>>>> intel card again. I haven't specifically added a special case to attempt to
>>>> unblock external fences, but I've considered it. :-)
>>> Actually the i915 reset stuff relies crucially on being able to kick
>>> all waiters holding driver locks. Since the current fence code only
>>> exposes an opaque wait function without exposing the underlying wait
>>> queue we won't be able to sleep on both the fence queue and the reset
>>> queue. So this would pose a problem if we added fence_wait calls to
>>> our driver.
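Illustration only: the pattern Daniel describes amounts to sleeping on
the fence and on the driver's own reset queue at the same time. struct
fence exposes no wait queue head, so the "wq" field and the other names
below are made up to show the idea, they are not real API:

#include <linux/errno.h>
#include <linux/fence.h>
#include <linux/sched.h>
#include <linux/wait.h>

struct my_gpu {				/* hypothetical driver state */
	wait_queue_head_t reset_queue;
	bool reset_in_progress;
};

/* assumes struct fence had a wait_queue_head_t wq, which it doesn't */
static long wait_fence_or_reset(struct fence *f, struct my_gpu *gpu)
{
	DEFINE_WAIT(fwait);
	DEFINE_WAIT(rwait);

	for (;;) {
		/* queue ourselves on both wait queues ... */
		prepare_to_wait(&f->wq, &fwait, TASK_UNINTERRUPTIBLE);
		prepare_to_wait(&gpu->reset_queue, &rwait,
				TASK_UNINTERRUPTIBLE);
		if (fence_is_signaled(f) || gpu->reset_in_progress)
			break;
		/* ... and sleep until either side wakes us up */
		schedule();
	}
	finish_wait(&f->wq, &fwait);
	finish_wait(&gpu->reset_queue, &rwait);

	return gpu->reset_in_progress ? -EAGAIN : 0;
}

With only an opaque fence_wait() the reset code has no way to wake such
a waiter up again so that it re-checks the reset condition.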
>>
>> And apart from that, I really think I misunderstood Maarten, but his
>> explanation sounds like i915 would do a reset because Radeon is locked
>> up, right?
>>
>> Well, if that's really the case then I would question the interface even
>> more, because that is really nonsense.
> I disagree - the entire point of fences is that we can do multi-gpu
> work asynchronously. So by the time we notice that radeon's dead we
> have accepted the batch from userspace already. The only way to get
> rid of it again is through our reset machinery, which also tells
> userspace that we couldn't execute the batch. Whether we actually need
> to do a hw reset depends upon whether we've committed the batch to the
> hw already. Atm that's always the case, but the scheduler will change
> that. So I have no issues with intel doing a reset when other drivers
> don't signal fences.
You submit a job to the hardware and then block the job to wait for
radeon to finish? Well then this would indeed require a hardware
reset, but wouldn't that make the whole problem even worse?
I mean currently we block one userspace process to wait for other
hardware to finish with a buffer, but what you are describing here
blocks the whole hardware to wait for other hardware, which in the end
blocks all userspace processes accessing the hardware.
Talking about alternative approaches, wouldn't it be simpler to just
offload the waiting to a different kernel or userspace thread?
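
Just as a rough sketch of what I mean (all names here are made up, this
is not a real radeon patch): queue the job together with the foreign
fence, let a kernel worker sleep on the fence and only then push the
job to the ring:

#include <linux/fence.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct deferred_job {
	struct work_struct work;
	struct fence *depends;		/* fence from the other driver */
	/* ... driver specific job data ... */
};

static void deferred_submit(struct work_struct *work)
{
	struct deferred_job *job =
		container_of(work, struct deferred_job, work);

	/* sleep in a worker thread instead of stalling the ring */
	if (fence_wait(job->depends, false) < 0) {
		/* the other device is wedged, drop the job and
		 * report the error to userspace instead */
	} else {
		/* now really push the job to our ring */
	}

	fence_put(job->depends);
	kfree(job);
}

static void queue_deferred_submit(struct deferred_job *job)
{
	INIT_WORK(&job->work, deferred_submit);
	schedule_work(&job->work);
}

That way only the process that submitted the job waits for the result,
and the hardware itself never blocks on a foreign fence.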
Christian.
>
> Also this isn't a problem with the interface really, but with the
> current implementation for radeon. And getting cross-driver reset
> notifications right will require more work either way.
> -Daniel