Message-ID: <e517dc90-0289-7339-e36e-54ba2635ed1f@asahilina.net>
Date: Thu, 9 Mar 2023 22:48:37 +0900
From: Asahi Lina <lina@...hilina.net>
To: Christian König <christian.koenig@....com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...il.com>,
Daniel Vetter <daniel@...ll.ch>,
Miguel Ojeda <ojeda@...nel.org>,
Alex Gaynor <alex.gaynor@...il.com>,
Wedson Almeida Filho <wedsonaf@...il.com>,
Boqun Feng <boqun.feng@...il.com>, Gary Guo <gary@...yguo.net>,
Björn Roy Baron <bjorn3_gh@...tonmail.com>,
Sumit Semwal <sumit.semwal@...aro.org>,
Luben Tuikov <luben.tuikov@....com>,
Jarkko Sakkinen <jarkko@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Alyssa Rosenzweig <alyssa@...enzweig.io>,
Karol Herbst <kherbst@...hat.com>,
Ella Stanforth <ella@...unix.org>,
Faith Ekstrand <faith.ekstrand@...labora.com>,
Mary <mary@...y.zone>, linux-kernel@...r.kernel.org,
dri-devel@...ts.freedesktop.org, rust-for-linux@...r.kernel.org,
linux-media@...r.kernel.org, linaro-mm-sig@...ts.linaro.org,
linux-sgx@...r.kernel.org, asahi@...ts.linux.dev
Subject: Re: [PATCH RFC 11/18] drm/scheduler: Clean up jobs when the scheduler
is torn down
On 09/03/2023 20.47, Christian König wrote:
> Am 09.03.23 um 10:43 schrieb Asahi Lina:
>> On 09/03/2023 17.42, Christian König wrote:
>>> Am 08.03.23 um 20:37 schrieb Asahi Lina:
>>>> On 09/03/2023 03.12, Christian König wrote:
>>>>> Am 08.03.23 um 18:32 schrieb Asahi Lina:
>>>>>> [SNIP]
>>>>>> Yes but... none of this cleans up jobs that are already submitted by the
>>>>>> scheduler and in its pending list, with registered completion callbacks,
>>>>>> which were already popped off of the entities.
>>>>>>
>>>>>> *That* is the problem this patch fixes!
>>>>> Ah! Yes that makes more sense now.
>>>>>
>>>>>>> We could add a warning when users of this API don't do this
>>>>>>> correctly, but cleaning up incorrect API use is clearly something we
>>>>>>> don't want here.
>>>>>> It is the job of the Rust abstractions to make incorrect API use that
>>>>>> leads to memory unsafety impossible. So even if you don't want that in
>>>>>> C, it's my job to do that for Rust... and right now, I just can't
>>>>>> because drm_sched doesn't provide an API that can be safely wrapped
>>>>>> without weird bits of babysitting functionality on top (like tracking
>>>>>> jobs outside or awkwardly making jobs hold a reference to the scheduler
>>>>>> and defer dropping it to another thread).
>>>>> Yeah, that was discussed before but rejected.
>>>>>
>>>>> The argument was that the upper layer needs to wait for the hw to become
>>>>> idle before the scheduler can be destroyed anyway.
>>>> Unfortunately, that's not a requirement you can encode in the Rust type
>>>> system easily as far as I know, and Rust safety rules mean we need to
>>>> make it safe even if the upper layer doesn't do this... (or else we have
>>>> to mark the entire drm_sched abstraction unsafe, but that would be a pity).
>>> Yeah, that should really not be something we should do.
>>>
>>> But you could make the scheduler depend on your fw context object,
>>> couldn't you?
>> Yes, and that would fix the problem for this driver, but it wouldn't
>> make the abstraction safe. The thing is we have to make it *impossible*
>> to misuse drm_sched in such a way that it crashes, at the Rust
>> abstraction level. If we start depending on the driver following rules
>> like that, that means the drm_sched abstraction has to be marked unsafe.
>>
>>> Detaching the scheduler from the underlying hw fences is certainly
>>> possible, but we removed that functionality because some people
>>> tried to force push some Windows recovery module into Linux. We are in
>>> the process of reverting that and cleaning things up once more, but that
>>> will take a while.
>> Okay, but I don't see why that should block the Rust abstractions...
>
> Because even with removing the fence callback this is inherently unsafe.
>
> You not only need to remove the callback, but also make sure that no
> parallel timeout handling is running.
If by that you mean that the timeout handling functions aren't being
called by the driver, then that's implied. If the scheduler is being
dropped, by definition there are no references left to call into the
scheduler directly from the Rust side. So we only need to worry about
what drm_sched itself does.
Right now the cleanup function tears down the timeout work at the end,
but it probably makes sense to do it at the start? Then if we do that
and stop the kthread, we can be really sure nothing else is accessing
the scheduler and we can clean up without taking any locks:
Roughly:
void drm_sched_fini(struct drm_gpu_scheduler *sched)
{
	sched->ready = false; /* Should probably do this first? */
	kthread_stop(sched->thread);
	cancel_delayed_work_sync(&sched->work_tdr);
	/* Clean up the pending_list here */
}
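
For the pending_list part, I'm thinking of something roughly like the
sketch below (untested, and assuming the hw fence callback is the one
drm_sched_main registers on job->cb against s_fence->parent, so the
exact field names may be off):

	struct drm_sched_job *job, *tmp;

	list_for_each_entry_safe(job, tmp, &sched->pending_list, list) {
		/* Detach from the hw fence so the completion callback
		 * can't fire into a scheduler that's being freed. */
		if (job->s_fence->parent)
			dma_fence_remove_callback(job->s_fence->parent,
						  &job->cb);
		list_del_init(&job->list);
		sched->ops->free_job(job);
	}

Since the kthread and the timeout work are already stopped at that
point, nothing else should be touching pending_list, so this shouldn't
need job_list_lock either.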
I'm also not sure what the rest of the drm_sched_fini() function is
doing right now. It goes through all the entities and removes them, and
then wakes up entities stuck in drm_sched_entity_flush()... but didn't
we just agree that the API requires users to tear down entities before
tearing down the scheduler anyway?
~~ Lina