Message-ID: <aCcLMhS5kyD60PEX@pollux>
Date: Fri, 16 May 2025 11:53:54 +0200
From: Danilo Krummrich <dakr@...nel.org>
To: Tvrtko Ursulin <tvrtko.ursulin@...lia.com>
Cc: Philipp Stanner <phasta@...nel.org>, Lyude Paul <lyude@...hat.com>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Matthew Brost <matthew.brost@...el.com>,
Christian König <ckoenig.leichtzumerken@...il.com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
dri-devel@...ts.freedesktop.org, nouveau@...ts.freedesktop.org,
linux-kernel@...r.kernel.org, Danilo Krummrich <dakr@...hat.com>
Subject: Re: [PATCH v2 2/6] drm/sched: Prevent teardown waitque from blocking
too long
On Fri, May 16, 2025 at 10:33:30AM +0100, Tvrtko Ursulin wrote:
> On 24/04/2025 10:55, Philipp Stanner wrote:
> > + * @kill_fence_context: kill the fence context belonging to this scheduler
>
> Which fence context would that be? ;)
There's one fence context per ring, and a scheduler instance represents a single
ring. So, what should be specified here?
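For illustration, this is roughly the 1:1 relation I have in mind on the driver
side; the struct and field names below are made up and not taken from the patch:

	/*
	 * Hypothetical driver-side layout, only to illustrate the 1:1
	 * relation above; none of these names come from the patch.
	 */
	struct my_ring {
		struct drm_gpu_scheduler sched;		/* one scheduler instance per ring */
		u64 fence_context;			/* one fence context per ring, from dma_fence_context_alloc() */
		spinlock_t fence_lock;			/* protects pending_fences, also used as the dma_fence lock */
		struct list_head pending_fences;	/* HW fences not yet signaled */
	};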
> Also, "fence context" would be a new terminology in gpu_scheduler.h API
> level. You could call it ->sched_fini() or similar to signify at which point
> in the API it gets called and then the fact it takes sched as parameter
> would be natural.
The driver should tear down the fence context in this callback, not the whole
scheduler. ->sched_fini() would hence be misleading.
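To make it concrete, here is a rough sketch of what a driver could do in such a
callback, i.e. force-signal all pending hardware fences of the ring so nothing
can block on them anymore. The callback signature, the structs and the locking
scheme are assumptions on my side (building on the hypothetical struct my_ring
above), not taken from the patch:

	struct my_hw_fence {
		struct dma_fence base;
		struct list_head node;	/* entry in my_ring::pending_fences */
	};

	static void my_ring_kill_fence_context(struct drm_gpu_scheduler *sched)
	{
		struct my_ring *ring = container_of(sched, struct my_ring, sched);
		struct my_hw_fence *f, *tmp;
		unsigned long flags;

		/* ring->fence_lock is also the dma_fence lock of the HW fences. */
		spin_lock_irqsave(&ring->fence_lock, flags);
		list_for_each_entry_safe(f, tmp, &ring->pending_fences, node) {
			dma_fence_set_error(&f->base, -ECANCELED);
			dma_fence_signal_locked(&f->base);
			list_del_init(&f->node);
		}
		spin_unlock_irqrestore(&ring->fence_lock, flags);
	}

Depending on the driver, dropping references and further HW cleanup would of
course go here as well; the point is just that after this callback no fence of
this context remains unsignaled.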
> We also probably want some commentary on the topic of indefinite (or very
> long at least) blocking a thread exit / SIGINT/TERM/KILL time.
You mean in case the driver does implement the callback, but does *not* properly
tear down the fence context? So, you are asking to describe the potential
consequences of drivers having bugs in their implementation of the callback? Or
something else?
> Is the idea to let drivers shoot themselves in the foot or what?
Please abstain from such rhetorical questions; that's not a good way of having
technical discussions.
- Danilo