Message-ID: <f0ae2d411c21e799491244fe49880a4acca32918.camel@mailbox.org>
Date: Tue, 22 Apr 2025 16:16:48 +0200
From: Philipp Stanner <phasta@...lbox.org>
To: Danilo Krummrich <dakr@...nel.org>, Tvrtko Ursulin
	 <tvrtko.ursulin@...lia.com>
Cc: phasta@...nel.org, Lyude Paul <lyude@...hat.com>, David Airlie
 <airlied@...il.com>, Simona Vetter <simona@...ll.ch>, Matthew Brost
 <matthew.brost@...el.com>, Christian König
 <ckoenig.leichtzumerken@...il.com>, Maarten Lankhorst
 <maarten.lankhorst@...ux.intel.com>, Maxime Ripard <mripard@...nel.org>, 
 Thomas Zimmermann <tzimmermann@...e.de>, dri-devel@...ts.freedesktop.org,
 nouveau@...ts.freedesktop.org,  linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/5] drm/sched: Warn if pending list is not empty

On Tue, 2025-04-22 at 16:08 +0200, Danilo Krummrich wrote:
> On Tue, Apr 22, 2025 at 02:39:21PM +0100, Tvrtko Ursulin wrote:
> > 
> > On 22/04/2025 13:32, Danilo Krummrich wrote:
> > > On Tue, Apr 22, 2025 at 01:07:47PM +0100, Tvrtko Ursulin wrote:
> > > > 
> > > > On 22/04/2025 12:13, Danilo Krummrich wrote:
> > > > > On Tue, Apr 22, 2025 at 11:39:11AM +0100, Tvrtko Ursulin wrote:
> > > > > > The question I raised is whether there are other drivers which
> > > > > > manage to clean up everything correctly (like the mock
> > > > > > scheduler does), but still trigger that warning. Maybe there
> > > > > > are none and the mock scheduler is the only false positive.
> > > > > 
> > > > > So far the scheduler simply does not give any guideline on how
> > > > > to address the problem, hence every driver does something (or
> > > > > nothing, effectively ignoring the problem). This is what we want
> > > > > to fix.
> > > > > 
> > > > > The mock scheduler keeps its own list of pending jobs and on
> > > > > teardown stops the scheduler's workqueues, traverses its own
> > > > > list and eventually frees the pending jobs without updating the
> > > > > scheduler's internal pending list.
> > > > > 
> > > > > So yes, it does avoid memory leaks, but it also leaves the
> > > > > scheduler's internal structures in an invalid state, i.e. the
> > > > > pending list of the scheduler has pointers to already freed
> > > > > memory.
> > > > > 
> > > > > What if drm_sched_fini() starts touching the pending list? Then
> > > > > you'd end up with UAF bugs with this implementation. We cannot
> > > > > invalidate the scheduler's internal structures and yet call
> > > > > scheduler functions, e.g. drm_sched_fini(), subsequently.
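> > > > > 
> > > > > As a minimal sketch of the hazard (hypothetical names, not the
> > > > > actual mock scheduler code):
> > > > > 
> > > > >     /* Driver frees its jobs behind the scheduler's back... */
> > > > >     list_for_each_entry_safe(job, next, &drv->jobs, link)
> > > > >             kfree(job); /* still linked on sched->pending_list */
> > > > > 
> > > > >     /* ...so anything that later walks pending_list, e.g. a
> > > > >      * future drm_sched_fini(), dereferences freed memory. */
> > > > >     drm_sched_fini(&sched->base);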
> > > > > 
> > > > > Hence, the current implementation of the mock scheduler is
> > > > > fundamentally flawed. And so would be *every* driver that still
> > > > > has entries within the scheduler's pending list.
> > > > > 
> > > > > This is not a false positive; it already caught a real bug -- in
> > > > > the mock scheduler.
> > > > 
> > > > To avoid further splitting hairs on whether real bugs need to be
> > > > able to manifest or not, let's move past this with the conclusion
> > > > that there are two potential things to do here:
> > > 
> > > This is not about splitting hairs; it is about understanding that
> > > abusing knowledge about the internals of a component to clean
> > > things up is *never* valid.
> > > 
> > > > The first one is to either send separately or include in this
> > > > series something like:
> > > > 
> > > > diff --git a/drivers/gpu/drm/scheduler/tests/mock_scheduler.c b/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
> > > > index f999c8859cf7..7c4df0e890ac 100644
> > > > --- a/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
> > > > +++ b/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
> > > > @@ -300,6 +300,8 @@ void drm_mock_sched_fini(struct drm_mock_scheduler *sched)
> > > >                 drm_mock_sched_job_complete(job);
> > > >         spin_unlock_irqrestore(&sched->lock, flags);
> > > > 
> > > > +       drm_sched_fini(&sched->base);
> > > > +
> > > >         /*
> > > >          * Free completed jobs and jobs not yet processed by the DRM scheduler
> > > >          * free worker.
> > > > @@ -311,8 +313,6 @@ void drm_mock_sched_fini(struct drm_mock_scheduler *sched)
> > > > 
> > > >         list_for_each_entry_safe(job, next, &list, link)
> > > >                 mock_sched_free_job(&job->base);
> > > > -
> > > > -       drm_sched_fini(&sched->base);
> > > >  }
> > > > 
> > > >  /**
> > > > That should satisfy the requirement to "clear" memory about to be
> > > > freed and be 100% compliant with the drm_sched_fini() kerneldoc
> > > > (guideline b).
> > > > 
> > > > But the new warning from 3/5 here will still be there AFAICT.
> > > > Would you then agree it is a false positive?
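> > > > 
> > > > (A sketch of the kind of check 3/5 presumably adds to
> > > > drm_sched_fini(); not the patch's actual code:
> > > > 
> > > >     WARN_ON(!list_empty(&sched->pending_list));
> > > > )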
> > > 
> > > No, I do not agree.
> > > 
> > > Even if a driver does what you describe, it is not the correct
> > > thing to do, and having a warning call it out makes sense.
> > > 
> > > This way of cleaning things up entirely relies on knowing specific
> > > scheduler internals; if those change, the cleanup may fall apart.
> > > 
> > > > Secondly, the series should modify all drivers (including the
> > > > unit tests) which are known to trigger this false positive.
> > > 
> > > Again, there are no false positives. It is the scheduler that needs
> > > to call free_job() and perform other potential cleanups. You can't
> > > just stop the scheduler, leave it in an intermediate state and try
> > > to clean it up by hand, relying on knowledge about internals.
> > 
> > Sorry, I don't see the argument for the claim that it is relying on
> > the internals with the re-positioned drm_sched_fini() call. In that
> > case it is fully compliant with:
> > 
> > /**
> >  * drm_sched_fini - Destroy a gpu scheduler
> >  *
> >  * @sched: scheduler instance
> >  *
> >  * Tears down and cleans up the scheduler.
> >  *
> >  * This stops submission of new jobs to the hardware through
> >  * drm_sched_backend_ops.run_job(). Consequently, drm_sched_backend_ops.free_job()
> >  * will not be called for all jobs still in drm_gpu_scheduler.pending_list.
> >  * There is no solution for this currently. Thus, it is up to the driver to make
> >  * sure that:
> >  *
> >  *  a) drm_sched_fini() is only called after for all submitted jobs
> >  *     drm_sched_backend_ops.free_job() has been called or that
> >  *  b) the jobs for which drm_sched_backend_ops.free_job() has not been called
> >  *     after drm_sched_fini() ran are freed manually.
> >  *
> >  * FIXME: Take care of the above problem and prevent this function from leaking
> >  * the jobs in drm_gpu_scheduler.pending_list under any circumstances.
> > 
> > ^^^ recommended solution b).
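> > 
> > (For contrast, solution a) would amount to the driver blocking until
> > free_job() has run for every submitted job before tearing down, e.g.
> > with a hypothetical driver-side in-flight counter; a rough sketch:
> > 
> >     wait_event(drv->free_wq, atomic_read(&drv->jobs_in_flight) == 0);
> >     drm_sched_fini(&sched->base);
> > 
> > where free_job() decrements the counter and wakes free_wq.)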
> 
> This was introduced recently with commit baf4afc58314 ("drm/sched:
> Improve teardown documentation") and I do not agree with it. The
> scheduler should *not* make any promises about implementation details
> that enable drivers to abuse their knowledge of component internals.
> 
> This makes the problem *worse*, as it encourages drivers to rely on
> implementation details, making maintainability of the scheduler even
> worse.
> 
> For instance, what if I change the scheduler implementation such that
> for every entry in the pending_list the scheduler allocates another
> internal object for ${something}? Then drivers would already fall
> apart, leaking those internal objects.
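> 
> Purely illustrative, with a hypothetical 'internal' member:
> 
>     /* imagined scheduler change: per-job internal allocation */
>     job->internal = kzalloc(sizeof(*job->internal), GFP_KERNEL);
>     /* ...with the matching kfree() in the scheduler's free_job path */
> 
> A driver that frees jobs behind the scheduler's back would never reach
> that kfree() and thus leak every such object.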
> 
> Now, obviously that's pretty unlikely, but I assume you get the idea.
> 
> The b) paragraph in the drm_sched_fini() documentation should be
> removed for the given reasons.
> 
> AFAICS, since the introduction of this commit, driver implementations
> haven't changed in this regard, hence we should be good.
> 
> So, for me this doesn't change the fact that every driver
> implementation that just stops the scheduler at an arbitrary point in
> time and tries to clean things up manually, relying on knowledge about
> component internals, is broken.

To elaborate on that, this documentation has been written so that we at
least have *some* documentation about the problem, instead of just
letting new drivers run into the knife.

The commit explicitly introduced the FIXME, marking those two hacky
workarounds as undesirable.

But back then we couldn't fix the problem quickly, so it was either
document the issue at least a bit, or leave it completely undocumented.

P.

> 
> However, this doesn't mean we can't do a brief audit.
> 
> - Danilo

