Message-ID: <ZT/uPhLK7gan61Ns@pollux>
Date:   Mon, 30 Oct 2023 18:56:14 +0100
From:   Danilo Krummrich <dakr@...hat.com>
To:     Christian König <christian.koenig@....com>
Cc:     Boris Brezillon <boris.brezillon@...labora.com>, airlied@...il.com,
        daniel@...ll.ch, matthew.brost@...el.com, faith@...strand.net,
        luben.tuikov@....com, dri-devel@...ts.freedesktop.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH drm-misc-next v3] drm/sched: implement dynamic job-flow
 control

Hi Christian,

On Mon, Oct 30, 2023 at 08:38:45AM +0100, Christian König wrote:
> Hi Boris,
> 
> Am 27.10.23 um 12:17 schrieb Boris Brezillon:
> > Hi Christian,
> > 
> > On Fri, 27 Oct 2023 11:06:44 +0200
> > Christian König <christian.koenig@....com> wrote:
> > 
> > > Am 27.10.23 um 10:22 schrieb Boris Brezillon:
> > > > On Fri, 27 Oct 2023 09:44:13 +0200
> > > > Christian König<christian.koenig@....com>  wrote:
> > > > > Am 27.10.23 um 09:39 schrieb Boris Brezillon:
> > > > > > On Fri, 27 Oct 2023 09:35:01 +0200
> > > > > > Christian König<christian.koenig@....com>   wrote:
> > > > > > > Am 27.10.23 um 09:32 schrieb Boris Brezillon:
> > > > > > > > On Fri, 27 Oct 2023 09:22:12 +0200
> > > > > > > > Christian König<christian.koenig@....com>   wrote:
> > > > > > > > > > +
> > > > > > > > > > +	/**
> > > > > > > > > > +	 * @update_job_credits: Called once the scheduler is considering this
> > > > > > > > > > +	 * job for execution.
> > > > > > > > > > +	 *
> > > > > > > > > > +	 * Drivers may use this to update the job's submission credits, which is
> > > > > > > > > > +	 * useful to e.g. deduct the number of native fences that have been
> > > > > > > > > > +	 * signaled in the meantime.
> > > > > > > > > > +	 *
> > > > > > > > > > +	 * The callback must either return the new number of submission credits
> > > > > > > > > > +	 * for the given job, or zero if no update is required.
> > > > > > > > > > +	 *
> > > > > > > > > > +	 * This callback is optional.
> > > > > > > > > > +	 */
> > > > > > > > > > +	u32 (*update_job_credits)(struct drm_sched_job *sched_job);
> > > > > > > > > Why do we need an extra callback for this?
> > > > > > > > > 
> > > > > > > > > Just document that prepare_job() is allowed to reduce the number of
> > > > > > > > > credits the job might need.
> > > > > > > > ->prepare_job() is called only once if the returned fence is NULL, but
> > > > > > > > we need this credit-update to happen every time a job is considered for
> > > > > > > > execution by the scheduler.
> > > > > > > But the job is only considered for execution once. How do you see that
> > > > > > > this is called multiple times?
> > > > > > Nope, it's not. If drm_sched_can_queue() returns false, the scheduler
> > > > > > will go look for another entity that has a job ready for execution, and
> > > > > > get back to this entity later, and test drm_sched_can_queue() again.
> > > > > > Basically, any time drm_sched_can_queue() is called, the job credits
> > > > > > update should happen, so we have an accurate view of how many credits
> > > > > > this job needs.
> > > > > Well, that is the handling which I already rejected because it creates
> > > > > unfairness between processes. When you consider the credits needed
> > > > > *before* scheduling, jobs with a lower credit count are always preferred
> > > > > over jobs with a higher credit count.
> > > > My bad, it doesn't pick another entity when an entity with a
> > > > ready job that doesn't fit the queue is found, it just bails out from
> > > > drm_sched_rq_select_entity_rr() and returns NULL (AKA: no ready entity
> > > > found). But we still want to update the job credits before checking if
> > > > the job fits or not (next time this entity is tested).
> > > Why? I only see a few possibilities here:
> > > 
> > > 1. You need to wait for submissions to the same scheduler to finish
> > > before the one you want to push to the ring.
> > >       This case can be avoided by trying to cast the dependency fences to
> > > drm_sched_fences and checking if they are already scheduled.
> > > 
> > > 2. You need to wait for submissions to a different scheduler instance
> > > and in this case you should probably return that as dependency instead.
> > It's already described as a dependency, but native dependency waits are
> > deferred to the FW (we wait on scheduled to run the job, not finished).
> > The thing is, after ->prepare_job() returned NULL (no non-native deps
> > remaining), and before ->run_job() gets called, there might be several
> > of these native deps that get signaled, and we're still considering
> > job->submission_credits as unchanged, when each of these signaled
> > fences could have reduced the job credits, potentially allowing it to be
> > submitted sooner.
> 
> Ah, ok that at least clears up your intentions here.
> 
> The question is whether that is really important for you. I mean, you just
> seem to fill up more of the ring buffer.
> 
> > 
> > > So to me it looks like when prepare_job() is called because it is
> > > selected as next job for submission you should already know how many
> > > credits it needs.
> > You know how many credits it needs when ->prepare_job() is called, but
> > if this number is too high, the entity will not be picked, and next
> > time it's checked, you'll still consider the job credits at the time
> > ->prepare_job() was called, which might differ from the current job
> > credits (native fences might have been signaled in the
> > meantime, and the corresponding waits could be evicted).
> > 
> > > > > What you can do is to look at the credits of a job *after* it was picked
> > > > > up for scheduling so that you can schedule more jobs.
> > > > Sure, but then you might further delay your job if something made it
> > > > smaller (ie. native fences got signaled) between ->prepare_job() and
> > > > drm_sched_can_queue(). And any new drm_sched_can_queue() test would
> > > > just see the old credits value.
> > > > 
> > > > Out of curiosity, what are you worried about with this optional
> > > > ->update_job_credits() call in the drm_sched_can_queue() path? Is the
> > > > if (sched->update_job_credits) overhead considered too high for drivers
> > > > that don't need it?
> > > Since the dma_fences are also used for resource management the scheduler
> > > is vital for correct system operation.
> > > 
> > > We had massive problems because people tried to over-optimize the
> > > dma_fence handling, which led to memory corruptions that were very
> > > hard to narrow down.
> > > 
> > > So for every change you come up with here you need to have a very good
> > > justification. And just saving a bit of ring buffer space is
> > > certainly not one of them.
> > I fail to see how calling ->update_job_credits() changes the scheduler
> > behavior, or how it relates to the sort of memory corruption you're
> > referring to.
> 
> Yeah, you are right that's not even remotely related.
> 
> > And yes, we can live with the overhead of making jobs
> > slightly bigger than they actually are, thus potentially delaying their
> > execution. It's just that I don't quite understand the rationale behind
> > this conservatism, as I don't really see what negative impact this extra
> > ->update_job_credits() call in the credit checking path has, other than
> > the slight overhead of an if-check for drivers that don't need it.
> 
> From experience it showed that we should not make the scheduler more
> complicated than necessary. And I still think that the ring buffers only
> need to be filled enough to keep the hardware busy.

Right, and this callback contributes to exactly that.

I don't really think there is much to worry about in terms of introducing more
complexity. The implementation behind this callback is fairly trivial - it's
simply called right before we check whether a job fits on the ring, to fetch
the job's actual size.
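
To illustrate (this is a minimal, self-contained C sketch with made-up
structure layouts and field names, not the actual drm_sched code): the
callback sits right before the fit check, and a return value of zero means
"keep the current credit count".

```c
#include <stddef.h>

typedef unsigned int u32;

struct drm_sched_job;

/* Simplified stand-in for the scheduler ops; only the callback at issue. */
struct drm_sched_ops {
	/*
	 * Optional. Returns the job's new credit count, or 0 if the
	 * current count should be kept as-is.
	 */
	u32 (*update_job_credits)(struct drm_sched_job *sched_job);
};

struct drm_sched_job {
	u32 credits;                     /* last known size of the job */
	const struct drm_sched_ops *ops;
};

struct drm_gpu_scheduler {
	u32 credit_limit;                /* ring capacity in credits */
	u32 credit_count;                /* credits currently in flight */
};

/*
 * Called every time the job is considered for execution: refresh the
 * job's credits first, then test whether it fits on the ring.
 */
static int drm_sched_can_queue(struct drm_gpu_scheduler *sched,
			       struct drm_sched_job *job)
{
	if (job->ops->update_job_credits) {
		u32 credits = job->ops->update_job_credits(job);

		if (credits)
			job->credits = credits;
	}

	return sched->credit_count + job->credits <= sched->credit_limit;
}
```

Drivers that don't set the callback only pay the cost of one NULL check per
fit test.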

I would agree if the implementation were bulky, tricky and asking for a
compromise. But since it actually is simple and straightforward, I really
think that if we implement some kind of dynamic job-flow control, it should be
implemented as accurately as possible rather than half-baked.

> 
> If this here has some measurable positive effect then yeah we should
> probably do it, but as long as it's only nice to have I have some objections
> to that.

I can't answer this, since Nouveau doesn't support native fence waits.
However, I guess it depends on how many native fences a job carries. So, if
we had two jobs, each carrying a lot of native fences but not a lot of actual
work, I can very well imagine that over-accounting can have a measurable
impact.
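
To make the over-accounting concrete, here is a toy calculation (the numbers
and the one-credit-per-pending-wait model are made up for illustration, not
taken from any driver):

```c
typedef unsigned int u32;

/*
 * Toy model: a job costs its payload credits plus one credit per native
 * fence it still has to wait on in the ring. Fences that signal before
 * submission no longer need a wait instruction, so the job shrinks.
 */
static u32 job_credits(u32 payload_credits, u32 pending_native_fences)
{
	return payload_credits + pending_native_fences;
}

/* Returns 1 if the job fits in the remaining ring space, 0 otherwise. */
static int job_fits(u32 free_credits, u32 payload, u32 pending)
{
	return job_credits(payload, pending) <= free_credits;
}
```

With 8 free credits, a job of 4 payload credits and 6 pending fences (10
total) does not fit; once 3 of those fences signal, it is down to 7 and could
be submitted - but only if the credits are refreshed before the fit check.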

I just wonder if we really want to ask for real measurements given that the
optimization is rather trivial and cheap and we already have enough evidence
that it makes sense conceptually.

- Danilo

> 
> Regards,
> Christian.
> 
> > 
> > Regards,
> > 
> > Boris
> 
