Message-ID: <1c0b9e1d-c990-437c-a8ba-5bb58e5872a0@amd.com>
Date: Thu, 28 Sep 2023 10:44:47 -0400
From: Luben Tuikov <luben.tuikov@....com>
To: Boris Brezillon <boris.brezillon@...labora.com>,
Christian König <christian.koenig@....com>
Cc: Danilo Krummrich <dakr@...hat.com>, airlied@...il.com,
daniel@...ll.ch, matthew.brost@...el.com,
faith.ekstrand@...labora.com, dri-devel@...ts.freedesktop.org,
nouveau@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
Donald Robson <Donald.Robson@...tec.com>,
Frank Binns <Frank.Binns@...tec.com>,
Sarah Walker <sarah.walker@...tec.com>
Subject: Re: [PATCH drm-misc-next 1/3] drm/sched: implement dynamic job flow
control
On 2023-09-28 04:02, Boris Brezillon wrote:
> On Wed, 27 Sep 2023 13:54:38 +0200
> Christian König <christian.koenig@....com> wrote:
>
>> Am 26.09.23 um 09:11 schrieb Boris Brezillon:
>>> On Mon, 25 Sep 2023 19:55:21 +0200
>>> Christian König <christian.koenig@....com> wrote:
>>>
>>>> Am 25.09.23 um 14:55 schrieb Boris Brezillon:
>>>>> +The Imagination team, who are probably interested too.
>>>>>
>>>>> On Mon, 25 Sep 2023 00:43:06 +0200
>>>>> Danilo Krummrich <dakr@...hat.com> wrote:
>>>>>
>>>>>> Currently, job flow control is implemented simply by limiting the number
>>>>>> of jobs in flight. Therefore, a scheduler is initialized with a
>>>>>> submission limit that corresponds to a certain number of jobs.
>>>>>>
>>>>>> This implies that for each job drivers need to account for the maximum
>>>>>> possible job size in order not to overflow the ring buffer.
>>>>>>
>>>>>> However, there are drivers, such as Nouveau, where the job size has a
>>>>>> rather large range. For such drivers it can easily happen that job
>>>>>> submissions not even filling the ring by 1% can block subsequent
>>>>>> submissions, which, in the worst case, can lead to the ring running dry.
>>>>>>
>>>>>> In order to overcome this issue, allow for tracking the actual job size
>>>>>> instead of the number of jobs. To do so, add a field to track a job's
>>>>>> submission units, which represents the number of units a job contributes
>>>>>> to the scheduler's submission limit.
>>>>> As mentioned earlier, this might allow some simplifications in the
>>>>> PowerVR driver where we do flow-control using a dma_fence returned
>>>>> through ->prepare_job(). The only thing that'd be missing is a way to
>>>>> dynamically query the size of a job (a new hook?), instead of having the
>>>>> size fixed at creation time, because PVR jobs embed native fence waits,
>>>>> and the number of native fences will decrease if some of these fences
>>>>> are signalled before ->run_job() is called, thus reducing the job size.
>>>> Exactly that is a bit questionable, since it allows the device
>>>> to postpone jobs indefinitely.
>>>>
>>>> It would be good if the scheduler were able to validate, when the job is
>>>> pushed into the entity, whether it will ever be able to run it.
>>> Yes, we do that already. We check that the immutable part of the job
>>> (everything that's not a native fence wait) fits in the ringbuf.
>>
>> Yeah, but thinking more about it, there might be really bad side effects.
>> We shouldn't use a callback or job credits, because either might badly
>> influence fairness between entities.
>>
>> In other words, when one entity always submits large jobs and another
>> always small ones, the scheduler would prefer the one which submits
>> the smaller ones because they are easier to fit into the ring buffer.
>
> Yeah, I was assuming SINGLE_ENTITY sched policy here. As soon as we
Right--it's a job-FIFO.
> have a ring buffer that's shared by several entities it becomes tricky
> to be fair if the job sizes are dynamic. In the multi-entity case, the
Right--for the job credit scheme to work, you need to use a job-FIFO
at the DRM scheduler level. (Once the job is received into the hardware,
the firmware/hardware may choose to reorder/parallelize execution of
several pending jobs, but that's beyond the scope of this thread.)
> ->prepare_job()+dma_fence approach addresses the problem, because the
> first job to call ->prepare_job() and add its fence to the list of jobs
> waiting for ringbuf space will also be the first one to be checked when
> some space is freed, and if there's still not enough space, we won't
> test other jobs coming after in the list.
Right--you shouldn't.
>
>>
>> What we can do is the following:
>> 1. The scheduler has some initial credits it can use to push jobs.
>> 2. Each scheduler fence (and *not* the job) has a credits field saying how
>> much it will use.
>
> When are the credits assigned to the scheduler fence? As said earlier,
> on PowerVR, we might start with N credits when the job is queued, and
> (N - M) when it gets submitted, so we need a hook to force a
> recalculation every time the scheduler is considering the job for
> submission.
"Credits" is something the firmware/hardware engineers tell you. It's a
known fixed quantity at ASIC boot. It changes only as you submit jobs
into the hardware/firmware.
No hook, but rather a peek. You'd peek at the hardware to figure out
how many credits you have available to submit new jobs, or you'd keep
a running count of this quantity--depending on how the ASIC works.
When a job completes, you add its credits back to the available credit
count kept in the scheduler struct, for instance (or you may ask the
hardware how many are available now). Then, if the next job to be
pushed--which is known from the outset, since we use a job-FIFO--uses no
more credits than are available, you push the job and subtract its credits
from the availability count (or, again, peek at the hardware for that count).
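To make this concrete, here is a minimal sketch of the push path. Every
name below is illustrative only--none of this is existing drm_sched API:

struct sched {
	unsigned int available_credits;	/* credits the ring can still absorb */
	/* ... job-FIFO, locks, etc. ... */
};

struct job {
	unsigned int credits;	/* credits this job consumes when pushed */
};

void push_to_hw(struct job *job);	/* hands the job off to the ring */

/* Push path: only the head of the job-FIFO is ever considered. */
bool try_push_job(struct sched *sched, struct job *job)
{
	if (job->credits > sched->available_credits)
		return false;	/* sleep until a pending job completes */
	sched->available_credits -= job->credits;
	push_to_hw(job);
	return true;
}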
>
>> 3. After letting a job run, the credits of its fence are subtracted
>> from the available credits of the scheduler.
>
> Uh, what happens if the job you picked makes the scheduler's
> available credits go below zero? I guess that's relying on the
> fact you only expose half of the ring buffer capacity, thus enforcing
> that a job is never bigger than half the ring buffer. The latter is
> acceptable, but the fact that your utilization is then half the maximum
> capacity is not great IMHO.
The credit count you keep should never go negative from pushing jobs to
the hardware. If it does, the software design is not consistent.
Hardware/firmware engineers will not appreciate that only half the credits
are being exposed due to poor software design, and neither would the sales
team.
(See also message-id: 61c0d884-b8d4-4109-be75-23927b61cb52@....com.)
>
>> 4. The scheduler can keep running jobs as long as it has a positive
>> credit count.
>
> Why not just check that 'next_job_credits < available_credits', and
Yes, see message-id: 61c0d884-b8d4-4109-be75-23927b61cb52@....com.
> force the scheduler to go to sleep if that's not the case. When it's
> woken up because the parent fence of some previous job is signaled, we
"pending job"
> re-evaluate the condition, and go back to sleep if we still don't have
> enough credits. In the PowerVR case, I'd need a way to recalculate the
> number of credits every time the condition is re-evaluated, but that's
> just a matter of adding an optional hook to force the re-calculation.
Right.
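Reusing the illustrative types from the sketch above--and assuming a
hypothetical, optional ->update_job_credits() hook for the PowerVR
recalculation--the submission loop could look roughly like this:

struct sched_ops {
	/* Optional: recompute a job's credits just before submission,
	 * e.g. because some native fence waits have already signaled. */
	unsigned int (*update_job_credits)(struct job *job);
};

struct job *peek_job_fifo(struct sched *sched);	/* head of the job-FIFO */
void wait_for_job_completion(struct sched *sched);

void submit_loop(struct sched *sched, const struct sched_ops *ops)
{
	for (;;) {
		struct job *job = peek_job_fifo(sched);

		if (!job)
			break;	/* nothing queued */

		if (ops->update_job_credits)
			job->credits = ops->update_job_credits(job);

		if (try_push_job(sched, job))
			continue;	/* pushed; consider the next job */

		wait_for_job_completion(sched);	/* sleep, then re-check */
	}
}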
>
>> 5. When the credit count becomes negative it goes to sleep until a
>> scheduler fence signals and the count becomes positive again.
>>
>> This way jobs are handled equally, and you can still push jobs up to at
>> least half your ring buffer size
>
> I think that's the aspect I'm not fond of. I don't see why we'd want to
> keep half of the ring buffer unused. I mean, there might be good
We don't. We absolutely don't. Hardware engineers would absolutely
not appreciate this, and you shouldn't write the code to do that.
> reasons to do so, if, for instance, the same ring buffer is used for
> some high-priority commands sent by the kernel or something like that.
Ideally, you'd want a separate ring with its own credits for high-priority
jobs, since a high-priority job can be as large as the credit capacity,
which would force the code to insert it at the head of the FIFO. Anyway,
I digress.
> But it looks like a driver-specific decision to not fully use the ring
> buffer.
The full potential of the hardware should be utilized at any point in time.
>
>> and you should be able to handle your
>> PowerVR case by calculating the credits you actually used in your
>> run_job() callback.
>
> Hm, ideally the credits adjustment should happen every time the
> scheduler is considering a job for submission (every time it got
> unblocked because available credits got increased), otherwise you might
> wait longer than strictly needed if some native fences got signaled in
> the meantime.
Ideally, at the time you're considering whether you can push a job to the
hardware, you should have the credit capacity ready--i.e. you should just
read it off a variable/register/etc., possibly atomically. "Calculating"
anything might induce delays and invite future temptation to add more code
to do more things there, thus degrading the design.
You'd calculate the credit capacity when a pending job completes, i.e. when
it returns to the scheduler from the hardware.
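For instance, in the same illustrative terms as the sketches above:

void wake_scheduler(struct sched *sched);	/* re-checks the FIFO head */

/* Completion path: called when a pending job's hardware fence signals. */
void job_done(struct sched *sched, struct job *job)
{
	sched->available_credits += job->credits;	/* return its credits */
	wake_scheduler(sched);
}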
--
Regards,
Luben