Message-ID: <20201007103653.qjohhta7douhlb22@e107158-lin.cambridge.arm.com>
Date: Wed, 7 Oct 2020 11:36:53 +0100
From: Qais Yousef <qais.yousef@....com>
To: Rob Clark <robdclark@...il.com>
Cc: dri-devel <dri-devel@...ts.freedesktop.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
Tejun Heo <tj@...nel.org>, Tim Murray <timmurray@...gle.com>,
Daniel Vetter <daniel@...ll.ch>,
Rob Clark <robdclark@...omium.org>,
open list <linux-kernel@...r.kernel.org>,
Steven Rostedt <rostedt@...dmis.org>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>
Subject: Re: [PATCH v2 0/3] drm: commit_work scheduling
On 10/06/20 13:04, Rob Clark wrote:
> On Tue, Oct 6, 2020 at 3:59 AM Qais Yousef <qais.yousef@....com> wrote:
> >
> > On 10/05/20 16:24, Rob Clark wrote:
> >
> > [...]
> >
> > > > RT planning and partitioning is not an easy task for sure. You might want
> > > > to consider using affinities too to get stronger guarantees for some tasks
> > > > and prevent cross-talk.
> > >
> > > There is some cgroup stuff that is pinning SF and some other stuff to
> > > the small cores, fwiw.. I think the reasoning is that they shouldn't
> > > be doing anything heavy enough to need the big cores.
> >
> > Ah, so you're on a big.LITTLE type of system. I have done some work which
> > enables biasing RT tasks towards big cores and controlling the default boost
> > value if you have util_clamp and schedutil enabled. You can use util_clamp in
> > general to help with DVFS-related response time delays.
> >
> > I haven't done any work on picking a small core first and falling back to a
> > big one only if there's no other alternative.
> >
> > It'd be interesting to know how often you end up on a big core if you remove
> > the affinity. The RT scheduler picks the first CPU in the lowest priority mask,
> > so it should already be biased towards picking smaller cores first if they're
> > in the lowest priority mask (i.e. not running higher-priority RT tasks).
>
> fwiw, the issue I'm looking at is actually at the opposite end of the
> spectrum: less demanding apps that let cpus throttle down to low
> OPPs.. which stretches out the time taken at each step in the path
> to the screen (and seems to increase the odds that we hit priority
> inversion scenarios with SCHED_FIFO things stomping on important CFS
> things)
So you do have the problem of an RT task preempting an important CFS task.
>
> There is a *big* difference in # of cpu cycles per frame between
> highest and lowest OPP..
To combat DVFS-related delays, you can use util clamp (uclamp).
Hopefully this article helps explain it if you haven't come across it before:
https://lwn.net/Articles/762043/
You can use sched_setattr() to set SCHED_FLAG_UTIL_CLAMP_MIN for a task. This
guarantees that every time this task is running it will appear to have at least
this utilization value, so the schedutil governor (which must be used for this
to work) will pick the right performance point (OPP).
The scheduler will try its best to make sure that the task will run on a core
that meets the minimum requested performance point (hinted by setting
uclamp_min).
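For illustration, here is a minimal, untested sketch of how a userspace task
could request a uclamp_min for itself. There is no glibc wrapper for
sched_setattr(), so it goes through syscall() directly; the struct layout and
flag values below mirror the uapi headers, and the value 512 (half of
SCHED_CAPACITY_SCALE) is just an arbitrary example you'd tune for your
platform:

/* Sketch: raise uclamp_min of the calling task via sched_setattr() */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

/* Mirrors include/uapi/linux/sched.h */
#define SCHED_FLAG_KEEP_POLICY		0x08
#define SCHED_FLAG_KEEP_PARAMS		0x10
#define SCHED_FLAG_UTIL_CLAMP_MIN	0x20

/* Mirrors include/uapi/linux/sched/types.h */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;
	uint32_t sched_util_max;
};

int main(void)
{
	struct sched_attr attr = {
		.size		= sizeof(attr),
		/* Only touch the clamp; keep policy and params as they are */
		.sched_flags	= SCHED_FLAG_KEEP_POLICY |
				  SCHED_FLAG_KEEP_PARAMS |
				  SCHED_FLAG_UTIL_CLAMP_MIN,
		.sched_util_min	= 512,	/* example hint, valid range is 0..1024 */
	};

	/* pid 0 means the calling task */
	if (syscall(SYS_sched_setattr, 0, &attr, 0)) {
		perror("sched_setattr");
		return 1;
	}

	return 0;
}

A task can set this on itself (subject to any system-wide or cgroup uclamp
limits), and the hint only affects frequency selection when schedutil is the
cpufreq governor.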
Thanks
--
Qais Yousef