Message-ID: <20221010144650.fjwhjdbqqaxz4sow@wubuntu>
Date: Mon, 10 Oct 2022 15:46:50 +0100
From: Qais Yousef <qais.yousef@....com>
To: Joel Fernandes <joel@...lfernandes.org>
Cc: Youssef Esmat <youssefesmat@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Steven Rostedt <rostedt@...dmis.org>, juri.lelli@...hat.com,
vincent.guittot@...aro.org,
Dietmar Eggemann <dietmar.eggemann@....com>,
Thomas Gleixner <tglx@...utronix.de>, bristot@...hat.com,
clark.williams@...il.com, bigeasy@...utronix.de,
"Paul E. McKenney" <paulmck@...nel.org>
Subject: Re: Sum of weights idea for CFS PI
On 10/08/22 11:04, Joel Fernandes wrote:
>
>
> > On Oct 6, 2022, at 3:40 PM, Youssef Esmat <youssefesmat@...gle.com> wrote:
> >
> [..]
> >>
> >>> Anyway - just trying to explain how I see it and why C is unlikely to be
> >>> taking too much time. I could be wrong. As Youssef said, I think there's
> >>> no fundamental problem here.
> >>
> >> I know on Android, where they use a smaller HZ, the large tick causes
> >> lots of problems for large nice deltas. For example, if a highly niced
> >> task was to be preempted at 1ms but is preempted at 3ms instead, then the
> >> less-niced task will not be so nice (even less nice than it promised to
> >> be) any more because of the 2ms boost that the higher niced task got.
> >> This can lead to the sched_latency being thrown out of the window. Not
> >> adjusting the weights properly can potentially make that problem much
> >> worse IMO.
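
(To put rough numbers on the example above, assuming a tick period of about
3ms, i.e. a small HZ:

	intended preemption point      :  1ms
	actual preemption at next tick : ~3ms
	unearned extra runtime         : ~2ms per occurrence

so every time this happens the highly niced task eats ~2ms out of the
sched_latency window that was meant for the other tasks.)
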
> >
> > Once C releases the lock it should get adjusted, and A will get adjusted
> > as well, regardless of the tick. At the point where we adjust the weights
> > we have a chance to check for preemption and cause a reschedule.
>
> Yes, but the lock can be held for a potentially long time (and it could even
> be a user space lock). I’m more comfortable with Peter’s PE patch, which
> seems a more generic solution than sum of weights, if we can get it working.
> I’m studying Connor’s patch set now…
The 2 solutions are equivalent AFAICT.

With summation:

	A        ,  B       ,  C       ,  D
	sleeping ,  running ,  running ,  running
	-        ,  1/5     ,  3/5     ,  1/5

Where we'll treat A as running but donate its bandwidth to C, the mutex owner.

With PE:

	A        ,  B       ,  C       ,  D
	running  ,  running ,  running ,  running
	2/5      ,  1/5     ,  1/5     ,  1/5

Where A will donate its execution context to C, the mutex owner.

In both cases we should end up with the same distribution as if neither A nor
C ever go to sleep because of holding the mutex.
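
To make that concrete, here is a toy userspace calculation (not kernel code;
the weights are illustrative assumptions, with A carrying twice the weight of
B, C and D so that they match the fractions above):

	#include <stdio.h>

	int main(void)
	{
		/* illustrative weights: A has twice the weight of B, C and D */
		int w_a = 2, w_b = 1, w_c = 1, w_d = 1;
		int total = w_a + w_b + w_c + w_d;

		/* summation: A sleeps on the mutex, its weight moves to owner C */
		printf("summation: B=%d/%d C=%d/%d D=%d/%d\n",
		       w_b, total, w_c + w_a, total, w_d, total);

		/* PE: A stays runnable, C runs using A's execution context */
		printf("PE:        A=%d/%d B=%d/%d C=%d/%d D=%d/%d\n",
		       w_a, total, w_b, total, w_c, total, w_d, total);

		return 0;
	}

Both lines print 1/5 for B and D; the bandwidth only ever moves between A
and C.
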
I still can't see how B's and D's fairness would be impacted: the solution to
the problem is to never treat the waiter as sleeping and to let the owner run
for longer, but only within the limit of what the waiter is allowed to run
for. AFAICS, both solutions maintain this relationship.
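
As a minimal sketch of that limit (purely illustrative, not taken from either
patch set; the names are made up), whatever the owner consumes on the waiter's
behalf would be charged against the waiter's own entitlement:

	/* illustrative only -- not from the summation or PE patches */
	struct donated_bw {
		unsigned long long entitlement_ns; /* what the waiter may run for */
		unsigned long long consumed_ns;    /* used by the owner on its behalf */
	};

	/* the owner keeps the borrowed bandwidth only while some is left */
	static int owner_may_keep_borrowing(const struct donated_bw *d)
	{
		return d->consumed_ns < d->entitlement_ns;
	}

With that accounting B and D compete against the same total weight as before,
which is why I wouldn't expect their fairness to change.
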
Thanks
--
Qais Yousef