Message-ID: <7667ded2e740714b0fd8bf82c258359e326c9765.camel@surriel.com>
Date: Mon, 02 Sep 2019 21:44:30 -0400
From: Rik van Riel <riel@...riel.com>
To: Dietmar Eggemann <dietmar.eggemann@....com>,
linux-kernel@...r.kernel.org
Cc: kernel-team@...com, pjt@...gle.com, peterz@...radead.org,
mingo@...hat.com, morten.rasmussen@....com, tglx@...utronix.de,
mgorman@...hsingularity.net, vincent.guittot@...aro.org
Subject: Re: [PATCH RFC v4 0/15] sched,fair: flatten CPU controller runqueues

On Mon, 2019-09-02 at 12:53 +0200, Dietmar Eggemann wrote:
> On 22/08/2019 04:17, Rik van Riel wrote:
> > My main TODO items for the next period of time are likely going to
> > be testing, testing, and testing. I hope to flush out any corner
> > cases I can find, make sure performance does not regress with any
> > workload, and hopefully improves for some.
>
> I did some testing with a small & simple rt-app based test-case:
>
> 2 CPUs (rq->cpu_capacity_orig=1024), CPUfreq performance governor
>
> 2 taskgroups /tg0 and /tg1
>
> 6 CFS tasks (periodic, 8/16ms (runtime/period))
>
> /tg0 (cpu.shares=1024) ran 4 tasks and /tg1 (cpu.shares=1024) ran 2
> tasks
>
> (arm64 defconfig with !CONFIG_NUMA_BALANCING,
> !CONFIG_SCHED_AUTOGROUP)
>
> ---
>
> v5.2:
>
> The 2 /tg1 tasks ran 8/16ms. The 4 /tg0 tasks ran 4/16ms in the
> beginning and then 8/16ms after the 2 /tg1 tasks finished.
>
> ---
>
> v5.2 + v4:
>
> There is no runtime/period pattern visible anymore. I see a lot of
> extra wakeup latency for those tasks though.
>
> v5.2 + (v4 without 07/15, 08/15, 15/15) didn't change much.
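
For what it's worth, the v5.2 numbers above line up with plain
hierarchical fair-share arithmetic; a quick sketch (illustrative
Python, not kernel code):

```python
# Hierarchical fair-share arithmetic for the test case above (illustrative,
# not kernel code). Two groups with equal cpu.shares split the 2 CPUs
# evenly, and each group's bandwidth is split evenly among its tasks.
CPUS = 2
PERIOD_MS = 16
groups = {"tg0": 4, "tg1": 2}          # group -> number of runnable tasks

group_share = CPUS / len(groups)       # one CPU's worth per group

results = {}
for tg, ntasks in groups.items():
    runtime_ms = group_share / ntasks * PERIOD_MS
    results[tg] = runtime_ms
    print(f"{tg}: {runtime_ms:g}/{PERIOD_MS}ms per task")
```

which reproduces the observed 4/16ms for the /tg0 tasks and 8/16ms for
the /tg1 tasks while both groups are busy.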

One thing to keep in mind is that with the hierarchical
CPU controller code, you are always either comparing
tg0 and tg1 (of equal priority), or tasks of equal priority,
once the load balancer has equalized the load between CPUs.

With the flat CPU controller, the preemption code is comparing
sched_entities with other sched_entities that have 2x the
priority, similar to a nice level 0 entity compared against a
nice level ~3 task.
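
That "2x weight is about 3 nice levels" falls out of the kernel's
sched_prio_to_weight[] table in kernel/sched/core.c, where adjacent
nice levels differ by a factor of ~1.25, so 1.25^3 ≈ 1.95; a quick
check:

```python
# Entries from the kernel's sched_prio_to_weight[] table.
NICE_0_WEIGHT = 1024
NICE_3_WEIGHT = 526

ratio = NICE_0_WEIGHT / NICE_3_WEIGHT
print(f"nice 0 / nice 3 weight ratio: {ratio:.2f}")   # ~1.95, i.e. about 2x
```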

I do not know whether the code has ever given a predictable
scheduling pattern when the CPU is fully loaded with a mix of
different priority tasks that each want a 50% duty cycle.
But maybe it has, and I need to look into that :)
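
Bandwidth-wise, at least, both duty cycles still fit: with
proportional shares, a heavier entity that only wants 50% leaves its
surplus to the lighter one. A toy water-filling sketch (illustrative
Python, demand-capped proportional shares; nothing from the actual
scheduler):

```python
# Toy demand-capped proportional-share ("water-filling") allocation:
# each task is entitled to CPU time in proportion to its weight, never
# takes more than it demands, and slack is redistributed by weight.
def allocate(weights, demands, capacity=1.0):
    alloc = {t: 0.0 for t in weights}
    active = set(weights)
    while active and capacity > 1e-9:
        total_w = sum(weights[t] for t in active)
        # tasks whose remaining demand fits inside their current share
        capped = [t for t in active
                  if demands[t] - alloc[t] <= capacity * weights[t] / total_w]
        if not capped:
            # everyone is CPU-bound: hand out the rest proportionally
            for t in active:
                alloc[t] += capacity * weights[t] / total_w
            break
        for t in capped:
            capacity -= demands[t] - alloc[t]
            alloc[t] = demands[t]
            active.remove(t)
    return alloc

# nice 0 vs nice ~3 weights, each task wanting an 8ms/16ms duty cycle
print(allocate({"nice0": 1024, "nice3": 526},
               {"nice0": 0.5, "nice3": 0.5}))
# -> both end up at 0.5: the weight gap shows up as latency, not bandwidth
```

So both 50% duty cycles fit; the open question is the time pattern
within each period, which is exactly where the extra wakeup latency
shows up.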

Figuring out exactly what the preemption code should do might
be a good discussion for Plumbers, too.
--
All Rights Reversed.