Message-ID: <20200306024116.GA16400@ziqianlu-desktop.localdomain>
Date: Fri, 6 Mar 2020 10:41:16 +0800
From: Aaron Lu <aaron.lwe@...il.com>
To: Aubrey Li <aubrey.intel@...il.com>
Cc: Phil Auld <pauld@...hat.com>,
Vineeth Remanan Pillai <vpillai@...italocean.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Julien Desfossez <jdesfossez@...italocean.com>,
Nishanth Aravamudan <naravamudan@...italocean.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Paul Turner <pjt@...gle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
Dario Faggioli <dfaggioli@...e.com>,
Frédéric Weisbecker <fweisbec@...il.com>,
Kees Cook <keescook@...omium.org>,
Greg Kerr <kerrnel@...gle.com>,
Valentin Schneider <valentin.schneider@....com>,
Mel Gorman <mgorman@...hsingularity.net>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [RFC PATCH v4 00/19] Core scheduling v4
On Thu, Mar 05, 2020 at 09:45:15PM +0800, Aubrey Li wrote:
> On Fri, Feb 28, 2020 at 10:54 AM Aaron Lu <aaron.lwe@...il.com> wrote:
> >
> > When the core wide weight is somewhat balanced, yes I definitely agree.
> > But when core wide weight mismatch a lot, I'm not so sure since if these
> > high weight task is spread among cores, with the feature of core
> > scheduling, these high weight tasks can get better performance.
>
> It depends.
>
> Say TaskA (cookie 1) and TaskB (cookie 1) have high weight,
> TaskC (cookie 2) and TaskD (cookie 2) have low weight.
> And we have two cores with 4 CPUs.
>
> If we dispatch
> - TaskA and TaskB on Core0,
> - TaskC and TaskD on Core1,
>
> with coresched enabled, all 4 tasks can run all the time.
Although all tasks get CPU time, TaskA and TaskB are competing for
hardware resources and will run slower.
> But if we dispatch
> - TaskA on Core0.CPU0, TaskB on Core1.CPU2,
> - TaskC on Core0.CPU1, TaskD on Core1.CPU3,
>
> with coresched enabled, when TaskC is running, TaskA will be forced
> off CPU and replaced with a forced idle thread.
Not likely to happen, since TaskA's and TaskB's shares will normally be
a lot higher, making sure they get the CPU most of the time.
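
For experiments, a placement like the second one above could be forced
with plain sched_setaffinity(); a minimal userspace sketch (the pids
are placeholders and the cookies are assumed to be already assigned
through the patchset's tagging interface):

#define _GNU_SOURCE
#include <sched.h>
#include <sys/types.h>

static int pin_to_cpu(pid_t pid, int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	/* Restrict the task to a single logical CPU. */
	return sched_setaffinity(pid, sizeof(set), &set);
}

/*
 * Second layout: the two high weight tasks on different cores.
 *   pin_to_cpu(taskA_pid, 0); pin_to_cpu(taskC_pid, 1);   Core0
 *   pin_to_cpu(taskB_pid, 2); pin_to_cpu(taskD_pid, 3);   Core1
 */
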
>
> Things get worse if TaskA and TaskB share some data and can get
> benefit from the core level cache.
That's a good point and hard to argue with.
I'm mostly considering colocating redis-server (the main workload) with
other compute intensive workloads. redis-server can be idle most of the
time but needs every hardware resource when it runs to meet its latency
and throughput requirements. Tests on my side show redis-server's
throughput can be about 30% lower when two redis-servers run on the
same core (throughput is about 80000 when it runs exclusively on a core
vs. about 56000 when it runs with the sibling thread busy), IIRC.
So my use case here is that I don't really care about the low weight
task's performance when the high weight task demands CPU. I understand
that there will be other use cases that also care about the low weight
task's performance. So what I have done is to make the two tasks'
weight difference as large as possible to signal that the low weight
task is not important; maybe I can also try to tag low weight tasks as
SCHED_IDLE ones and then we can happily sacrifice the SCHED_IDLE tasks'
performance?
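
For reference, tagging a task SCHED_IDLE from userspace only needs the
existing sched_setscheduler() interface; a minimal sketch (the pid is a
placeholder):

#define _GNU_SOURCE
#include <sched.h>
#include <sys/types.h>

/* Demote an unimportant task to SCHED_IDLE (priority must be 0). */
static int make_sched_idle(pid_t pid)
{
	struct sched_param sp = { .sched_priority = 0 };

	return sched_setscheduler(pid, SCHED_IDLE, &sp);
}
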
> > So this appeared to me like a question of: is it desirable to protect/enhance
> > high weight task performance in the presence of core scheduling?
>
> This sounds to me like a policy vs. mechanism question. Do you have
> any idea how to spread high weight tasks among the cores with
> coresched enabled?
Yes, I would like to get us on the same page about the expected
behaviour before jumping to the implementation details. As for how to
achieve that: I'm thinking about making the load balanced core wide, so
that high weight tasks spread across different cores. This isn't just
about load balancing; the initial task placement will also need to be
considered, of course, especially if the high weight task only runs for
a small period.
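
As a rough sketch of that direction (core_wide_load() below is a
made-up helper, not something in the current patchset), the load
comparison would be done at core granularity:

static unsigned long core_wide_load(int cpu)
{
	unsigned long load = 0;
	int i;

	/* Sum all SMT siblings so the core is balanced as one unit. */
	for_each_cpu(i, cpu_smt_mask(cpu))
		load += cpu_rq(i)->cfs.avg.load_avg;

	return load;
}

Both load balancing and the initial placement in select_task_rq_fair()
could then prefer the core with the smaller core wide load for a waking
high weight task.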