Message-ID: <20190426084222.GC126896@gmail.com>
Date: Fri, 26 Apr 2019 10:42:22 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Aubrey Li <aubrey.intel@...il.com>,
Julien Desfossez <jdesfossez@...italocean.com>,
Vineeth Remanan Pillai <vpillai@...italocean.com>,
Nishanth Aravamudan <naravamudan@...italocean.com>,
Peter Zijlstra <peterz@...radead.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Paul Turner <pjt@...gle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
Subhra Mazumdar <subhra.mazumdar@...cle.com>,
Frédéric Weisbecker <fweisbec@...il.com>,
Kees Cook <keescook@...omium.org>,
Greg Kerr <kerrnel@...gle.com>, Phil Auld <pauld@...hat.com>,
Aaron Lu <aaron.lwe@...il.com>,
Valentin Schneider <valentin.schneider@....com>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Jiri Kosina <jkosina@...e.cz>
Subject: Re: [RFC PATCH v2 00/17] Core scheduling v2

* Mel Gorman <mgorman@...hsingularity.net> wrote:

> > > Same -- performance is better until the machine gets saturated and
> > > disabling HT hits scaling limits earlier.
> >
> > Interesting. This strongly suggests sub-optimal SMT-scheduling in the
> > non-saturated HT case, i.e. a scheduler balancing bug.
> >
>
> Yeah, it does, but mpstat didn't appear to indicate that SMT siblings are
> being used prematurely, so it's a bit of a curiosity.
>
> > As long as loads are clearly below the physical core count (which they
> > are in the early phases of your table) the scheduler should spread tasks
> > without overlapping two tasks on the same core.
> >
>
> It should, but it's not perfect. For example, wake_affine_idle does not
> take sibling activity into account even though select_idle_sibling *may*
> take it into account. Even select_idle_sibling in its fast path may use
> an SMT sibling instead of searching.
>
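(To make the sibling-awareness point concrete: below is a minimal user-space
sketch, not the actual kernel code paths - the two-way SMT topology, the
helper names and the CPU numbering are invented purely for illustration. The
idea is that a sibling-aware fast path would only take the target CPU if its
whole core is idle, and would otherwise prefer a fully idle core over a lone
idle sibling.)

/* Hypothetical user-space model (not kernel code) of sibling-aware
 * selection: only take the fast-path CPU if its SMT sibling is idle
 * too, otherwise scan for a fully idle core before falling back to
 * any idle CPU. Topology and CPU numbering are made up. */
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8          /* 4 physical cores, 2 SMT siblings each */
static bool cpu_busy[NR_CPUS];

/* Toy topology: CPUs i and i^1 are siblings of the same core. */
static int smt_sibling(int cpu) { return cpu ^ 1; }

static bool core_fully_idle(int cpu)
{
	return !cpu_busy[cpu] && !cpu_busy[smt_sibling(cpu)];
}

static int select_cpu(int target)
{
	/* Fast path: accept the target only if its whole core is idle. */
	if (core_fully_idle(target))
		return target;

	/* Prefer any fully idle physical core... */
	for (int cpu = 0; cpu < NR_CPUS; cpu += 2)
		if (core_fully_idle(cpu))
			return cpu;

	/* ...and only then settle for a lone idle sibling. */
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (!cpu_busy[cpu])
			return cpu;

	return target;          /* everything busy: stay put */
}

int main(void)
{
	cpu_busy[0] = true;     /* core holding CPUs 0/1 is half busy */
	/* Picks CPU 2 (a fully idle core) rather than CPU 1. */
	printf("picked CPU %d\n", select_cpu(1));
	return 0;
}

(The extra scan is of course exactly the cost being traded against waking up
quickly, which is the "expensive search" point further down.)
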
> There are also potential side-effects with cpuidle. Some workloads
> migrate around the socket as they communicate because of how the
> search for an idle CPU works. With SMT on, there is potentially a longer
> opportunity for a core to reach a deep c-state and incur a bigger wakeup
> latency. This is a very weak theory, but I've seen cases where
> latency-sensitive workloads with only two communicating tasks are
> affected by CPUs reaching deep c-states due to migrations.
>
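(One way to check that theory is to compare per-state cpuidle entry counts
before and after a run. A small stand-alone sketch follows; it assumes the
standard /sys/devices/system/cpu/cpuN/cpuidle/stateN/{name,usage} sysfs
layout, only looks at CPU 0, and the paths simply won't exist on a system
without cpuidle.)

/* Dump per-state cpuidle entry counts for CPU 0 so deep c-state usage
 * during a latency-sensitive run can be compared before/after.
 * Assumes the standard Linux cpuidle sysfs layout. */
#include <stdio.h>

int main(void)
{
	for (int state = 0; ; state++) {
		char path[128], name[64];
		unsigned long long usage;
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu0/cpuidle/state%d/name", state);
		f = fopen(path, "r");
		if (!f)
			break;                   /* no more states */
		if (fscanf(f, "%63s", name) != 1)
			name[0] = '\0';
		fclose(f);

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu0/cpuidle/state%d/usage", state);
		f = fopen(path, "r");
		if (!f)
			break;
		if (fscanf(f, "%llu", &usage) != 1)
			usage = 0;
		fclose(f);

		printf("state%d (%s): entered %llu times\n", state, name, usage);
	}
	return 0;
}
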
> > Clearly it doesn't.
> >
>
> It's more that it's a best effort to wake up quickly instead of being
> perfect by using an expensive search every time.

Yeah, but your numbers suggest that for *most* not-heavily-interacting,
under-utilized, CPU-bound workloads we hurt in the 5-10% range compared
to no-SMT - more in some cases.

So we avoid a maybe 0.1% scheduler placement overhead but inflict 5-10%
harm on the workload, and also blow up stddev by randomly co-scheduling
two tasks on the same physical core? Not a good trade-off.

I really think we should implement a relatively strict physical core
placement policy in the under-utilized case, and resist any attempts to
weaken this for special workloads that ping-pong quickly and benefit from
sharing the same physical core.

I.e. as long as load is kept below ~50%, the SMT and !SMT benchmark
results and stddev numbers should match up. (With a bit of leeway if the
workload gets near to 50% or occasionally goes above it.)
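
(Purely as an illustration of that gate - the function name, the leeway
percentage and the example numbers below are made up, this is not a proposed
implementation:)

/* Hypothetical policy gate: keep placement strictly one task per
 * physical core while the runnable task count stays at or below the
 * physical core count, with a small assumed leeway once the workload
 * gets close to that limit. */
#include <stdbool.h>
#include <stdio.h>

#define LEEWAY_PCT 10   /* assumed slack over the physical core count */

static bool keep_strict_core_placement(unsigned int nr_running,
				       unsigned int nr_physical_cores)
{
	unsigned int limit = nr_physical_cores +
			     nr_physical_cores * LEEWAY_PCT / 100;

	return nr_running <= limit;
}

int main(void)
{
	/* 40 runnable tasks on a 64-core/128-thread machine (~31% of the
	 * logical CPUs): SMT siblings should stay unused. */
	printf("40 tasks, 64 cores -> %s\n",
	       keep_strict_core_placement(40, 64) ? "strict" : "relaxed");
	printf("100 tasks, 64 cores -> %s\n",
	       keep_strict_core_placement(100, 64) ? "strict" : "relaxed");
	return 0;
}
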

There's absolutely no excuse for these numbers at 30-40% load levels, I
think.

Thanks,
Ingo