Message-ID: <20190701090204.GQ3402@hirez.programming.kicks-ass.net>
Date: Mon, 1 Jul 2019 11:02:04 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: subhra mazumdar <subhra.mazumdar@...cle.com>
Cc: linux-kernel@...r.kernel.org, mingo@...hat.com, tglx@...utronix.de,
steven.sistare@...cle.com, dhaval.giani@...cle.com,
daniel.lezcano@...aro.org, vincent.guittot@...aro.org,
viresh.kumar@...aro.org, tim.c.chen@...ux.intel.com,
mgorman@...hsingularity.net, Paul Turner <pjt@...gle.com>,
riel@...riel.com, morten.rasmussen@....com
Subject: Re: [RESEND PATCH v3 0/7] Improve scheduler scalability for fast path
On Wed, Jun 26, 2019 at 06:29:12PM -0700, subhra mazumdar wrote:
> Hi,
>
> Resending this patchset; it would be good to get some feedback. Any
> suggestions that will make it more acceptable are welcome. We have been
> shipping this with Unbreakable Enterprise Kernel in Oracle Linux.
>
> Currently, select_idle_sibling first tries to find a fully idle core
> using select_idle_core, which can potentially search all cores; if that
> fails, it looks for any idle cpu using select_idle_cpu, which can
> potentially search all cpus in the llc domain. This doesn't scale for
> large llc domains and will only get worse with more cores in the future.
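(For reference, the flow described above is roughly the following -- a
simplified standalone sketch, not the actual fair.c code; the real
functions take a task and a sched_domain, and core_is_idle() /
cpu_is_idle() are stand-in helpers:)

/* Stand-ins for the real idle checks. */
static int core_is_idle(int cpu);
static int cpu_is_idle(int cpu);

/*
 * Simplified model of the current fast path: first try to find a
 * fully idle core, then fall back to any idle cpu.  Both steps can
 * end up scanning the whole LLC.
 */
static int select_idle_sibling_model(int target, int llc_size)
{
	int cpu;

	for (cpu = 0; cpu < llc_size; cpu++)	/* ~select_idle_core() */
		if (core_is_idle(cpu))
			return cpu;

	for (cpu = 0; cpu < llc_size; cpu++)	/* ~select_idle_cpu() */
		if (cpu_is_idle(cpu))
			return cpu;

	return target;				/* nothing idle found */
}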
>
> This patchset solves the scalability problem by:
> - Setting upper and lower limits on the idle cpu search in
> select_idle_cpu to keep the search time low and constant
> - Adding a new sched feature, SIS_CORE, to disable select_idle_core
>
> Additionally, it introduces a new per-cpu variable, next_cpu, to track
> the limit of the search so that each search starts where the previous
> one ended. This rotating search window over the cpus in the LLC domain
> ensures that idle cpus are eventually found under high load.
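(Roughly, as another standalone sketch of the mechanism -- the
search-budget formula and all names here are illustrative, not the
patch's actual code:)

static int cpu_is_idle(int cpu);	/* stand-in for the real check */
static int next_cpu;			/* per-cpu in the actual patch */

/*
 * Scan at most 'nr' cpus, clamped between 'lower' and 'upper', and
 * resume the next scan where this one stopped, so the window rotates
 * over the LLC and every cpu is eventually examined under load.
 */
static int bounded_idle_scan(int llc_size, int lower, int upper)
{
	int nr = llc_size / 4;		/* illustrative starting budget */
	int cpu = next_cpu;
	int i;

	if (nr < lower)
		nr = lower;
	if (nr > upper)
		nr = upper;

	for (i = 0; i < nr; i++) {
		cpu = (cpu + 1) % llc_size;
		if (cpu_is_idle(cpu)) {
			next_cpu = cpu;	/* resume here next time */
			return cpu;
		}
	}

	next_cpu = cpu;			/* window advances even on a miss */
	return -1;
}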
Right, so we had a wee conversation about this patch series at OSPM, and
I don't see any of that reflected here :-(
Specifically, given that some people _really_ want the whole L3 mask
scanned to reduce tail latency over raw throughput, while you guys
prefer the other way around, it was proposed to extend the task model.
In particular, something like a latency-nice was mentioned (IIRC), where
a task can express a bias without mandating specific behaviour. This is
very important since we don't want the ABI tied to specific behaviour.
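(To make that concrete -- purely a hypothetical shape, none of these
names exist in the kernel today -- think of a single signed per-task
bias, analogous to nice:)

#define LATENCY_NICE_MIN	-20	/* most latency sensitive */
#define LATENCY_NICE_MAX	19	/* most throughput oriented */

/*
 * Hypothetical per-task hint; 0 means "no preference".  The value
 * only biases heuristics, it never selects a fixed policy, which is
 * what keeps specific behaviour out of the ABI.
 */
struct latency_hint {
	int latency_nice;	/* LATENCY_NICE_MIN..LATENCY_NICE_MAX */
};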
Some of the things we could tie to this would be:
 - select_idle_siblings; -nice would scan more than +nice (a rough
   sketch of such a mapping follows the list),
 - wakeup preemption; when the wakee has a relatively smaller
   latency-nice value than the currently running task, it might preempt
   sooner, and the other way around of course,
 - pack-vs-spread; +nice would pack more with like tasks (since we
   already spread by default [0] I don't think -nice would affect much
   here).
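(For the first item, one illustrative mapping from the bias to a scan
budget; the constants are arbitrary and the exact curve is of course
up for grabs:)

/*
 * More negative latency_nice -> scan a larger slice of the LLC,
 * more positive -> a smaller one.  Maps -20..19 onto roughly
 * 100%..4% of the LLC.
 */
static int scan_budget(int llc_size, int latency_nice)
{
	int pct = 50 - (latency_nice + 20) * 48 / 39;
	int nr = llc_size * pct / 50;

	return nr > 1 ? nr : 1;
}

The wakeup-preemption and pack-vs-spread biases could feed off the same
value in a similar way.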
Hmmm?