Message-ID: <20191031184236.GE5738@pauld.bos.csb>
Date: Thu, 31 Oct 2019 14:42:37 -0400
From: Phil Auld <pauld@...hat.com>
To: Vineeth Remanan Pillai <vpillai@...italocean.com>
Cc: Nishanth Aravamudan <naravamudan@...italocean.com>,
Julien Desfossez <jdesfossez@...italocean.com>,
Peter Zijlstra <peterz@...radead.org>,
Tim Chen <tim.c.chen@...ux.intel.com>, mingo@...nel.org,
tglx@...utronix.de, pjt@...gle.com, torvalds@...ux-foundation.org,
linux-kernel@...r.kernel.org, Dario Faggioli <dfaggioli@...e.com>,
fweisbec@...il.com, keescook@...omium.org, kerrnel@...gle.com,
Aaron Lu <aaron.lwe@...il.com>,
Aubrey Li <aubrey.intel@...il.com>,
Valentin Schneider <valentin.schneider@....com>,
Mel Gorman <mgorman@...hsingularity.net>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [RFC PATCH v4 00/19] Core scheduling v4
Hi Vineeth,
On Wed, Oct 30, 2019 at 06:33:13PM +0000 Vineeth Remanan Pillai wrote:
> Fourth iteration of the Core-Scheduling feature.
>
> This version aims mostly at addressing the vruntime comparison
> issues seen in v3. The main issue in v3 was starvation of
> interactive tasks when competing with CPU-intensive tasks; that
> issue is now mitigated to a large extent.
>
> We have tested and verified that incompatible processes are not
> selected during scheduling. In terms of performance, the impact
> depends on the workload:
> - for CPU-intensive applications that use all the logical CPUs with
> SMT enabled, core scheduling performs better than nosmt.
> - for mixed workloads with considerable I/O compared to CPU usage,
> nosmt seems to perform better than core scheduling.
>
> v4 is rebased on top of 5.3.5 (dc073f193b70):
> https://github.com/digitalocean/linux-coresched/tree/coresched/v4-v5.3.5
>
> Changes in v4
> -------------
> - Implement a core-wide min_vruntime for vruntime comparison of tasks
> across CPUs in a core.
> - Fix a typo bug in setting the forced_idle CPU.
>
> Changes in v3
> -------------
> - Fixes the issue of a sibling picking up an incompatible task
> - Aaron Lu
> - Vineeth Pillai
> - Julien Desfossez
> - Fixes the issue of starving threads due to forced idle
> - Peter Zijlstra
> - Fixes the refcounting issue when deleting a cgroup with tag
> - Julien Desfossez
> - Fixes a crash during cpu offline/online with coresched enabled
> - Vineeth Pillai
> - Fixes a comparison logic issue in sched_core_find
> - Aaron Lu
>
> Changes in v2
> -------------
> - Fixes for a couple of NULL pointer dereference crashes
> - Subhra Mazumdar
> - Tim Chen
> - Improves priority comparison logic for processes on different CPUs
> - Peter Zijlstra
> - Aaron Lu
> - Fixes a hard lockup in rq locking
> - Vineeth Pillai
> - Julien Desfossez
> - Fixes a performance issue seen on IO heavy workloads
> - Vineeth Pillai
> - Julien Desfossez
> - Fix for 32bit build
> - Aubrey Li
>
> TODO
> ----
> - Decide on the API for exposing the feature to userland
> - Investigate the source of the overhead even when no tasks are tagged:
> https://lkml.org/lkml/2019/10/29/242
> - Investigate the performance scaling issue when we have a high number of
> tagged threads: https://lkml.org/lkml/2019/10/29/248
> - Try to optimize the performance for IO-demanding applications:
> https://lkml.org/lkml/2019/10/29/261
>
> ---
>
> Aaron Lu (3):
> sched/fair: wrapper for cfs_rq->min_vruntime
> sched/fair: core wide vruntime comparison
> sched/fair: Wake up forced idle siblings if needed
>
> Peter Zijlstra (16):
> stop_machine: Fix stop_cpus_in_progress ordering
> sched: Fix kerneldoc comment for ia64_set_curr_task
> sched: Wrap rq::lock access
> sched/{rt,deadline}: Fix set_next_task vs pick_next_task
> sched: Add task_struct pointer to sched_class::set_curr_task
> sched/fair: Export newidle_balance()
> sched: Allow put_prev_task() to drop rq->lock
> sched: Rework pick_next_task() slow-path
> sched: Introduce sched_class::pick_task()
> sched: Core-wide rq->lock
> sched: Basic tracking of matching tasks
> sched: A quick and dirty cgroup tagging interface
> sched: Add core wide task selection and scheduling.
> sched/fair: Add a few assertions
> sched: Trivial forced-newidle balancer
> sched: Debug bits...
>
> include/linux/sched.h | 9 +-
> kernel/Kconfig.preempt | 6 +
> kernel/sched/core.c | 847 +++++++++++++++++++++++++++++++++++++--
> kernel/sched/cpuacct.c | 12 +-
> kernel/sched/deadline.c | 99 +++--
> kernel/sched/debug.c | 4 +-
> kernel/sched/fair.c | 346 +++++++++++-----
> kernel/sched/idle.c | 42 +-
> kernel/sched/pelt.h | 2 +-
> kernel/sched/rt.c | 96 ++---
> kernel/sched/sched.h | 246 +++++++++---
> kernel/sched/stop_task.c | 35 +-
> kernel/sched/topology.c | 4 +-
> kernel/stop_machine.c | 2 +
> 14 files changed, 1399 insertions(+), 351 deletions(-)
>
> --
> 2.17.1
>
Unless I'm mistaken, 7 of the first 8 of these went into sched/core
and are now in Linux (as of v5.4-rc1). It may make sense to rebase on
that and simplify the series.
Cheers,
Phil
--