Message-Id: <1529514181-9842-21-git-send-email-srikar@linux.vnet.ibm.com>
Date: Wed, 20 Jun 2018 22:33:01 +0530
From: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
To: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Rik van Riel <riel@...riel.com>,
Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: [PATCH v2 00/19] Fixes for sched/numa_balancing
This patchset, based on v4.17, provides a few simple cleanups and fixes in
the sched/numa_balancing code. Some of these fixes are specific to systems
with more than two nodes. A few patches add per-rq and per-node complexity
to solve what I feel are fairness/correctness issues.
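To make the per-rq/per-node guard idea concrete, here is a minimal
userspace sketch (not the kernel code from this series; names such as
claim_numa_migrate_cpu are hypothetical): a per-cpu atomic flag is claimed
with an exchange so that only one task at a time can target a given CPU
for a NUMA move, in the spirit of patches 10, 11 and 13 below.

/*
 * Illustrative userspace sketch only, NOT the kernel implementation.
 * One "migration in flight" flag per destination CPU; the task that
 * flips it from 0 to 1 wins, everyone else backs off.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8

static atomic_int numa_migrate_on[NR_CPUS];	/* 0 = free, 1 = claimed */

/* Try to claim @cpu as a NUMA migration target; only one caller wins. */
static bool claim_numa_migrate_cpu(int cpu)
{
	/* xchg-style claim: a previous value of 0 means we got it. */
	return atomic_exchange(&numa_migrate_on[cpu], 1) == 0;
}

/* Drop the claim once the move (or the failed attempt) is done. */
static void release_numa_migrate_cpu(int cpu)
{
	atomic_store(&numa_migrate_on[cpu], 0);
}

int main(void)
{
	int cpu = 3;

	if (claim_numa_migrate_cpu(cpu)) {
		printf("task A claimed cpu %d for a NUMA move\n", cpu);
		/* A second claimant loses until the first releases. */
		printf("task B claim on cpu %d: %s\n", cpu,
		       claim_numa_migrate_cpu(cpu) ? "won" : "lost");
		release_numa_migrate_cpu(cpu);
	}
	return 0;
}

Roughly the same exchange-instead-of-lock pattern is what the "Use xchg
instead of spinlock" patch applies on a per-node basis.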
This version addresses the review comments given on some of the patches.
It also provides SPECjbb2005 numbers on a per-patch basis on a 4-node and
a 16-node system.
Running SPECjbb2005 on a 4-node machine and comparing bops/JVM
(higher bops are better)
JVMS v4.17 v4.17+patch %CHANGE
16 25705.2 26158.1 1.731
1 74433 72725 -2.34
Running SPECjbb2005 on a 16-node machine and comparing bops/JVM
(higher bops are better)
JVMS v4.17 v4.17+patch %CHANGE
8 96589.6 113992 15.26
1 181830 174947 -3.93
Only patches 2, 4, 13 and 16 have changes; the rest of the patches are
unchanged.
For overall numbers with v1 running perf-bench, please see
https://lwn.net/ml/linux-kernel/1528106428-19992-1-git-send-email-srikar@linux.vnet.ibm.com
Srikar Dronamraju (19):
sched/numa: Remove redundant field.
sched/numa: Evaluate move once per node
sched/numa: Simplify load_too_imbalanced
sched/numa: Set preferred_node based on best_cpu
sched/numa: Use task faults only if numa_group is not yet setup
sched/debug: Reverse the order of printing faults
sched/numa: Skip nodes that are at hoplimit
sched/numa: Remove unused task_capacity from numa_stats
sched/numa: Modify migrate_swap to accept additional params
sched/numa: Stop multiple tasks from moving to the cpu at the same time
sched/numa: Restrict migrating in parallel to the same node.
sched/numa: Remove numa_has_capacity
mm/migrate: Use xchg instead of spinlock
sched/numa: Updation of scan period need not be in lock
sched/numa: Use group_weights to identify if migration degrades locality
sched/numa: Detect if node actively handling migration
sched/numa: Pass destination cpu as a parameter to migrate_task_rq
sched/numa: Reset scan rate whenever task moves across nodes
sched/numa: Move task_placement closer to numa_migrate_preferred
include/linux/mmzone.h | 4 +-
include/linux/sched.h | 1 -
kernel/sched/core.c | 11 +-
kernel/sched/deadline.c | 2 +-
kernel/sched/debug.c | 4 +-
kernel/sched/fair.c | 325 +++++++++++++++++++++++-------------------------
kernel/sched/sched.h | 6 +-
mm/migrate.c | 20 ++-
mm/page_alloc.c | 2 +-
9 files changed, 187 insertions(+), 188 deletions(-)
--
1.8.3.1