Message-Id: <20211210093307.31701-1-mgorman@techsingularity.net>
Date: Fri, 10 Dec 2021 09:33:05 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Valentin Schneider <valentin.schneider@....com>,
Aubrey Li <aubrey.li@...ux.intel.com>,
Barry Song <song.bao.hua@...ilicon.com>,
Mike Galbraith <efault@....de>,
Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
Gautham Shenoy <gautham.shenoy@....com>,
LKML <linux-kernel@...r.kernel.org>,
Mel Gorman <mgorman@...hsingularity.net>
Subject: [PATCH v4 0/2] Adjust NUMA imbalance for multiple LLCs

Changelog since V3
o Calculate imb_numa_nr for multiple SD_NUMA domains
o Restore behaviour where communicating pairs remain on the same node

Commit 7d2b5dd0bcc4 ("sched/numa: Allow a floating imbalance between NUMA
nodes") allowed an imbalance between NUMA nodes such that communicating
tasks would not be pulled apart by the load balancer. This works fine when
there is a 1:1 relationship between LLC and node but can be suboptimal
for multiple LLCs if independent tasks prematurely use CPUs sharing cache.

The series addresses two problems -- inconsistent use of scheduler domain
weights and suboptimal performance when there are many LLCs per NUMA node.
Mel Gorman (2):
  sched/fair: Use weight of SD_NUMA domain in find_busiest_group
  sched/fair: Adjust the allowed NUMA imbalance when SD_NUMA spans
    multiple LLCs

 include/linux/sched/topology.h |  1 +
 kernel/sched/fair.c            | 36 +++++++++++++++++----------------
 kernel/sched/topology.c        | 37 ++++++++++++++++++++++++++++++++++
 3 files changed, 57 insertions(+), 17 deletions(-)
--
2.31.1