Message-Id: <20180907101139.20760-1-mgorman@techsingularity.net>
Date: Fri, 7 Sep 2018 11:11:35 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...nel.org>, Rik van Riel <riel@...riel.com>,
LKML <linux-kernel@...r.kernel.org>,
Mel Gorman <mgorman@...hsingularity.net>
Subject: [PATCH 0/4] Follow-up fixes for v4.19-rc1 NUMA balancing
Srikar had an automatic NUMA balancing series merged during the 4.19 window
and there were some issues I missed during review that this series addresses.
Patches 1-2 simply remove redundant code and calculations whose results are
never used.
Patch 3 makes the observation that we can call select_idle_sibling during
NUMA placement several times for each idle CPU on a socket. The patch
stops the search on the first idle CPU. While there is the risk that
parallel callers will stack on the same idle CPU, the current code has
the same problem.
Patch 4 is the one that needs the most examination. I believe the intent of
load_too_imbalanced was to stop automatic NUMA balancing fighting the load
balancer. Unfortunately, when a machine is lightly utilised there are idle
CPUs and tasks are allowed to migrate even though the load is imbalanced.
In some cases, this limits memory bandwidth and workloads can perform worse
even if data locality is fine. The patch corrects the condition.
If they pass review, I suggest they be considered fixes for 4.19 instead of
being deferred to 4.20.
kernel/sched/fair.c | 35 ++++++++---------------------------
1 file changed, 8 insertions(+), 27 deletions(-)
--
2.16.4