Message-Id: <20201119083027.31335-1-mgorman@techsingularity.net>
Date: Thu, 19 Nov 2020 08:30:23 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Valentin Schneider <valentin.schneider@....com>,
Juri Lelli <juri.lelli@...hat.com>,
Mel Gorman <mgorman@...hsingularity.net>
Subject: [PATCH v2 0/4] Revisit NUMA imbalance tolerance and fork balancing
Changelog since v1
o Split out patch that moves imbalance calculation
o Strongly connect fork imbalance considerations with adjust_numa_imbalance
When NUMA and CPU balancing were reconciled, there was an attempt to allow
a degree of imbalance, but it caused more problems than it solved. Instead,
imbalance was only allowed when a NUMA domain was almost idle. A lot of the
problems have since been addressed, so it's time for a revisit. There is
also an issue with how forked threads are balanced across nodes. It's
mentioned in this context because patches 3 and 4 should share similar
behaviour in terms of a node's utilisation.
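
For reference, the pre-series policy only tolerates an imbalance while the
domain is almost idle. Roughly, as a simplified, userspace-compilable
sketch rather than the kernel code:

static long adjust_numa_imbalance(long imbalance, int nr_running)
{
        int imbalance_min = 2;  /* a pair of communicating tasks */

        /* Ignore a small imbalance only while the domain is almost idle. */
        if (nr_running <= imbalance_min)
                return 0;

        return imbalance;
}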
Patch 1 is just a cosmetic rename.
Patch 2 moves an imbalance calculation. It is a micro-optimisation and
        avoids conflating what imbalance means for different group
        types.
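
The shape of that change, as a sketch with illustrative names and
structure rather than the actual diff:

enum group_type { GROUP_HAS_SPARE, GROUP_OVERLOADED };

struct sg_stats {
        enum group_type type;
        int idle_cpus;
};

/* Hypothetical stand-in for the imbalance calculation being moved. */
static int allowed_imbalance(void)
{
        return 2;
}

/* Only the has-spare case consults the imbalance, so compute it in
 * that branch rather than unconditionally before the switch.
 */
static int stay_local(const struct sg_stats *local,
                      const struct sg_stats *idlest)
{
        switch (idlest->type) {
        case GROUP_HAS_SPARE:
                /* Moved here from before the switch. */
                return idlest->idle_cpus - local->idle_cpus <=
                       allowed_imbalance();
        default:
                return 0;
        }
}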
Patch 3 allows a "floating" imbalance to exist so communicating tasks can
remain on the same domain until utilisation is higher. It aims
to balance compute availability with memory bandwidth.
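
One way to express such a floating cutoff; the helper name and the
quarter-of-CPUs fraction here are illustrative assumptions:

static int allow_numa_imbalance(int dst_running, int dst_weight)
{
        /* Tolerate imbalance while the node runs fewer tasks than a
         * quarter of its CPUs (an assumed fraction).
         */
        return dst_running < (dst_weight >> 2);
}

static long adjust_numa_imbalance(long imbalance, int dst_running,
                                  int dst_weight)
{
        int imbalance_min = 2;  /* a pair of communicating tasks */

        if (allow_numa_imbalance(dst_running, dst_weight) &&
            imbalance <= imbalance_min)
                return 0;

        return imbalance;
}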
Patch 4 is the interesting one. Currently, fork can allow a NUMA node
        to be completely utilised, as long as there are idle CPUs, until
        the load balancer gets involved. This caused serious problems
        with a real workload that unfortunately I cannot share many
        details about, but there is a proxy reproducer.
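
At fork time the same cutoff can be applied before packing the local
node. A sketch, again with assumed names and an assumed fraction:

static int fork_stays_local(int local_running, int local_weight,
                            int local_idle_cpus)
{
        if (!local_idle_cpus)
                return 0;

        /* An idle CPU alone is no longer enough to stay local; the
         * node must also be below the same utilisation cutoff as in
         * the previous sketch, counting the child being placed.
         */
        return (local_running + 1) < (local_weight >> 2);
}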
--
2.26.2