Message-ID: <37ec5587dbb4035b883e5a69b56da4cc67f0e5ff.camel@surriel.com>
Date: Wed, 18 Dec 2019 21:58:01 -0500
From: Rik van Riel <riel@...riel.com>
To: Mel Gorman <mgorman@...hsingularity.net>,
Vincent Guittot <vincent.guittot@...aro.org>
Cc: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>, pauld@...hat.com,
valentin.schneider@....com, srikar@...ux.vnet.ibm.com,
quentin.perret@....com, dietmar.eggemann@....com,
Morten.Rasmussen@....com, hdanton@...a.com, parth@...ux.ibm.com,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched, fair: Allow a small degree of load imbalance
between SD_NUMA domains
On Wed, 2019-12-18 at 15:44 +0000, Mel Gorman wrote:
> +		/*
> +		 * Ignore imbalance unless busiest sd is close to 50%
> +		 * utilisation. At that point balancing for memory
> +		 * bandwidth and potentially avoiding unnecessary use
> +		 * of HT siblings is as relevant as memory locality.
> +		 */
> +		imbalance_max = (busiest->group_weight >> 1) - imbalance_adj;
> +		if (env->imbalance <= imbalance_adj &&
> +		    busiest->sum_nr_running < imbalance_max) {
> +			env->imbalance = 0;
> +		}
> +	}
> 	return;
> }
I can see how the 50% point is often great for HT,
but I wonder if that is also the case for SMT4 and
SMT8 systems...
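
For concreteness, here is a rough userspace sketch of where that cutoff
lands for different SMT widths. The 16-core node shape and the
imbalance_adj value of 2 are assumptions of mine, not values taken from
the patch:

	/*
	 * Rough illustration (not from the patch): how many runnable
	 * tasks a node can hold before the proposed check stops
	 * zeroing the imbalance, assuming group_weight counts
	 * hardware threads.
	 */
	#include <stdio.h>

	int main(void)
	{
		int cores = 16;			/* cores per NUMA node, assumed */
		int imbalance_adj = 2;		/* placeholder value, assumed */
		int smt[] = { 2, 4, 8 };	/* threads per core */

		for (int i = 0; i < 3; i++) {
			int group_weight = cores * smt[i];
			int imbalance_max = (group_weight >> 1) - imbalance_adj;

			printf("SMT%d: group_weight=%d, imbalance ignored while "
			       "sum_nr_running < %d (%.2f tasks/core)\n",
			       smt[i], group_weight, imbalance_max,
			       (double)imbalance_max / cores);
		}
		return 0;
	}

With SMT2 the 50% point works out to just under one runnable task per
core, but with SMT4 or SMT8 roughly two to four siblings per core can
already be busy before the imbalance is honoured.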
--
All Rights Reversed.