Message-Id: <20191220130056.GA13192@linux.vnet.ibm.com>
Date: Fri, 20 Dec 2019 18:30:56 +0530
From: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Mel Gorman <mgorman@...hsingularity.net>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>, pauld@...hat.com,
valentin.schneider@....com, quentin.perret@....com,
dietmar.eggemann@....com, Morten.Rasmussen@....com,
hdanton@...a.com, parth@...ux.ibm.com, riel@...riel.com,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched, fair: Allow a small degree of load imbalance
between SD_NUMA domains
* Vincent Guittot <vincent.guittot@...aro.org> [2019-12-19 15:45:39]:
> Hi Mel,
>
> Thanks for looking at this NUMA locality vs spreading tasks point.
>
>
> Shouldn't you consider the number of busiest->idle_cpus instead of the busiest->sum_nr_running ?
> and you could simplify by
>
>
> 	if ((env->sd->flags & SD_NUMA) &&
> 	    ((100 * busiest->group_weight) <= (env->sd->imbalance_pct * (busiest->idle_cpus << 1)))) {
> 		env->imbalance = 0;
> 		return;
> 	}
Are idle_cpus and sum_nr_running good enough metrics at the NUMA level?
We could have an asymmetric NUMA topology where one DIE/MC/group has more
cores than the other. In such a case, looking at the idle_cpus (or
sum_nr_running) of the group may not always lead us to the right load
balancing decision.
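
To make the concern concrete, here is a small stand-alone sketch (not from
the patch; the group widths and the imbalance_pct value are made-up numbers)
showing how the proposed check translates a raw idle_cpus count differently
depending on group_weight:

#include <stdio.h>

/*
 * Hypothetical numbers, only to illustrate the asymmetry concern:
 * the proposed check 100 * group_weight <= imbalance_pct * (idle_cpus << 1)
 * needs a different number of idle CPUs to fire depending on how wide
 * the group is.
 */
static int check_fires(unsigned int group_weight, unsigned int idle_cpus,
		       unsigned int imbalance_pct)
{
	return (100 * group_weight) <= (imbalance_pct * (idle_cpus << 1));
}

int main(void)
{
	unsigned int imbalance_pct = 125;	/* assumed value */
	unsigned int idle;

	/* Group A: 16 CPUs, Group B: 24 CPUs (asymmetric topology). */
	for (idle = 0; idle <= 12; idle++)
		printf("idle=%2u  16-wide: %d  24-wide: %d\n", idle,
		       check_fires(16, idle, imbalance_pct),
		       check_fires(24, idle, imbalance_pct));

	return 0;
}

With these made-up numbers the 16-CPU group satisfies the condition from
7 idle CPUs onwards while the 24-CPU group needs 10, so the same idle_cpus
value describes rather different situations in the two groups.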
--
Thanks and Regards
Srikar Dronamraju