Message-ID: <20200218105355.GN3466@techsingularity.net>
Date: Tue, 18 Feb 2020 10:53:55 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: Hillf Danton <hdanton@...a.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Valentin Schneider <valentin.schneider@....com>,
Phil Auld <pauld@...hat.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 06/13] sched/numa: Use similar logic to the load balancer
for moving between domains with spare capacity

On Tue, Feb 18, 2020 at 05:59:15PM +0800, Hillf Danton wrote:
> > Given that adjust_numa_imbalance takes the imbalance as the first
> > parameter, not a boolean and it's not unconditionally true, I don't
> > get what you mean.
>
> My bad.
>
> > Can you propose a patch on top of the entire series
> > explaining what you suggest please?
>
> I just want to avoid splitting the pair of tasks on the src node, as
> described by the comment in adjust_numa_imbalance(), across two nodes
> despite idle CPUs being available on the dst node.
>
Ah ok, so yes, this is something that needs to be done, but it should be
a separate patch after this series is complete. It's very easy to get
wrong and introduce regressions, so I want to get the NUMA balancer and
load balancer reconciled first.
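
For reference, the helper in question looks roughly like this (a sketch
of the version in this series; the exact threshold and placement may
differ in the final posting):

    static inline long adjust_numa_imbalance(int imbalance, int src_nr_running)
    {
            unsigned int imbalance_min;

            /*
             * Allow a small imbalance based on a simple pair of
             * communicating tasks that remain local when the source
             * domain is almost idle.
             */
            imbalance_min = 2;
            if (src_nr_running <= imbalance_min)
                    return 0;

            return imbalance;
    }

Returning 0 makes the caller treat the group as balanced so no task is
pulled, which is what keeps a lightly loaded pair of communicating tasks
on one node rather than spreading them.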
> If there are more than 2 tasks running on the src node, then try to
> migrate a task to the dst node in order to decrease the imbalance.
>
That should be happening already because

    imbalance = max(0, dst_running - src_running);

I didn't take the absolute difference, so excess tasks on the src should
still be able to migrate.
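
As a quick standalone illustration (the running counts are purely
hypothetical):

    #include <stdio.h>

    #define max(a, b) ((a) > (b) ? (a) : (b))

    int main(void)
    {
            /* Hypothetical counts: the src node is the busier one */
            int src_running = 4;
            int dst_running = 2;

            /* max(), not abs(): a busier src never inflates the imbalance */
            int imbalance = max(0, dst_running - src_running);

            printf("imbalance = %d\n", imbalance);  /* prints 0 */
            return 0;
    }

With the src busier, the computed imbalance is 0, so nothing in this
check prevents an excess task moving to an idle CPU on the dst node.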
--
Mel Gorman
SUSE Labs