Message-ID: <20180910094147.GH1719@techsingularity.net>
Date: Mon, 10 Sep 2018 10:41:47 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
Ingo Molnar <mingo@...nel.org>,
Rik van Riel <riel@...riel.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 4/4] sched/numa: Do not move imbalanced load purely on
the basis of an idle CPU
On Fri, Sep 07, 2018 at 01:37:39PM +0100, Mel Gorman wrote:
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index d59d3e00a480..d4c289c11012 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -1560,7 +1560,7 @@ static bool task_numa_compare(struct task_numa_env *env,
> > > goto unlock;
> > >
> > > if (!cur) {
> > > - if (maymove || imp > env->best_imp)
> > > + if (maymove)
> > > goto assign;
> > > else
> > > goto unlock;
> >
> > Srikar's patch here:
> >
> > http://lkml.kernel.org/r/1533276841-16341-4-git-send-email-srikar@linux.vnet.ibm.com
> >
> > Also frobs this condition, but in a less radical way. Does that yield
> > similar results?
>
> I can check. I do wonder of course if the less radical approach just means
> that automatic NUMA balancing and the load balancer simply disagree about
> placement at a different time. It'll take a few days to have an answer as
> the battery of workloads to check this takes ages.
>
Tests completed over the weekend and I've found that the performance of
both patches is very similar on two machines (both 2 socket) running a
variety of workloads. Hence, I'm not worried about which patch gets picked
up. However, I would prefer my own on the grounds that the additional
complexity does not appear to get us anything. Of course, that changes if
Srikar's tests on his larger ppc64 machines show the more complex approach
is justified.
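
For anyone reading along, the condition change boils down to the
following. This is a minimal standalone sketch, not the kernel code;
the helper names and the harness are invented for illustration, and
only the !cur (idle destination CPU) case of task_numa_compare() is
modelled:

/*
 * Illustrative sketch only: simplified helpers showing how the two
 * conditions differ when the destination CPU is idle (cur == NULL in
 * task_numa_compare()). Helper names and the harness are made up.
 */
#include <stdbool.h>
#include <stdio.h>

/* Before: an idle CPU could be taken purely because the NUMA
 * improvement (imp) beat the best candidate so far, even if the move
 * created a load imbalance. */
static bool assign_before(bool maymove, long imp, long best_imp)
{
	return maymove || imp > best_imp;
}

/* After: an idle CPU is only taken when the move does not create a
 * load imbalance (maymove), so NUMA balancing and the load balancer
 * stop disagreeing over the same placement. */
static bool assign_after(bool maymove, long imp, long best_imp)
{
	(void)best_imp;
	return maymove;
}

int main(void)
{
	/* The case that changes: imbalanced move but a better NUMA score. */
	bool maymove = false;
	long imp = 10, best_imp = 5;

	printf("before: %d after: %d\n",
	       assign_before(maymove, imp, best_imp),
	       assign_after(maymove, imp, best_imp));
	return 0;
}
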
--
Mel Gorman
SUSE Labs