Message-ID: <20200907123106.GA28232@linux.vnet.ibm.com>
Date: Mon, 7 Sep 2020 18:01:06 +0530
From: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
To: "Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>
Cc: Mel Gorman <mgorman@...e.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"peterz@...radead.org" <peterz@...radead.org>,
"juri.lelli@...hat.com" <juri.lelli@...hat.com>,
"vincent.guittot@...aro.org" <vincent.guittot@...aro.org>,
"dietmar.eggemann@....com" <dietmar.eggemann@....com>,
"bsegall@...gle.com" <bsegall@...gle.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Linuxarm <linuxarm@...wei.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Valentin Schneider <valentin.schneider@....com>,
Phil Auld <pauld@...hat.com>, Hillf Danton <hdanton@...a.com>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH] sched/fair: use dst group while checking imbalance for
NUMA balancer
> >
> > On Mon, Sep 07, 2020 at 07:27:08PM +1200, Barry Song wrote:
> > > Something is wrong. In find_busiest_group(), we are checking whether
> > > src has the higher load, but in task_numa_find_cpu() we are checking
> > > whether dst will have the higher load after balancing. It seems it is
> > > not sensible to check src.
> > > It may produce a wrong imbalance value: for example, if
> > > dst_running = env->dst_stats.nr_running + 1 results in 3 or above, and
> > > src_running = env->src_stats.nr_running - 1 results in 1, the current
> > > code computes the imbalance as 0 since src_running is smaller than 2.
> > > This is inconsistent with the load balancer.
> > >
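To make the inconsistency concrete, below is a minimal userspace model of
the check being discussed. It reflects my reading of the current
task_numa_find_cpu()/adjust_numa_imbalance() logic in kernel/sched/fair.c,
simplified (imbalance_min folded in as a constant); it is a sketch, not the
kernel code itself:

#include <stdio.h>

/* Allow a small imbalance when the given domain is nearly idle. */
static int adjust_numa_imbalance(int imbalance, int nr_running)
{
	const int imbalance_min = 2;

	if (nr_running <= imbalance_min)
		return 0;
	return imbalance;
}

int main(void)
{
	/* Both nodes run two tasks; consider moving one from src to dst. */
	int src_running = 2 - 1;	/* src after the move: 1 */
	int dst_running = 2 + 1;	/* dst after the move: 3 */
	int imbalance = dst_running - src_running;	/* 2 */

	if (imbalance < 0)
		imbalance = 0;

	/*
	 * Mainline passes src_running: 1 <= imbalance_min, so the
	 * imbalance is ignored and the NUMA move is allowed, even though
	 * the load balancer will then see 3 vs 1 and pull a task back.
	 */
	printf("check src: imbalance = %d\n",
	       adjust_numa_imbalance(imbalance, src_running));

	/*
	 * Barry's patch passes dst_running instead: 3 > imbalance_min,
	 * so the imbalance (2) is honoured and the move is rejected.
	 */
	printf("check dst: imbalance = %d\n",
	       adjust_numa_imbalance(imbalance, dst_running));

	return 0;
}
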
I have observed behaviour similar to what Barry Song has documented, with a
simple ebizzy run using fewer threads on a 2-node system:

ebizzy -t 6 -S 100

We see a couple of ebizzy threads moving back and forth between the 2 nodes
because the NUMA balancer and the load balancer are trying to do the exact
opposite.

However, with Barry's patch a couple of tests regress heavily (any NUMA
workload that has shared NUMA faults). For example:

perf bench numa mem --no-data_rand_walk -p 1 -t 6 -G 0 -P 3072 -T 0 -l 50 -c

I also don't understand the rationale behind checking dst_running in the
NUMA balancer path; as the sketch above shows, the move is then rejected as
soon as the destination node already runs two tasks, which almost means no
NUMA balancing in lightly loaded scenarios. So I agree with Mel that we
should probably test more scenarios before we accept this patch.
> >
> > It checks the conditions if the move was to happen. Have you evaluated
> > this for a NUMA balancing load and confirmed it a) balances properly and
> > b) does not increase the scan rate trying to "fix" the problem?
>
> I think the original code was trying to check whether the NUMA migration
> would lead to a new imbalance in the load balancer. Suppose src is A, dst
> is B, and both of them have nr_running of 2. A moves one task to B, so A
> will have 1 and B will have 3. The load balancer will then try to pull a
> task from B since B's nr_running is larger than imbalance_min, but the
> code reports imbalance=0 because A's nr_running is smaller than
> imbalance_min.
>
> Will share more test data if you need.
>
> >
> > --
> > Mel Gorman
> > SUSE Labs
>
> Thanks
> Barry
--
Thanks and Regards
Srikar Dronamraju