Message-ID: <20200117141503.GQ3466@techsingularity.net>
Date: Fri, 17 Jan 2020 14:15:03 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Phil Auld <pauld@...hat.com>, Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Valentin Schneider <valentin.schneider@....com>,
Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
Quentin Perret <quentin.perret@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <Morten.Rasmussen@....com>,
Hillf Danton <hdanton@...a.com>,
Parth Shah <parth@...ux.ibm.com>,
Rik van Riel <riel@...riel.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched, fair: Allow a small load imbalance between low
utilisation SD_NUMA domains v4

On Fri, Jan 17, 2020 at 02:08:13PM +0100, Vincent Guittot wrote:
> > > This patch allows a fixed degree of imbalance of two tasks to exist
> > > between NUMA domains regardless of utilisation levels. In many cases,
> > > this prevents communicating tasks being pulled apart. It was evaluated
> > > whether the imbalance should be scaled to the domain size. However, no
> > > additional benefit was measured across a range of workloads and machines
> > > and scaling adds the risk that lower domains have to be rebalanced. While
> > > this could change again in the future, such a change should specify the
> > > use case and benefit.
> > >
> >
> > Any thoughts on whether this is ok for tip or are there suggestions on
> > an alternative approach?
>
> I have just finished running some tests on my system with your patch
> and I haven't seen any noticeable changes so far, which was a bit
> expected. The tests that I usually run use more than 4 tasks on my
> 2-node system;

This is indeed expected. With more active tasks, normal load balancing
applies.
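
For anyone skimming the thread, the behaviour under discussion boils
down to a check along the lines of the minimal sketch below. This is an
illustrative stand-in, not the patch itself: the struct, the field names
and adjust_numa_imbalance() here are simplified assumptions rather than
the kernel's actual data structures.

#include <stdio.h>

#define SD_NUMA 0x1 /* simplified stand-in for the scheduler domain flag */

struct lb_env_sketch {
	unsigned int sd_flags;        /* flags of the domain being balanced */
	unsigned int busiest_running; /* runnable tasks on the busier node */
	int imbalance;                /* tasks the balancer wants to move */
};

/* Tolerate an imbalance of up to two tasks across low-utilisation NUMA nodes. */
static void adjust_numa_imbalance(struct lb_env_sketch *env)
{
	const int allowed = 2; /* fixed allowance described in the changelog */

	if (!(env->sd_flags & SD_NUMA))
		return;

	/* Only when the busier node runs very few tasks does this apply. */
	if (env->busiest_running <= (unsigned int)allowed &&
	    env->imbalance <= allowed)
		env->imbalance = 0;
}

int main(void)
{
	/* Two communicating tasks on one node, the other node idle. */
	struct lb_env_sketch env = {
		.sd_flags = SD_NUMA,
		.busiest_running = 2,
		.imbalance = 1,
	};

	adjust_numa_imbalance(&env);
	printf("imbalance after adjustment: %d\n", env.imbalance); /* prints 0 */
	return 0;
}

Once a node runs more than a couple of tasks, a check like this no
longer triggers and the usual imbalance calculation takes over, which
matches what you saw with your >4 task workloads.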

> the only exception is perf sched pipe and the results
> for this test stay the same with and without your patch.

I never saw much difference with perf sched pipe either. It was
generally within the noise.

> I'm curious
> whether this impacts Phil's tests, which run the LU.c benchmark with
> some CPU-burning tasks

I didn't see any problem with LU.c whether parallelised by OpenMPI or
OpenMP, but an independent check would be nice.

--
Mel Gorman
SUSE Labs