Message-ID: <20200117143212.GB6339@pauld.bos.csb>
Date:   Fri, 17 Jan 2020 09:32:12 -0500
From:   Phil Auld <pauld@...hat.com>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Vincent Guittot <vincent.guittot@...aro.org>,
        Ingo Molnar <mingo@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Valentin Schneider <valentin.schneider@....com>,
        Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
        Quentin Perret <quentin.perret@....com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Morten Rasmussen <Morten.Rasmussen@....com>,
        Hillf Danton <hdanton@...a.com>,
        Parth Shah <parth@...ux.ibm.com>,
        Rik van Riel <riel@...riel.com>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched, fair: Allow a small load imbalance between low
 utilisation SD_NUMA domains v4

On Fri, Jan 17, 2020 at 02:15:03PM +0000 Mel Gorman wrote:
> On Fri, Jan 17, 2020 at 02:08:13PM +0100, Vincent Guittot wrote:
> > > > This patch allows a fixed degree of imbalance of two tasks to exist
> > > > between NUMA domains regardless of utilisation levels. In many cases,
> > > > this prevents communicating tasks from being pulled apart. It was
> > > > evaluated whether the imbalance should be scaled to the domain size,
> > > > but no additional benefit was measured across a range of workloads
> > > > and machines, and scaling adds the risk that lower domains would have
> > > > to be rebalanced. While this could change again in the future, such a
> > > > change should specify the use case and benefit.
> > > >
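
For anyone skimming the thread, the core of the change is roughly the
following (paraphrased from the posted patch, not the exact hunk; the
helper and field names are the usual ones from kernel/sched/fair.c):

	/*
	 * At the end of calculate_imbalance(): when balancing across a
	 * NUMA domain and the busiest group runs no more than a pair of
	 * tasks, tolerate the imbalance rather than pulling them apart.
	 */
	if (env->sd->flags & SD_NUMA) {
		/*
		 * Fixed allowance of two tasks, deliberately not scaled
		 * to the domain size (see the changelog above).
		 */
		if (busiest->sum_nr_running <= 2)
			env->imbalance = 0;
	}

This also shows why runs with more than a couple of tasks per node
behave exactly as before: the condition never fires and the normal
load balancing path is taken.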
> > >
> > > Any thoughts on whether this is ok for tip or are there suggestions on
> > > an alternative approach?
> > 
> > I have just finished running some tests on my system with your patch
> > and I haven't seen any noticeable changes so far, which was a bit
> > expected. The tests that I usually run use more than 4 tasks on my
> > 2-node system;
> 
> This is indeed expected. With more active tasks, normal load balancing
> applies.
> 
> > the only exception is perf sched pipe, and the results
> > for this test stay the same with and without your patch.
> 
> I never saw much difference with perf sched pipe either. It was
> generally within the noise.
> 
> > I'm curious
> > whether this impacts Phil's tests, which run the LU.c benchmark with
> > some CPU-burning tasks
> 
> I didn't see any problem with LU.c whether parallelised by Open MPI or
> OpenMP, but an independent check would be nice.
> 

My particular case is not straight-up LU.c. It's the group imbalance
setup, which was totally broken before Vincent's work. The test setup
is designed to show how the load balancer used to fail when using
group-scaled "load" at larger (NUMA) domain levels. It's very
susceptible to imbalances, so I wanted to make sure your patch allowing
imbalances didn't re-break it.
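
For context, the setup is roughly: the LU.c threads in one cpu cgroup
and a handful of plain CPU burners in another, so the burners' "load"
gets group-scaled when viewed from the NUMA domain. A minimal burner
looks like the following (illustrative only; the real harness, cgroup
layout, and task counts differ):

	/*
	 * spin.c: occupy a CPU forever. The test moves each burner into
	 * its own cpu cgroup before starting the LU.c benchmark.
	 */
	int main(void)
	{
		volatile unsigned long v = 0;

		for (;;)
			v++;	/* pure CPU burn, no syscalls */

		return 0;
	}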


Cheers,
Phil


> -- 
> Mel Gorman
> SUSE Labs
> 

-- 
