Message-ID: <20180912105727.GJ1719@techsingularity.net>
Date:   Wed, 12 Sep 2018 11:57:27 +0100
From:   Mel Gorman <mgorman@...hsingularity.net>
To:     Ingo Molnar <mingo@...nel.org>
Cc:     Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Rik van Riel <riel@...riel.com>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 4/4] sched/numa: Do not move imbalanced load purely on
 the basis of an idle CPU

On Wed, Sep 12, 2018 at 11:57:42AM +0200, Ingo Molnar wrote:
> > * Mel Gorman <mgorman@...hsingularity.net> [2018-09-10 10:41:47]:
> > 
> > > On Fri, Sep 07, 2018 at 01:37:39PM +0100, Mel Gorman wrote:
> > > > > Srikar's patch here:
> > > > > 
> > > > >   http://lkml.kernel.org/r/1533276841-16341-4-git-send-email-srikar@linux.vnet.ibm.com
> > > > > 
> > > > > Also frobs this condition, but in a less radical way. Does that yield
> > > > > similar results?
> > > > 
> > > > I can check. I do wonder of course if the less radical approach just means
> > > > that automatic NUMA balancing and the load balancer simply disagree about
> > > > placement at a different time. It'll take a few days to have an answer as
> > > > the battery of workloads to check this take ages.
> > > > 
> > > 
> > > Tests completed over the weekend and I've found that the performance of
> > > both patches are very similar for two machines (both 2 socket) running a
> > > variety of workloads. Hence, I'm not worried about which patch gets picked
> > > up. However, I would prefer my own on the grounds that the additional
> > > complexity does not appear to get us anything. Of course, that changes if
> > > Srikar's tests on his larger ppc64 machines show the more complex approach
> > > is justified.
> > > 
> > 
> > Running SPECJbb2005. Higher bops are better.
> > 
> > Kernel A = 4.18+ 13 sched patches part of v4.19-rc1.
> > Kernel B = Kernel A + 6 patches (http://lore.kernel.org/lkml/1533276841-16341-1-git-send-email-srikar@linux.vnet.ibm.com)
> > Kernel C = Kernel B - (Avoid task migration for small numa improvement) i.e
> > 	http://lore.kernel.org/lkml/1533276841-16341-4-git-send-email-srikar@linux.vnet.ibm.com
> > 	+ 2 patches from Mel
> > 	(Do not move imbalanced load purely)
> > 	http://lore.kernel.org/lkml/20180907101139.20760-5-mgorman@techsingularity.net
> > 	(Stop comparing tasks for NUMA placement)
> > 	http://lore.kernel.org/lkml/20180907101139.20760-4-mgorman@techsingularity.net
> 
> We absolutely need the 'best' pre-regression baseline kernel measurements as well - was it 
> vanilla v4.17?
> 

That will hit a separate problem -- the scheduler patches that prefer
keeping new children local instead of spreading wide prematurely. The patch
in question favours communicating tasks (e.g. short-lived communicating
processes from shell scripts) but hurts loads that prefer spreading early
(e.g. STREAM). So while a comparison tells us something, it tells us
relatively little about this series in isolation.

The comparison with 4.17 is expected to be resolved by allowing data to
migrate faster when the load balancer spreads the load without wake-affine
disagreeing about placement. Patches for that exist, but they were confirmed
to be working correctly only on top of an old version of Srikar's series
based on 4.18. If we get this series resolved, I can rebase the old
series and, barring any major surprises, that should improve things
overall while mitigating the STREAM regression against 4.17.

-- 
Mel Gorman
SUSE Labs
