Message-ID: <20200312214736.GA3818@techsingularity.net>
Date:   Thu, 12 Mar 2020 21:47:36 +0000
From:   Mel Gorman <mgorman@...hsingularity.net>
To:     Jirka Hladky <jhladky@...hat.com>
Cc:     Phil Auld <pauld@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>,
        Valentin Schneider <valentin.schneider@....com>,
        Hillf Danton <hdanton@...a.com>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 00/13] Reconcile NUMA balancing decisions with the load
 balancer v6

On Thu, Mar 12, 2020 at 05:54:29PM +0100, Jirka Hladky wrote:
> >
> > I find it unlikely that this is common, because who acquires such a large
> > machine and then uses a tiny percentage of it?
> 
> 
> I generally agree, but I also want to make the point that AMD made these
> large systems much more affordable with their EPYC CPUs. The 8 NUMA node
> server we are using costs under $8k.
> 
> 
> 
> > This is somewhat of a dilemma. Without the series, the load balancer and
> > NUMA balancer use very different criteria on what should happen and
> > results are not stable.
> 
> 
> Unfortunately, I see instabilities with this series as well. This is again
> the sp_C test with 8 threads, executed on a dual-socket AMD 7351 (EPYC
> Naples) server with 8 NUMA nodes. With the series applied, the runtime
> varies from 86 to 165 seconds! Could we do something about it? A runtime of
> 86 seconds would be acceptable. If we could stabilize this case and get a
> consistent runtime of around 80 seconds, the problem would be gone.
> 
> Do you see similar instability of results on your HW for sp_C with low
> thread counts?
> 

I saw something similar but observed that it depended on whether the
worker tasks got spread wide or not, which partially came down to luck.
The question is whether it's possible to pick a point where we spread wide
and can still recover quickly enough when tasks need to remain close,
without knowledge of the future. Putting a balancing limit on tasks that
recently woke would be one option, but that could also cause persistent
improper balancing for tasks that wake frequently.
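To make that trade-off concrete, here is a minimal user-space sketch, not
kernel code and not what the series implements: the per-task last-wake
timestamp and the 10ms cooldown window are assumptions for illustration
only. A task that woke within the window is held back from balancing, and a
task that wakes frequently never ages out of the window, which is exactly
the persistent misplacement risk above.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define BALANCE_COOLDOWN_NS	(10ULL * 1000 * 1000)	/* assumed 10ms window */

struct task {
	const char *name;
	uint64_t last_wake_ns;	/* hypothetical: time of the task's last wakeup */
};

/* A task becomes eligible for migration only once its last wakeup has aged out. */
static bool can_balance(const struct task *t, uint64_t now_ns)
{
	return now_ns - t->last_wake_ns >= BALANCE_COOLDOWN_NS;
}

int main(void)
{
	uint64_t now_ns = 100ULL * 1000 * 1000;	/* pretend "now" is t = 100ms */
	struct task tasks[] = {
		{ "batch worker",    5ULL * 1000 * 1000 },	/* last woke at 5ms */
		{ "frequent waker", 95ULL * 1000 * 1000 },	/* last woke at 95ms */
	};

	for (size_t i = 0; i < sizeof(tasks) / sizeof(tasks[0]); i++)
		printf("%-14s -> %s\n", tasks[i].name,
		       can_balance(&tasks[i], now_ns) ?
		       "eligible for balancing" :
		       "held back (woke recently)");
	return 0;
}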

> Runtime with this series applied:
> $ grep "Time in seconds" *log
> sp.C.x.defaultRun.008threads.loop01.log: Time in seconds =   125.73
> sp.C.x.defaultRun.008threads.loop02.log: Time in seconds =    87.54
> sp.C.x.defaultRun.008threads.loop03.log: Time in seconds =    86.93
> sp.C.x.defaultRun.008threads.loop04.log: Time in seconds =   165.98
> sp.C.x.defaultRun.008threads.loop05.log: Time in seconds =   114.78
> 
> For comparison, here are the vanilla kernel results:
> $ grep "Time in seconds" *log
> sp.C.x.defaultRun.008threads.loop01.log: Time in seconds =    59.83
> sp.C.x.defaultRun.008threads.loop02.log: Time in seconds =    67.72
> sp.C.x.defaultRun.008threads.loop03.log: Time in seconds =    63.62
> sp.C.x.defaultRun.008threads.loop04.log: Time in seconds =    55.01
> sp.C.x.defaultRun.008threads.loop05.log: Time in seconds =    65.20
> 
> 
> 
> > In *general*, I found that the series won a lot more than it lost across
> > a spread of workloads and machines but unfortunately it's also an area
> > where counter-examples can be found.
> 
> 
> OK, fair enough. I understand that there will always be trade-offs when
> making changes to the scheduler like this. And I agree that the cases with
> higher system load (where the series is helpful) outweigh the performance
> drops for low thread counts. I was hoping that it would be possible to
> improve the low-thread-count results while keeping the gains for the other
> scenarios :-)  But let's be realistic - I would be happy to fix the extreme
> case mentioned above. The other issues, where the performance drop is about
> 20%, are OK with me and are outweighed by the gains in different scenarios.
> 

I'll continue thinking about it, but whatever chance there is of improving
it while keeping CPU balancing, NUMA balancing and wake affine consistent
with each other, I think there is no chance with the inconsistent logic
used in the vanilla code :(

-- 
Mel Gorman
SUSE Labs
