Message-ID: <20200521110623.GB7167@techsingularity.net>
Date:   Thu, 21 May 2020 12:06:23 +0100
From:   Mel Gorman <mgorman@...hsingularity.net>
To:     Jirka Hladky <jhladky@...hat.com>
Cc:     Hillf Danton <hdanton@...a.com>, Phil Auld <pauld@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>,
        Valentin Schneider <valentin.schneider@....com>,
        LKML <linux-kernel@...r.kernel.org>,
        Douglas Shakshober <dshaks@...hat.com>,
        Waiman Long <longman@...hat.com>,
        Joe Mario <jmario@...hat.com>, Bill Gray <bgray@...hat.com>,
        "aokuliar@...hat.com" <aokuliar@...hat.com>,
        "kkolakow@...hat.com" <kkolakow@...hat.com>
Subject: Re: [PATCH 00/13] Reconcile NUMA balancing decisions with the load
 balancer v6

On Wed, May 20, 2020 at 03:58:01PM +0200, Jirka Hladky wrote:
> Hi Hillf, Mel and all,
> 
> thanks for the patch! It has produced really GOOD results.
> 
> 1) It has fixed the performance problems with the 5.7 vanilla kernel for
> single-tenant workloads and low system load scenarios, without
> performance degradation for multi-tenant tasks. It produces the
> same results as the previous proof-of-concept patch in which the
> adjust_numa_imbalance function was modified to be a no-op (returning
> the same value of imbalance as it receives on input).
> 

I worry that what it's doing is effectively a revert of the patch, but in
a relatively obscure way. We already know that a side-effect of having a
pair of tasks sharing cache is that the wakeup paths can be more expensive
and that the extra wakeup cost can exceed the advantage of using local
memory. However, I don't think the right long-term approach is to keep
tasks artificially apart so that a different wakeup path is used as a
side-effect.

The patch also needs a changelog and better comments so that the
rationale can be followed exactly. Take this part


+               //No imbalance computed without spare capacity
+               if (env->dst_stats.node_type != env->src_stats.node_type)
+                       goto check_imb;

I'm ignoring the coding style of the C++ comments, but minimally that
should be fixed. More importantly, node_type can be one of node_overloaded,
node_has_spare or node_fully_busy, and this is only checking whether there
is a mismatch. It's not taking into account whether the dst node is more
or less loaded than the src, and it does not appear to actually check
spare capacity as the comment suggests.
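
For reference, the three states come from the numa_type enum in
kernel/sched/fair.c. As of 5.7 it reads roughly as follows (quoting from
memory, so treat it as a sketch rather than the exact source):

enum numa_type {
	/* The node has spare capacity that can be used to run more tasks */
	node_has_spare = 0,
	/*
	 * The node is fully used and the tasks don't compete for more
	 * CPU cycles. Nevertheless, some tasks might wait before running.
	 */
	node_fully_busy,
	/*
	 * The node is overloaded and can't provide expected CPU cycles
	 * to all tasks.
	 */
	node_overloaded
};

If the intent really is "no imbalance computed without spare capacity",
the check would have to test node_has_spare explicitly, e.g. something
along the lines of

	if (env->src_stats.node_type != node_has_spare ||
	    env->dst_stats.node_type != node_has_spare)
		goto check_imb;

rather than only testing that the two nodes happen to be in different
states.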

Then there is this part

+               imbalance = adjust_numa_imbalance(imbalance,
+                                               env->src_stats.nr_running);
+
+               //Do nothing without imbalance
+               if (!imbalance) {
+                       imbalance = 2;
+                       goto check_imb;
+               }

So... if there is no imbalance, assume there is an imbalance of 2, jump
to a branch that will always be false and fall through to code that
ignores the value of imbalance. It's hard to see exactly why that code
flow is ideal.
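
For reference, adjust_numa_imbalance() as merged in 5.7 is roughly the
following (again from memory, a sketch rather than the exact source):

static inline long adjust_numa_imbalance(int imbalance, int src_nr_running)
{
	unsigned int imbalance_min;

	/*
	 * Allow a small imbalance based on a simple pair of communicating
	 * tasks that remain local when the source domain is almost idle.
	 */
	imbalance_min = 2;
	if (src_nr_running <= imbalance_min)
		return 0;

	return imbalance;
}

A return value of 0 means "the imbalance is small enough to ignore", so
overwriting it with 2 and then jumping to a branch that cannot be taken
obscures whatever the patch is actually trying to achieve.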

> 2) We have also added Mel's netperf-cstate-small-cross-socket test to
> our test battery:
> https://github.com/gormanm/mmtests/blob/master/configs/config-network-netperf-cstate-small-cross-socket
> 
> Mel told me that he had seen significant performance improvements with
> 5.7 over 5.6 for the netperf-cstate-small-cross-socket scenario.
> 
> Out of the 6 different patches we have tested, your patch has performed
> the best for this scenario. Compared to vanilla, we see only a minimal
> performance degradation: 2.5% for UDP stream throughput and 0.6%
> for TCP throughput.

Which implies that we are once again using remote memory for netperf and
possibly getting some interference from NUMA balancing hinting faults.

> The testing was done on a dual-socket system
> with Gold 6132 CPUs.
> 
> @Mel - could you please test Hillf's patch with your full testing
> suite? So far, it looks very promising, but I would like to check the
> patch thoroughly to make sure it does not hurt performance in other
> areas.
> 

I can't at the moment due to a backlog on my test grid, which isn't
helped by the fact that I lost two days of development time to a thrashed
work machine. That aside, I'm finding it hard to see exactly why the
patch is suitable. What I've seen so far is that there are two
outstanding problems after the rewritten load balancer and the
reconciliation between the load balancer and the NUMA balancer.

The first is that 57abff067a08 ("sched/fair: Rework find_idlest_group()")
has shown up some problems, with LKP reporting one here
https://lore.kernel.org/lkml/20200514141526.GA30976@xsang-OptiPlex-9020/
but I've also seen numerous workloads internally bisect to the same
commit. That patch is meant to ensure that finding the busiest group uses
similar logic to finding the idlest group, but something is hiding in
there.

The second is that waking two tasks sharing cache can be more expensive
than waking two remote tasks, but it's preferable to fix the wakeup logic
than to keep related tasks on separate caches just because that happens
to generate good numbers in some cases.

I feel it makes sense to try to get both of those issues resolved before
making adjust_numa_imbalance a tunable or reverting it. I really do think
communicating tasks should share cache where possible, unless a node is
overloaded or limited by memory bandwidth.

-- 
Mel Gorman
SUSE Labs
