Message-ID: <87ftmo47z4.fsf@yhuang-dev.intel.com>
Date:   Tue, 30 Jul 2019 09:38:39 +0800
From:   "Huang\, Ying" <ying.huang@...el.com>
To:     Mel Gorman <mgorman@...e.de>
Cc:     Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>, <linux-mm@...ck.org>,
        <linux-kernel@...r.kernel.org>, Rik van Riel <riel@...hat.com>,
        <jhladky@...hat.com>, <lvenanci@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH RESEND] autonuma: Fix scan period updating

Mel Gorman <mgorman@...e.de> writes:

> On Mon, Jul 29, 2019 at 04:16:28PM +0800, Huang, Ying wrote:
>> Srikar Dronamraju <srikar@...ux.vnet.ibm.com> writes:
>> 
>> >> >> 
>> >> >> if (lr_ratio >= NUMA_PERIOD_THRESHOLD)
>> >> >>     slow down scanning
>> >> >> else if (sp_ratio >= NUMA_PERIOD_THRESHOLD) {
>> >> >>     if (NUMA_PERIOD_SLOTS - lr_ratio >= NUMA_PERIOD_THRESHOLD)
>> >> >>         speed up scanning
>> >> 
>> >> Thought about this again.  For example, consider a multi-threaded
>> >> workload running on a 4-socket machine where most memory accesses are
>> >> shared.  The optimal situation is pseudo-interleaving, that is,
>> >> spreading memory accesses evenly among the 4 NUMA nodes.  In that
>> >> case "shared" >> "private" and "remote" > "local", and we should slow
>> >> down scanning to reduce the overhead.
>> >> 
>> >> What do you think about this?
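For concreteness, the rule quoted above could be written out as the
following C sketch.  The constants match those in kernel/sched/fair.c
(NUMA_PERIOD_SLOTS = 10, NUMA_PERIOD_THRESHOLD = 7), but the helper
itself is illustrative and simplified from update_task_scan_period(),
not actual kernel code:

	#define NUMA_PERIOD_SLOTS	10
	#define NUMA_PERIOD_THRESHOLD	7

	/*
	 * Illustrative helper, not the kernel function: decide how to
	 * adjust the NUMA scan period from one scan's fault statistics.
	 * Returns +1 to slow scanning down, -1 to speed it up, and 0 to
	 * leave it unchanged.
	 */
	static int scan_period_direction(unsigned long local, unsigned long remote,
					 unsigned long private, unsigned long shared)
	{
		int lr_ratio, sp_ratio;

		if (local + remote == 0 || private + shared == 0)
			return 0;

		lr_ratio = local * NUMA_PERIOD_SLOTS / (local + remote);
		sp_ratio = shared * NUMA_PERIOD_SLOTS / (shared + private);

		if (lr_ratio >= NUMA_PERIOD_THRESHOLD)
			return 1;	/* mostly local: slow down */
		if (sp_ratio >= NUMA_PERIOD_THRESHOLD &&
		    NUMA_PERIOD_SLOTS - lr_ratio >= NUMA_PERIOD_THRESHOLD)
			return -1;	/* mostly shared and remote: speed up */
		return 0;
	}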
>> >
>> > If all 4 nodes have equal access, then all 4 nodes will be active nodes.
>> >
>> > From task_numa_fault()
>> >
>> > 	if (!priv && !local && ng && ng->active_nodes > 1 &&
>> > 				numa_is_active_node(cpu_node, ng) &&
>> > 				numa_is_active_node(mem_node, ng))
>> > 		local = 1;
>> >
>> > Hence all accesses will be accounted as local, and scanning would slow
>> > down.
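Annotated, with a paraphrase of the comment that precedes this test in
kernel/sched/fair.c:

	/*
	 * If a workload spans multiple NUMA nodes, a shared fault that
	 * occurs wholly within the set of actively used nodes is counted
	 * as local, which lets the scan rate slow down once the workload
	 * has settled into pseudo-interleaving.  (Paraphrased from the
	 * kernel comment above this test.)
	 */
	if (!priv && !local && ng && ng->active_nodes > 1 &&
				numa_is_active_node(cpu_node, ng) &&
				numa_is_active_node(mem_node, ng))
		local = 1;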
>> 
>> Yes.  You are right!  Thanks a lot!
>> 
>> There may be another case.  For example, a workload with 9 threads runs
>> on a 2-socket machine, and most memory accesses are shared.  Based on
>> CPU load balancing, 7 threads run on node 0 and 2 threads run on node
>> 1.  The 2 threads on node 1 will then have "shared" >> "private" and
>> "remote" >> "local", but speeding up their scanning doesn't help.
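To make that concrete with the scan_period_direction() sketch from
earlier (the fault counts are hypothetical, chosen only so that
"shared" >> "private" and "remote" >> "local" for a thread on node 1):

	/* Hypothetical per-scan fault counts for a thread on node 1: */
	int dir = scan_period_direction(2 /* local */, 8 /* remote */,
					1 /* private */, 9 /* shared */);
	/*
	 * lr_ratio = 2 * 10 / 10 = 2 and sp_ratio = 9 * 10 / 10 = 9, so
	 * dir is -1: the rule would speed up scanning.  But faster
	 * scanning cannot improve locality here; the shared pages must
	 * stay spread across both nodes, so the extra scans are pure
	 * overhead.
	 */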
>> 
>
> Ok, so the results from the patch are mostly neutral. There are some
> small differences in scan rates depending on the workload but it's not
> universal and the headline performance is sometimes worse. I couldn't
> find something that would justify the change on its own.

Thanks a lot for your help!

> I think in the short term -- just fix the comments.

Then we would change the comment to something like,

"Slow down scanning if most memory accesses are private."

which is hard to understand.  Maybe we should just keep the code and
comments as they are until we have a better understanding.

> For the shared access consideration, the scan rate is important but so too
> is the decision on when pseudo interleaving should be used. Both should
> probably be taken into account when making changes in this area. The
> current code may not be optimal but it also has not generated bug reports,
> high CPU usage or obviously bad locality decisions in the field.  Hence,
> for this patch or a similar series, it is critical that some workloads are
> selected that really care about the locality of shared access and evaluate
> based on that. Initially it was done with a large battery of tests run
> by different people but some of those people have changed role since and
> would not be in a position to rerun the tests. There also was the issue
> that when those were done, NUMA balancing was new, so its comparative
> baseline was "do nothing at all".

Yes.  I totally agree that we should change the behavior based on
testing.

Best Regards,
Huang, Ying
