Open Source and information security mailing list archives
 
Date:   Thu, 11 May 2023 12:14:46 +0530
From:   Raghavendra K T <raghavendra.kt@....com>
To:     kernel test robot <oliver.sang@...el.com>
Cc:     oe-lkp@...ts.linux.dev, lkp@...el.com,
        linux-kernel@...r.kernel.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        David Hildenbrand <david@...hat.com>,
        Disha Talreja <dishaa.talreja@....com>,
        Ingo Molnar <mingo@...hat.com>,
        Mike Rapoport <rppt@...nel.org>, linux-mm@...ck.org,
        ying.huang@...el.com, feng.tang@...el.com, fengwei.yin@...el.com,
        yu.c.chen@...el.com
Subject: Re: [linus:master] [sched/numa] fc137c0dda:
 autonuma-benchmark.numa01.seconds 118.9% regression

On 5/10/2023 1:25 PM, kernel test robot wrote:
> 
> 
> Hello,
> 
> kernel test robot noticed a 118.9% regression of autonuma-benchmark.numa01.seconds on:
> 
> 
> commit: fc137c0ddab29b591db6a091dc6d7ce20ccb73f2 ("sched/numa: enhance vma scanning logic")
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
> 
> testcase: autonuma-benchmark
> test machine: 88 threads 2 sockets Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz (Cascade Lake) with 128G memory
> parameters:
> 
> 	iterations: 4x
> 	test: numa02_SMT
> 	cpufreq_governor: performance
> 
> 
> In addition to that, the commit also has significant impact on the following tests:
> 
> +------------------+------------------------------------------------------------------------------------------------+
> | testcase: change | autonuma-benchmark: autonuma-benchmark.numa01.seconds 39.3% regression                         |
> | test machine     | 224 threads 2 sockets (Sapphire Rapids) with 256G memory                                       |
> | test parameters  | cpufreq_governor=performance                                                                   |
> |                  | iterations=4x                                                                                  |
> |                  | test=numa02_SMT                                                                                |
> +------------------+------------------------------------------------------------------------------------------------+
> | testcase: change | autonuma-benchmark: autonuma-benchmark.numa01.seconds 48.9% regression                         |
> | test machine     | 88 threads 2 sockets Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz (Cascade Lake) with 128G memory |
> | test parameters  | cpufreq_governor=performance                                                                   |
> |                  | debug-setup=no-monitor                                                                         |
> |                  | iterations=4x                                                                                  |
> |                  | test=numa02_SMT                                                                                |
> +------------------+------------------------------------------------------------------------------------------------+
> 
[...]

Hello,

Thanks for the detailed analysis. I have posted an RFC patch to address
this issue [1] (FYI, that patch needs windows initialized to 0 if it is
to be applied). I will be posting an RFC V2 soon, and will add your
Reported-by to that patchset. One thing to note is that [1] will bring
back *some* of the system overhead of VMA scanning.

Here are some observations/clarifications on the numa01 test:

- The numa01 benchmark improvements I got with the numa-scan improvement
patchset [2] were based on mmtests' numa01; let's call that one
mmtest_numa01. (Somehow this one is not run in LKP?)

- lkp_numa01 = mmtests' numa01_THREAD_ALLOC case, as mentioned in the
patch [1].

With the numa scan enhancement patches there is a huge improvement in
the system-time overhead of VMA scanning, since we filter out scanning
by tasks that have not accessed the VMA. This is what benefited
mmtest_numa01.

However, in the case of lkp_numa01 we are observing that fewer PTE
updates happen because of that filtering (you could call it a corner
case of disjoint VMA sets). This is what caused the regression you
reported.

Backup (workload details):
--------------------------
lkp_numa01:
  3GB of allocated memory is distributed evenly across the threads
  (a 24MB chunk per thread); each thread then bzero()s its own 24MB
  chunk 1000 times, so the per-thread working sets are disjoint.
mmtest_numa01:
  the entire 3GB is bzero()ed by all the threads, 50 times, so the
  working set is fully shared.

[1]. 
https://lore.kernel.org/lkml/cover.1683033105.git.raghavendra.kt@amd.com/

[2] 
https://lore.kernel.org/lkml/cover.1677672277.git.raghavendra.kt@amd.com/T/#t

Thanks and Regards
- Raghu
