Message-ID: <4b781c91-2947-a029-3558-f4a49130e5e0@amd.com>
Date:   Fri, 20 May 2022 20:47:47 +0530
From:   K Prateek Nayak <kprateek.nayak@....com>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Valentin Schneider <valentin.schneider@....com>,
        Aubrey Li <aubrey.li@...ux.intel.com>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/4] Mitigate inconsistent NUMA imbalance behaviour

Hello Mel,

Thank you for looking at the results.

On 5/20/2022 3:48 PM, Mel Gorman wrote:
> On Fri, May 20, 2022 at 10:28:02AM +0530, K Prateek Nayak wrote:
>> Hello Mel,
>>
>> We tested the patch series on our systems.
>>
>> tl;dr
>>
>> Results of testing:
>> - Benefits short-running Stream tasks in NPS2 and NPS4 mode.
>> - Benefits seen for tbench in NPS1 mode for 8-128 worker count.
>> - Regression in Hackbench with 16 groups in NPS1 mode. A rerun for
>>   the same data point suggested run-to-run variation on the patched
>>   kernel.
>> - Regression in case of tbench with 32 and 64 workers in NPS2 mode.
>>   The patched kernel, however, seems to report a more stable value
>>   for the 64-worker count compared to tip.
>> - Slight regression in schbench in NPS2 and NPS4 mode for large
>>   worker counts, but we did spot some run-to-run variation with
>>   both tip and the patched kernel.
>>
>> Below are all the detailed numbers for the benchmarks.
>>
> Thanks!
>
> I looked through the results but I do not see anything that is very
> alarming. Some notes.
>
> o Hackbench with 16 groups on NPS1, that would likely be 640 tasks
>   communicating unless other parameters are used. I expect it to be
>   variable and it's a heavily overloaded scenario. Initial placement is
>   not necessarily critical as migrations are likely to be very high.
>   On NPS1, there is going to be random luck given that the latency
>   to individual CPUs and the physical topology is hidden.
I agree. On a rerun, the numbers are quite close, so I don't think it
is a concern currently.
> o NPS2 with 128 workers. That's at the threshold where load is
>   potentially evenly split between the two sockets but not perfectly
>   split due to migrate-on-wakeup being a little unpredictable. Might
>   be worth checking the variability there.

For schbench, following are the stats recorded for 128 workers:

Configuration: NPS2

- tip

Min           : 357.00
Max           : 407.00
Median        : 369.00
AMean         : 376.30
AMean Stddev  : 19.15
AMean CoefVar : 5.09 pct

- NUMA Bal

Min           : 384.00
Max           : 410.00
Median        : 400.50
AMean         : 400.40
AMean Stddev  : 8.36
AMean CoefVar : 2.09 pct


Configuration: NPS4

- tip

Min           : 361.00
Max           : 399.00
Median        : 377.00
AMean         : 377.00
AMean Stddev  : 10.31
AMean CoefVar : 2.73 pct

- NUMA Bal

Min           : 379.00
Max           : 394.00
Median        : 390.50
AMean         : 388.10
AMean Stddev  : 5.55
AMean CoefVar : 1.43 pct

In the above cases, the patched kernel seems to give more
stable results compared to tip. schbench is run 10 times
for each worker count to gather these statistics.
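
For reference, the summary statistics above are a simple reduction
of the individual run results. A minimal sketch of that reduction
(placeholder numbers, not the actual harness):

    #!/usr/bin/env python3
    # Sketch: reduce per-run results to the summary statistics
    # reported above. The values here are placeholders.
    import statistics

    runs = [357.0, 369.0, 376.0, 380.0, 407.0]

    amean = statistics.mean(runs)
    stddev = statistics.stdev(runs)  # sample standard deviation

    print(f"Min           : {min(runs):.2f}")
    print(f"Max           : {max(runs):.2f}")
    print(f"Median        : {statistics.median(runs):.2f}")
    print(f"AMean         : {amean:.2f}")
    print(f"AMean Stddev  : {stddev:.2f}")
    print(f"AMean CoefVar : {stddev / amean * 100:.2f} pct")

AMean CoefVar is the sample standard deviation expressed as a
percentage of the arithmetic mean, so a lower value means less
run-to-run variation.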

> o Same observations for tbench. I looked at my own results for NPS1
>   on Zen3 and what I see is that there is a small blip there but
>   the mpstat heat map indicates that the nodes are being more evenly
>   used than without the patch which is expected.
I agree. The task distribution should have improved with the patch.
Following are the stats recorded for the tbench runs with 32 and 64
workers.

Configuration: NPS2

o 32 workers

- tip

Min           : 10250.10
Max           : 10721.90
Median        : 10651.00
AMean         : 10541.00
AMean Stddev  : 254.41
AMean CoefVar : 2.41 pct

- NUMA Bal

Min           : 8932.03
Max           : 10065.10
Median        : 9894.89
AMean         : 9630.67
AMean Stddev  : 611.00
AMean CoefVar : 6.34 pct

o 64 workers

- tip

Min           : 16197.20
Max           : 17175.90
Median        : 16291.20
AMean         : 16554.77
AMean Stddev  : 539.97
AMean CoefVar : 3.26 pct

- NUMA Bal

Min           : 14386.80
Max           : 16625.50
Median        : 16441.10
AMean         : 15817.80
AMean Stddev  : 1242.71
AMean CoefVar : 7.86 pct

We are observing tip to be more stable in this case.
tbench is run 3 times for each worker count to gather
these statistics.

> o STREAM is interesting in that there are large differences between
>   10 runs and 100 runs. It indicates that without pinning, STREAM
>   can be a bit variable. The problem might be similar to NAS as
>   reported in the leader mail, with the variability due to commit
>   c6f886546cb8 for unknown reasons.
There are some cases where two Stream threads will be co-located
on the same LLC, which results in a performance drop. I suspect the
patch helps in such situations by getting a better balance much earlier.
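
One way to confirm such co-location is to map each Stream thread's
last-run CPU to its LLC (L3) domain via sysfs. A rough sketch, not
something we ran as part of the above results, and assuming index3
is the L3 cache on this topology:

    #!/usr/bin/env python3
    # Sketch: print the last-run CPU and its LLC (L3) sharing mask
    # for every thread of a given PID.
    import glob
    import sys

    def llc_of_cpu(cpu):
        # shared_cpu_list of the L3 cache identifies the LLC domain.
        path = f"/sys/devices/system/cpu/cpu{cpu}/cache/index3/shared_cpu_list"
        with open(path) as f:
            return f.read().strip()

    def last_cpu_of_task(pid, tid):
        # The 39th field of /proc/<pid>/task/<tid>/stat is the CPU the
        # task last ran on; split after the comm field's closing ')'.
        with open(f"/proc/{pid}/task/{tid}/stat") as f:
            fields = f.read().rsplit(")", 1)[1].split()
        return int(fields[36])

    pid = sys.argv[1]
    for task in glob.glob(f"/proc/{pid}/task/*"):
        tid = task.rsplit("/", 1)[1]
        cpu = last_cpu_of_task(pid, tid)
        print(f"tid {tid}: cpu {cpu}, LLC {llc_of_cpu(cpu)}")

Two threads reporting the same shared_cpu_list are on the same LLC.
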
>>>  kernel/sched/fair.c     | 59 ++++++++++++++++++++++++++---------------
>>>  kernel/sched/topology.c | 23 ++++++++++------
>>>  2 files changed, 53 insertions(+), 29 deletions(-)
>>>
>> Please let me know if you would like me to get some additional
>> data on the test system.
> Other than checking variability, the min, max and range, I don't need
> additional data. I suspect in some cases like what I observed with NAS
> that there is wide variability for reasons independent of this series.
I've inlined the data above.
> I'm of the opinion though that your results are not a barrier for
> merging. Do you agree?
The results overall look good and shouldn't be a barrier for merging.

Tested-by: K Prateek Nayak <kprateek.nayak@....com>

--
Thanks and Regards,
Prateek
