Message-ID: <20190805115648.GA19906@splinter>
Date:   Mon, 5 Aug 2019 14:56:48 +0300
From:   Ido Schimmel <idosch@...sch.org>
To:     kernel test robot <rong.a.chen@...el.com>
Cc:     netdev@...r.kernel.org, davem@...emloft.net, nhorman@...driver.com,
        dsahern@...il.com, roopa@...ulusnetworks.com,
        nikolay@...ulusnetworks.com, jakub.kicinski@...ronome.com,
        toke@...hat.com, andy@...yhouse.net, f.fainelli@...il.com,
        andrew@...n.ch, vivien.didelot@...il.com, mlxsw@...lanox.com,
        Ido Schimmel <idosch@...lanox.com>, lkp@...org
Subject: Re: [drop_monitor]  98ffbd6cd2:  will-it-scale.per_thread_ops -17.5%
 regression

On Mon, Jul 29, 2019 at 05:52:13PM +0800, kernel test robot wrote:
> Greeting,
> 
> FYI, we noticed a -17.5% regression of will-it-scale.per_thread_ops due to commit:
> 
> 
> commit: 98ffbd6cd2b25fc6cbb0695e03b4fd43b5e116e6 ("[RFC PATCH net-next 10/12] drop_monitor: Add packet alert mode")
> url: https://github.com/0day-ci/linux/commits/Ido-Schimmel/drop_monitor-Capture-dropped-packets-and-metadata/20190723-135834
> 
> 
> in testcase: will-it-scale
> on test machine: 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory
> with following parameters:
> 
> 	nr_task: 100%
> 	mode: thread
> 	test: lock1
> 	cpufreq_governor: performance

Hi,

Thanks for the report. The test ('lock1') has nothing to do with the
networking subsystem, and the commit you cite does nothing unless drop
monitor is actually running in the newly introduced packet alert mode.
I assume you are not running drop monitor at all, so these results seem
very strange to me.
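
One way to sanity-check this, assuming iproute2's 'genl' tool is
available on the test machine, is to see whether drop monitor's generic
netlink family is registered at all; if it is not, the code added by
the commit cannot run:

# If the NET_DM family is absent, drop_monitor was never initialized,
# so the new packet alert mode code path is not reachable.
if genl ctrl list | grep -q NET_DM; then
        echo "drop_monitor is initialized"
else
        echo "drop_monitor is not active"
fi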

The only explanation I can think of is that the addition of 'struct
sk_buff_head' to the per-CPU variable might have somehow affected
alignment elsewhere.
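
If you want to check this on your side, one option (assuming vmlinux
images with debug info for both kernels are at hand; the file names
below are made up) is to compare the layout of drop monitor's per-CPU
structure with pahole:

# A size or alignment change here shifts per-CPU data placed after it.
pahole -C per_cpu_dm_data vmlinux.before > layout.before
pahole -C per_cpu_dm_data vmlinux.after > layout.after
diff -u layout.before layout.after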

I used your kernel config on my system and tried to run the test the
same way you did [1][2], taking measurements on vanilla net-next and
with my entire patchset applied (with some changes since the RFC), but
I did not get conclusive results [3].

If you look at the operations per second in the 'threads' column when
there are 4 tasks, you can see that the average before my patchset is
2325577, while the average after it is 2340328 (the awk one-liner after
the CSVs in [3] reproduces these numbers).

Do you see anything obviously wrong in how I ran the test? If not, how
reliable, in your experience, are these results? I found a similar
report [4] that did not make much sense either.

Thanks

[1]
#!/bin/bash

# Set the scaling governor of every online CPU to "performance",
# mirroring the 'cpufreq_governor: performance' test parameter.
for cpu_dir in /sys/devices/system/cpu/cpu[0-9]*
do
        # Skip CPUs that are offline (cpu0 usually has no 'online' file).
        online_file="$cpu_dir"/online
        [ -f "$online_file" ] && [ "$(cat "$online_file")" -eq 0 ] && continue

        file="$cpu_dir"/cpufreq/scaling_governor
        [ -f "$file" ] && echo "performance" > "$file"
done

[2]
# ./runtest.py lock1

[3]
before1.csv

tasks,processes,processes_idle,threads,threads_idle,linear
0,0,100,0,100,0
1,610132,74.98,594558,74.40,610132
2,1230153,49.95,1184090,49.95,1220264
3,1844832,24.92,1758502,25.07,1830396
4,2454858,0.20,2311086,0.18,2440528

before2.csv

tasks,processes,processes_idle,threads,threads_idle,linear
0,0,100,0,100,0
1,607417,74.92,584035,75.03,607417
2,1227674,50.02,1170271,50.05,1214834
3,1846440,24.91,1761115,25.03,1822251
4,2482559,0.23,2343761,0.20,2429668

before3.csv

tasks,processes,processes_idle,threads,threads_idle,linear
0,0,100,0,100,0
1,609516,74.96,594691,74.85,609516
2,1231126,49.82,1176170,50.07,1219032
3,1858004,24.93,1761192,25.06,1828548
4,2460096,0.29,2321886,0.20,2438064

after1.csv 

tasks,processes,processes_idle,threads,threads_idle,linear
0,0,100,0,100,0
1,623846,75.01,598565,75.01,623846
2,1237010,50.01,1163000,50.06,1247692
3,1858541,24.99,1752192,24.98,1871538
4,2477562,0.20,2338462,0.20,2495384

after2.csv

tasks,processes,processes_idle,threads,threads_idle,linear
0,0,100,0,100,0
1,624175,74.98,593229,60.28,624175
2,1237561,45.43,1168572,50.01,1248350
3,1850211,25.03,1744378,24.90,1872525
4,2481224,0.20,2335768,0.20,2496700

after3.csv

tasks,processes,processes_idle,threads,threads_idle,linear
0,0,100,0,100,0
1,617805,74.99,590419,75.02,617805
2,1230908,50.01,1158534,50.06,1235610
3,1851623,25.06,1728419,24.94,1853415
4,2470115,0.20,2346754,0.20,2471220
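
For reference, the averages quoted above can be recomputed from these
CSVs (assuming they are saved under the file names shown) with a short
awk one-liner over the 'threads' column (4th field) of the tasks=4 rows:

# Prints 2325577 for 'before' and 2340328 for 'after'.
awk -F, '$1 == 4 { sum += $4; n++ } END { print int(sum / n) }' before?.csv
awk -F, '$1 == 4 { sum += $4; n++ } END { print int(sum / n) }' after?.csv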

[4] https://lkml.org/lkml/2019/2/19/351
