Date:   Wed, 8 Jun 2022 08:48:22 +0200
From:   Willy Tarreau <w@....eu>
To:     kernel test robot <oliver.sang@...el.com>
Cc:     Jakub Kicinski <kuba@...nel.org>,
        Moshe Kol <moshe.kol@...l.huji.ac.il>,
        Yossi Gilad <yossi.gilad@...l.huji.ac.il>,
        Amit Klein <aksecurity@...il.com>,
        Eric Dumazet <edumazet@...gle.com>,
        LKML <linux-kernel@...r.kernel.org>, netdev@...r.kernel.org,
        lkp@...ts.01.org, lkp@...el.com, ying.huang@...el.com,
        feng.tang@...el.com, zhengjun.xing@...ux.intel.com,
        fengwei.yin@...el.com
Subject: Re: [tcp]  e926147618:  stress-ng.icmp-flood.ops_per_sec -8.7%
 regression

On Wed, Jun 08, 2022 at 02:08:02PM +0800, kernel test robot wrote:
> 
> 
> Greetings,
> 
> FYI, we noticed a -8.7% regression of stress-ng.icmp-flood.ops_per_sec due to commit:
> 
> 
> commit: e9261476184be1abd486c9434164b2acbe0ed6c2 ("tcp: dynamically allocate the perturb table used by source ports")
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
> 
> in testcase: stress-ng
> on test machine: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz with 128G memory
> with following parameters:
> 
> 	nr_threads: 100%
> 	testtime: 60s
> 	class: network
> 	test: icmp-flood
> 	cpufreq_governor: performance
> 	ucode: 0xd000331
> 
> If you fix the issue, kindly add the following tag:
> Reported-by: kernel test robot <oliver.sang@...el.com>
> 
> 
> Details are as below:
> -------------------------------------------------------------------------------------------------->
> 
> 
> To reproduce:
> 
>         git clone https://github.com/intel/lkp-tests.git
>         cd lkp-tests
>         sudo bin/lkp install job.yaml           # job file is attached in this email
>         bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
>         sudo bin/lkp run generated-yaml-file
> 
>         # if you come across any failure that blocks the test,
>         # please remove the ~/.lkp and /lkp directories to run from a clean state.
> 
> =========================================================================================
> class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
>   network/gcc-11/performance/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/lkp-icl-2sp6/icmp-flood/stress-ng/60s/0xd000331
> 
> commit: 
>   ca7af04025 ("tcp: add small random increments to the source port")
>   e926147618 ("tcp: dynamically allocate the perturb table used by source ports")
> 
> ca7af0402550f9a0 e9261476184be1abd486c943416 
> ---------------- --------------------------- 
>          %stddev     %change         %stddev
>              \          |                \  
>  5.847e+08            -8.7%  5.337e+08        stress-ng.icmp-flood.ops
>    9745088            -8.7%    8894785        stress-ng.icmp-flood.ops_per_sec
(...)

I don't really know what to think about this, to be honest. We
anticipated a possible very slight slowdown from moving the table from
static to dynamic allocation, though none was observed at all during
extensive tests on real hardware. But keeping such a large table static
was not acceptable anyway.
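
For readers following along, the change is essentially of this shape (a
minimal sketch of the idea, not the exact upstream diff; see commit
e9261476184b for the real change):

	/* Before: a static array, whose size is baked into the image: */
	static u32 table_perturb[1 << INET_TABLE_PERTURB_SHIFT];

	/* After: only the pointer stays static; the table itself is
	 * allocated once at boot, so it can be enlarged later without
	 * bloating the kernel image:
	 */
	static u32 *table_perturb;

	void __init inet_hashinfo2_init(/* ... */)
	{
		/* ... existing hash table setup ... */
		table_perturb = kmalloc_array(1U << INET_TABLE_PERTURB_SHIFT,
					      sizeof(*table_perturb),
					      GFP_KERNEL | __GFP_ZERO);
		if (!table_perturb)
			panic("TCP: failed to alloc table_perturb");
	}

The only plausible per-lookup cost is presumably the extra pointer
dereference when the table is indexed, which is why we only expected a
tiny slowdown at worst.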

>     102391 ±  2%      -8.1%      94064        stress-ng.time.involuntary_context_switches
>       3069 ±  2%      -9.6%       2775 ±  4%  stress-ng.time.percent_of_cpu_this_job_got
>       1857 ±  2%      -9.3%       1685 ±  4%  stress-ng.time.system_time
>      47.67 ±  4%     -20.9%      37.70 ±  5%  stress-ng.time.user_time

I'm not sure what to make of these variations, nor how they might be related.

Thanks,
Willy
