Message-ID: <20140704035455.GA5524@localhost>
Date:	Fri, 4 Jul 2014 11:54:55 +0800
From:	Fengguang Wu <fengguang.wu@...el.com>
To:	Jet Chen <jet.chen@...el.com>
Cc:	Aaron Tomlin <atomlin@...hat.com>,
	Dave Hansen <dave.hansen@...el.com>, LKP <lkp@...org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [kernel/watchdog.c] ed235875e2c: -14.2% will-it-scale.scalability

Hi all,

Sorry, please ignore this report: it seems there is no obvious
relationship between the code change and the regression.

Thanks,
Fengguang

On Fri, Jul 04, 2014 at 11:45:31AM +0800, Jet Chen wrote:
> Hi Aaron,
> 
> FYI, we noticed the below changes on
> 
> commit ed235875e2ca983197831337a986f0517074e1a0 ("kernel/watchdog.c: print traces for all cpus on lockup detection")
> 
> test case: lkp-snb01/will-it-scale/signal1
> 
> f3aca3d09525f87  ed235875e2ca983197831337a
> ---------------  -------------------------
>       0.12 ~ 0%     -14.2%       0.10 ~ 0%  TOTAL will-it-scale.scalability
>     506146 ~ 0%      -4.4%     484004 ~ 0%  TOTAL will-it-scale.per_process_ops
>      12193 ~ 4%     +12.6%      13726 ~ 6%  TOTAL slabinfo.kmalloc-256.active_objs
>      12921 ~ 4%     +12.3%      14516 ~ 5%  TOTAL slabinfo.kmalloc-256.num_objs
>     123094 ~ 3%      -6.5%     115117 ~ 3%  TOTAL meminfo.Committed_AS
> 
> Legend:
> 	~XX%    - stddev percent
> 	[+-]XX% - change percent
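
A quick sanity check of the change-percent column (a sketch, assuming the
reported value is simply (new - old) / old * 100, applied to the
will-it-scale.per_process_ops row above):

    # 506146 -> 484004 per_process_ops; prints roughly -4.4%, matching the table.
    awk 'BEGIN { old = 506146; new = 484004; printf "%+.1f%%\n", (new - old) / old * 100 }'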
> 
> 
>                             will-it-scale.per_process_ops
> 
>   515000 ++-----------------------------------------------------------------+
>          |  .*.*.*.*.     .*                                                |
>   510000 *+*         *.*.*  + .*.                               *.          |
>   505000 ++                  *   *.*.*.*. .* .*.*.*. .*.*.*.   +  *.*.*.*.*.*
>          |                               *  *       *       *.*             |
>   500000 ++                                                                 |
>          |                                                                  |
>   495000 ++                                                                 |
>          |                                                                  |
>   490000 ++                                                                 |
>   485000 ++            O O             O O  O O                             |
>          |         O O     O O O O O O     O    O                           |
>   480000 O+O O O O                                O                         |
>          |                                                                  |
>   475000 ++-----------------------------------------------------------------+
> 
> 
> 	[*] bisect-good sample
> 	[O] bisect-bad  sample
> 
> 
> Disclaimer:
> Results have been estimated based on internal Intel analysis and are provided
> for informational purposes only. Any difference in system hardware or software
> design or configuration may affect actual performance.
> 
> Thanks,
> Jet
> 

> echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu10/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu11/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu12/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu13/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu14/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu15/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu16/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu17/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu18/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu19/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu20/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu21/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu22/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu23/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu24/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu25/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu26/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu27/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu28/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu29/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu30/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu31/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu8/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu9/cpufreq/scaling_governor
> ./runtest.py signal1 25 1 8 16 24 32
> 
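For reference, a minimal sketch (assuming the standard cpufreq sysfs layout
used above) that sets the performance governor on every CPU in a loop rather
than enumerating each one by hand:

    # Same effect as the per-CPU echo lines above, driven by a glob (run as root).
    for g in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
            echo performance > "$g"
    done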

