Message-ID: <20150209083120.GL5029@twins.programming.kicks-ass.net>
Date:	Mon, 9 Feb 2015 09:31:20 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Huang Ying <ying.huang@...el.com>
Cc:	Ingo Molnar <mingo@...nel.org>,
	LKML <linux-kernel@...r.kernel.org>, LKP ML <lkp@...org>
Subject: Re: [LKP] [sched/core] 9edfbfed3f5: +88.2%
 hackbench.time.involuntary_context_switches

On Mon, Feb 09, 2015 at 01:58:33PM +0800, Huang Ying wrote:
> FYI, we noticed the below changes on
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
> commit 9edfbfed3f544a7830d99b341f0c175995a02950 ("sched/core: Rework rq->clock update skips")
> 
> 
> testbox/testcase/testparams: xps2/hackbench/performance-1600%-process-socket
> 
> cebde6d681aa45f9  9edfbfed3f544a7830d99b341f  
> ----------------  --------------------------  
>          %stddev     %change         %stddev
>              \          |                \  
>    1839273 ±  6%     +88.2%    3462337 ±  4%  hackbench.time.involuntary_context_switches
>   41965851 ±  5%      +5.6%   44307403 ±  1%  hackbench.time.voluntary_context_switches
>        388 ± 39%     -58.6%        160 ± 10%  sched_debug.cfs_rq[1]:/.tg_load_contrib
>      12957 ± 14%     -60.5%       5117 ± 11%  sched_debug.cfs_rq[2]:/.tg_load_avg
>      30505 ± 14%     -57.7%      12905 ±  6%  sched_debug.cfs_rq[3]:/.tg_load_avg
>       2790 ± 24%     -65.4%        964 ± 32%  sched_debug.cfs_rq[3]:/.blocked_load_avg
>       2915 ± 23%     -62.2%       1101 ± 29%  sched_debug.cfs_rq[3]:/.tg_load_contrib
>    1839273 ±  6%     +88.2%    3462337 ±  4%  time.involuntary_context_switches
>       1474 ± 28%     -61.7%        565 ± 43%  sched_debug.cfs_rq[2]:/.tg_load_contrib
>      11830 ± 15%     +63.0%      19285 ± 11%  sched_debug.cpu#4.sched_goidle
>      19319 ± 29%     +91.1%      36913 ±  7%  sched_debug.cpu#3.sched_goidle
>       5899 ± 31%     -35.6%       3801 ± 11%  sched_debug.cfs_rq[4]:/.blocked_load_avg
>       5999 ± 30%     -34.5%       3929 ± 11%  sched_debug.cfs_rq[4]:/.tg_load_contrib
>      37884 ± 13%     -33.5%      25207 ±  7%  sched_debug.cfs_rq[4]:/.tg_load_avg
>     229547 ±  5%     +47.9%     339519 ±  5%  cpuidle.C1-NHM.usage
>      35712 ±  3%     +31.7%      47036 ±  9%  cpuidle.C3-NHM.usage
>       5010 ±  9%     -29.0%       3556 ± 20%  sched_debug.cfs_rq[6]:/.blocked_load_avg
>       5139 ±  9%     -28.2%       3690 ± 19%  sched_debug.cfs_rq[6]:/.tg_load_contrib
>      49568 ±  6%     +24.8%      61867 ±  7%  sched_debug.cpu#1.sched_goidle
>      26369 ± 35%     -42.0%      15289 ± 29%  cpuidle.C6-NHM.usage
>         18 ± 16%     +36.5%         25 ±  7%  sched_debug.cpu#4.nr_running
>       1.41 ± 12%     -19.3%       1.14 ± 13%  perf-profile.cpu-cycles.sock_wfree.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb
>         25 ± 15%     +28.7%         32 ±  9%  sched_debug.cpu#3.nr_running
>       1.63 ± 11%     -18.0%       1.34 ± 12%  perf-profile.cpu-cycles.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb.unix_stream_recvmsg
>       0.57 ±  8%      +9.6%       0.62 ±  5%  turbostat.CPU%c1
>        148 ± 11%     -16.7%        123 ±  7%  sched_debug.cfs_rq[1]:/.load
>        109 ±  6%     +17.1%        128 ±  6%  sched_debug.cpu#6.cpu_load[0]
>       2.41 ±  8%     -13.3%       2.09 ± 11%  perf-profile.cpu-cycles.skb_release_head_state.skb_release_all.consume_skb.unix_stream_recvmsg.sock_aio_read
>        147 ± 12%     -16.4%        123 ±  7%  sched_debug.cpu#1.load
>        111 ±  5%     +15.4%        129 ±  5%  sched_debug.cpu#6.cpu_load[2]
>        110 ±  5%     +14.9%        127 ±  5%  sched_debug.cfs_rq[6]:/.runnable_load_avg
>        112 ±  5%     +14.5%        128 ±  4%  sched_debug.cpu#6.cpu_load[3]
>        113 ±  5%     +13.2%        128 ±  3%  sched_debug.cpu#6.cpu_load[4]
>     789953 ±  2%     -10.8%     704528 ±  4%  sched_debug.cpu#3.avg_idle
>      15471 ±  5%      -7.7%      14278 ±  2%  sched_debug.cpu#5.curr->pid
>    2675106 ± 10%     +16.2%    3109411 ±  1%  sched_debug.cpu#4.nr_switches
>    2675140 ± 10%     +16.2%    3109440 ±  1%  sched_debug.cpu#4.sched_count
>     155201 ±  5%     +14.6%     177901 ±  3%  softirqs.RCU
>       8.64 ±  6%      -9.6%       7.82 ±  5%  perf-profile.cpu-cycles.skb_release_all.consume_skb.unix_stream_recvmsg.sock_aio_read.sock_aio_read
>    2658351 ± 11%     +13.7%    3021564 ±  2%  sched_debug.cpu#5.sched_count
>    2658326 ± 11%     +13.7%    3021539 ±  2%  sched_debug.cpu#5.nr_switches
>      71443 ±  5%      +9.9%      78486 ±  0%  vmstat.system.cs
>       8209 ±  5%      +7.3%       8805 ±  0%  vmstat.system.in
> 

OK, so the interesting number is total runtime; I cannot find it.
Therefore I cannot say what, if anything, changed. This is just a bunch
of random numbers afaict.

> To reproduce:
> 
>         apt-get install ruby ruby-oj

Most likely, requiring ruby just means I will not be reproducing.
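
For what it's worth, the test name "performance-1600%-process-socket" can
be translated into a plain hackbench invocation without the LKP harness.
The mapping below is an assumption (not taken from the report): hackbench
from rt-tests defaults to fork+sockets with 20 fd pairs, i.e. 40 tasks per
group, and the 1600% is read as "total tasks = 16x the online CPUs".

```shell
# Sketch: derive a hackbench command line from the LKP-style test name.
# ASSUMPTION: "1600%" scales task count to 16x the CPU count; hackbench
# defaults to process mode over sockets with 40 tasks per group.
hackbench_cmd() {
    cpus=$1
    tasks=$(( cpus * 1600 / 100 ))   # 1600% of online CPUs
    groups=$(( tasks / 40 ))         # hackbench runs 40 tasks per group
    if [ "$groups" -lt 1 ]; then
        groups=1
    fi
    # /usr/bin/time -v reports voluntary/involuntary context switches,
    # which is where the +88.2% involuntary figure above comes from.
    echo "/usr/bin/time -v hackbench -g $groups -l 1000"
}

# Example: print the command for the current machine.
hackbench_cmd "$(nproc)"
```

The `-l 1000` loop count is likewise an assumption; the report does not
state the loop count LKP used.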

