Message-ID: <1423461513.5968.56.camel@intel.com>
Date:	Mon, 09 Feb 2015 13:58:33 +0800
From:	Huang Ying <ying.huang@...el.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Ingo Molnar <mingo@...nel.org>,
	LKML <linux-kernel@...r.kernel.org>, LKP ML <lkp@...org>
Subject: [LKP] [sched/core] 9edfbfed3f5: +88.2%
 hackbench.time.involuntary_context_switches

FYI, we noticed the following changes on

git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
commit 9edfbfed3f544a7830d99b341f0c175995a02950 ("sched/core: Rework rq->clock update skips")


testbox/testcase/testparams: xps2/hackbench/performance-1600%-process-socket

cebde6d681aa45f9  9edfbfed3f544a7830d99b341f  
----------------  --------------------------  
         %stddev     %change         %stddev
             \          |                \  
   1839273 ±  6%     +88.2%    3462337 ±  4%  hackbench.time.involuntary_context_switches
  41965851 ±  5%      +5.6%   44307403 ±  1%  hackbench.time.voluntary_context_switches
       388 ± 39%     -58.6%        160 ± 10%  sched_debug.cfs_rq[1]:/.tg_load_contrib
     12957 ± 14%     -60.5%       5117 ± 11%  sched_debug.cfs_rq[2]:/.tg_load_avg
     30505 ± 14%     -57.7%      12905 ±  6%  sched_debug.cfs_rq[3]:/.tg_load_avg
      2790 ± 24%     -65.4%        964 ± 32%  sched_debug.cfs_rq[3]:/.blocked_load_avg
      2915 ± 23%     -62.2%       1101 ± 29%  sched_debug.cfs_rq[3]:/.tg_load_contrib
   1839273 ±  6%     +88.2%    3462337 ±  4%  time.involuntary_context_switches
      1474 ± 28%     -61.7%        565 ± 43%  sched_debug.cfs_rq[2]:/.tg_load_contrib
     11830 ± 15%     +63.0%      19285 ± 11%  sched_debug.cpu#4.sched_goidle
     19319 ± 29%     +91.1%      36913 ±  7%  sched_debug.cpu#3.sched_goidle
      5899 ± 31%     -35.6%       3801 ± 11%  sched_debug.cfs_rq[4]:/.blocked_load_avg
      5999 ± 30%     -34.5%       3929 ± 11%  sched_debug.cfs_rq[4]:/.tg_load_contrib
     37884 ± 13%     -33.5%      25207 ±  7%  sched_debug.cfs_rq[4]:/.tg_load_avg
    229547 ±  5%     +47.9%     339519 ±  5%  cpuidle.C1-NHM.usage
     35712 ±  3%     +31.7%      47036 ±  9%  cpuidle.C3-NHM.usage
      5010 ±  9%     -29.0%       3556 ± 20%  sched_debug.cfs_rq[6]:/.blocked_load_avg
      5139 ±  9%     -28.2%       3690 ± 19%  sched_debug.cfs_rq[6]:/.tg_load_contrib
     49568 ±  6%     +24.8%      61867 ±  7%  sched_debug.cpu#1.sched_goidle
     26369 ± 35%     -42.0%      15289 ± 29%  cpuidle.C6-NHM.usage
        18 ± 16%     +36.5%         25 ±  7%  sched_debug.cpu#4.nr_running
      1.41 ± 12%     -19.3%       1.14 ± 13%  perf-profile.cpu-cycles.sock_wfree.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb
        25 ± 15%     +28.7%         32 ±  9%  sched_debug.cpu#3.nr_running
      1.63 ± 11%     -18.0%       1.34 ± 12%  perf-profile.cpu-cycles.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb.unix_stream_recvmsg
      0.57 ±  8%      +9.6%       0.62 ±  5%  turbostat.CPU%c1
       148 ± 11%     -16.7%        123 ±  7%  sched_debug.cfs_rq[1]:/.load
       109 ±  6%     +17.1%        128 ±  6%  sched_debug.cpu#6.cpu_load[0]
      2.41 ±  8%     -13.3%       2.09 ± 11%  perf-profile.cpu-cycles.skb_release_head_state.skb_release_all.consume_skb.unix_stream_recvmsg.sock_aio_read
       147 ± 12%     -16.4%        123 ±  7%  sched_debug.cpu#1.load
       111 ±  5%     +15.4%        129 ±  5%  sched_debug.cpu#6.cpu_load[2]
       110 ±  5%     +14.9%        127 ±  5%  sched_debug.cfs_rq[6]:/.runnable_load_avg
       112 ±  5%     +14.5%        128 ±  4%  sched_debug.cpu#6.cpu_load[3]
       113 ±  5%     +13.2%        128 ±  3%  sched_debug.cpu#6.cpu_load[4]
    789953 ±  2%     -10.8%     704528 ±  4%  sched_debug.cpu#3.avg_idle
     15471 ±  5%      -7.7%      14278 ±  2%  sched_debug.cpu#5.curr->pid
   2675106 ± 10%     +16.2%    3109411 ±  1%  sched_debug.cpu#4.nr_switches
   2675140 ± 10%     +16.2%    3109440 ±  1%  sched_debug.cpu#4.sched_count
    155201 ±  5%     +14.6%     177901 ±  3%  softirqs.RCU
      8.64 ±  6%      -9.6%       7.82 ±  5%  perf-profile.cpu-cycles.skb_release_all.consume_skb.unix_stream_recvmsg.sock_aio_read.sock_aio_read
   2658351 ± 11%     +13.7%    3021564 ±  2%  sched_debug.cpu#5.sched_count
   2658326 ± 11%     +13.7%    3021539 ±  2%  sched_debug.cpu#5.nr_switches
     71443 ±  5%      +9.9%      78486 ±  0%  vmstat.system.cs
      8209 ±  5%      +7.3%       8805 ±  0%  vmstat.system.in

xps2: Nehalem
Memory: 4G

To reproduce:

        apt-get install ruby ruby-oj
        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/setup-local job.yaml # the job file attached in this email
        bin/run-local   job.yaml
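
For context, the involuntary_context_switches totals above correspond to the nonvoluntary_ctxt_switches counter that Linux exposes per process in /proc/<pid>/status. A minimal sketch of parsing those counters (not part of the LKP tooling; the sample text reuses the base-commit totals from this report):

```python
# Minimal sketch (not part of the LKP tooling): parse the per-process
# context-switch counters from /proc/<pid>/status-style text.

def parse_ctxt_switches(status_text):
    """Return {counter_name: count} for the *_ctxt_switches lines."""
    counts = {}
    for line in status_text.splitlines():
        if line.startswith(("voluntary_ctxt_switches",
                            "nonvoluntary_ctxt_switches")):
            key, _, val = line.partition(":")
            counts[key] = int(val)   # int() tolerates the tab padding
    return counts

# Sample built from the base-commit (cebde6d681aa45f9) totals above.
SAMPLE = (
    "Name:\thackbench\n"
    "voluntary_ctxt_switches:\t41965851\n"
    "nonvoluntary_ctxt_switches:\t1839273\n"
)

counts = parse_ctxt_switches(SAMPLE)
print(counts)
# {'voluntary_ctxt_switches': 41965851, 'nonvoluntary_ctxt_switches': 1839273}
```

On a live system the same parser can be pointed at open("/proc/self/status").read() to watch a single process's counters before and after a run.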


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Huang, Ying


View attachment "reproduce" of type "text/plain" (51 bytes)

_______________________________________________
LKP mailing list
LKP@...ux.intel.com

Download attachment "job.yaml" of type "application/x-yaml" (1685 bytes)
