Date:	Thu, 26 Feb 2015 11:10:54 +0800
From:	Huang Ying <ying.huang@...el.com>
To:	Michal Hocko <mhocko@...e.cz>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>, LKP ML <lkp@...org>
Subject: [LKP] [vmstat] ba4877b9ca5: not primary result change, -62.5%
 will-it-scale.time.involuntary_context_switches

FYI, we noticed the following changes on

git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit ba4877b9ca51f80b5d30f304a46762f0509e1635 ("vmstat: do not use deferrable delayed work for vmstat_update")
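For readers who have not looked at the commit itself: it switches vmstat's per-cpu work item from a deferrable delayed work to an ordinary one. A deferrable timer is not serviced while its CPU is idle, so vmstat_update() could be deferred indefinitely and leave per-cpu counters unfolded; an ordinary delayed work fires (and wakes the CPU) on schedule, which is consistent with the cpuidle and context-switch shifts below. Roughly, the change in mm/vmstat.c amounts to (paraphrased from the commit subject; see the commit for the exact hunk):

```diff
 /* mm/vmstat.c -- illustrative, not the verbatim diff */
-	INIT_DEFERRABLE_WORK(this_cpu_ptr(&vmstat_work), vmstat_update);
+	INIT_DELAYED_WORK(this_cpu_ptr(&vmstat_work), vmstat_update);
```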

testbox/testcase/testparams: wsm/will-it-scale/performance-malloc1

9c0415eb8cbf0c8f  ba4877b9ca51f80b5d30f304a4  
----------------  --------------------------  
         %stddev     %change         %stddev
             \          |                \  
      1194 ±  0%     -62.5%        447 ±  7%  will-it-scale.time.involuntary_context_switches
       246 ±  0%      +2.3%        252 ±  1%  will-it-scale.time.system_time
  18001.54 ± 22%    -100.0%       0.00 ±  0%  sched_debug.cfs_rq[3]:/.MIN_vruntime
  18001.54 ± 22%    -100.0%       0.00 ±  0%  sched_debug.cfs_rq[3]:/.max_vruntime
   1097152 ±  3%     -82.4%     192865 ±  1%  cpuidle.C6-NHM.usage
     99560 ± 16%     +57.7%     157029 ± 23%  sched_debug.cfs_rq[8]:/.spread0
     27671 ± 23%     -65.9%       9439 ±  8%  sched_debug.cfs_rq[5]:/.exec_clock
      1194 ±  0%     -62.5%        447 ±  7%  time.involuntary_context_switches
    247334 ± 20%     -61.2%      96086 ±  3%  sched_debug.cfs_rq[5]:/.min_vruntime
     20417 ± 35%     -48.7%      10473 ±  8%  sched_debug.cfs_rq[3]:/.exec_clock
    104076 ± 38%     +73.9%     181000 ± 30%  sched_debug.cpu#2.ttwu_local
    180071 ± 29%     -41.3%     105641 ± 10%  sched_debug.cfs_rq[3]:/.min_vruntime
        34 ± 14%     -48.6%         17 ± 10%  sched_debug.cpu#5.cpu_load[4]
     43629 ± 18%     -32.7%      29370 ± 13%  sched_debug.cpu#3.nr_load_updates
     42653 ± 14%     -42.6%      24488 ± 14%  sched_debug.cpu#5.nr_load_updates
     13660 ±  9%     -41.4%       8010 ±  3%  sched_debug.cfs_rq[5]:/.avg->runnable_avg_sum
       296 ±  9%     -41.2%        174 ±  3%  sched_debug.cfs_rq[5]:/.tg_runnable_contrib
    205846 ±  6%     -11.2%     182783 ±  6%  sched_debug.cpu#7.sched_count
        37 ± 10%     -38.4%         23 ±  8%  sched_debug.cpu#5.cpu_load[3]
      1378 ± 12%     -20.6%       1094 ±  4%  sched_debug.cpu#11.ttwu_local
    205691 ±  6%     -11.2%     182623 ±  6%  sched_debug.cpu#7.nr_switches
    102423 ±  6%     -11.2%      90915 ±  6%  sched_debug.cpu#7.sched_goidle
        25 ± 21%     +41.6%         35 ± 17%  sched_debug.cpu#3.cpu_load[0]
        68 ± 16%     -29.3%         48 ±  9%  sched_debug.cpu#8.cpu_load[0]
        32 ± 14%     +54.2%         50 ±  6%  sched_debug.cpu#11.cpu_load[4]
       507 ± 10%     -30.0%        355 ±  3%  sched_debug.cfs_rq[10]:/.blocked_load_avg
     39084 ± 16%     +48.0%      57862 ±  2%  sched_debug.cfs_rq[11]:/.exec_clock
  10022712 ±  9%     -28.8%    7139491 ± 13%  cpuidle.C1-NHM.time
    341246 ± 14%     +47.3%     502560 ±  6%  sched_debug.cfs_rq[11]:/.min_vruntime
       562 ±  9%     -28.8%        400 ±  4%  sched_debug.cfs_rq[10]:/.tg_load_contrib
        66 ±  7%     -20.8%         52 ± 14%  sched_debug.cfs_rq[8]:/.runnable_load_avg
        36 ± 18%     +45.8%         52 ±  6%  sched_debug.cpu#11.cpu_load[3]
     43079 ±  1%      +8.0%      46513 ±  2%  softirqs.RCU
        43 ±  9%     -25.6%         32 ± 10%  sched_debug.cpu#5.cpu_load[2]
   1745173 ±  4%     +43.2%    2499517 ±  3%  cpuidle.C3-NHM.usage
        44 ± 18%     +25.3%         55 ± 10%  sched_debug.cpu#9.cpu_load[2]
     64453 ±  8%     +27.0%      81824 ±  3%  sched_debug.cpu#11.nr_load_updates
     58719 ±  7%     -14.3%      50299 ±  9%  sched_debug.cpu#0.ttwu_count
        40 ± 16%     +24.7%         50 ±  3%  sched_debug.cpu#9.cpu_load[4]
        42 ± 16%     +26.2%         53 ±  5%  sched_debug.cpu#9.cpu_load[3]
     61887 ±  4%     -16.2%      51890 ± 11%  sched_debug.cpu#0.sched_goidle
    125652 ±  4%     -16.1%     105434 ± 10%  sched_debug.cpu#0.nr_switches
    125769 ±  4%     -16.1%     105564 ± 10%  sched_debug.cpu#0.sched_count
     16164 ±  7%     +35.2%      21852 ±  1%  sched_debug.cfs_rq[11]:/.avg->runnable_avg_sum
       352 ±  7%     +34.9%        475 ±  1%  sched_debug.cfs_rq[11]:/.tg_runnable_contrib
      1442 ± 11%     +20.9%       1742 ±  3%  sched_debug.cpu#11.curr->pid
 7.243e+08 ±  1%     +20.0%   8.69e+08 ±  3%  cpuidle.C3-NHM.time
    172138 ±  5%     +11.9%     192649 ±  6%  sched_debug.cpu#9.sched_count
     85576 ±  5%     +12.0%      95879 ±  6%  sched_debug.cpu#9.sched_goidle
     91826 ±  0%     +13.0%     103784 ± 11%  sched_debug.cfs_rq[6]:/.exec_clock
     46977 ± 15%     +21.8%      57227 ±  2%  sched_debug.cfs_rq[9]:/.exec_clock
    115370 ±  1%     +11.5%     128602 ±  8%  sched_debug.cpu#6.nr_load_updates
     67629 ± 10%     +19.7%      80928 ±  0%  sched_debug.cpu#9.nr_load_updates
      0.92 ±  4%      +9.2%       1.00 ±  3%  perf-profile.cpu-cycles.__vma_link_rb.vma_link.mmap_region.do_mmap_pgoff.vm_mmap_pgoff
      0.89 ±  3%      +9.5%       0.98 ±  5%  perf-profile.cpu-cycles._cond_resched.unmap_single_vma.unmap_vmas.unmap_region.do_munmap
     17.84 ±  3%      -7.2%      16.56 ±  1%  turbostat.CPU%c6
     10197 ±  0%      +2.5%      10455 ±  1%  vmstat.system.in

testbox/testcase/testparams: lkp-sb03/will-it-scale/malloc1

9c0415eb8cbf0c8f  ba4877b9ca51f80b5d30f304a4  
----------------  --------------------------  
      2585 ±  2%     -69.2%        797 ±  8%  will-it-scale.time.involuntary_context_switches
     78369 ± 36%    +156.1%     200708 ± 19%  cpuidle.C3-SNB.usage
     95820 ± 11%     +60.9%     154175 ± 19%  sched_debug.cfs_rq[28]:/.spread0
     95549 ± 10%     +61.3%     154133 ± 20%  sched_debug.cfs_rq[26]:/.spread0
     95600 ± 10%     +60.3%     153220 ± 19%  sched_debug.cfs_rq[29]:/.spread0
     97285 ±  8%     +57.9%     153634 ± 19%  sched_debug.cfs_rq[31]:/.spread0
    254274 ± 29%     +39.0%     353345 ±  7%  sched_debug.cfs_rq[20]:/.spread0
    297854 ±  3%     +18.5%     353038 ±  8%  sched_debug.cfs_rq[22]:/.spread0
    298185 ±  2%     +18.1%     352124 ±  8%  sched_debug.cfs_rq[17]:/.spread0
    296875 ±  3%     +19.4%     354400 ±  7%  sched_debug.cfs_rq[18]:/.spread0
    297800 ±  3%     +18.5%     352927 ±  7%  sched_debug.cfs_rq[21]:/.spread0
      0.00 ±  8%    +142.4%       0.00 ± 33%  sched_debug.rt_rq[8]:/.rt_time
      2585 ±  2%     -69.2%        797 ±  8%  time.involuntary_context_switches
  29637066 ± 30%    +101.3%   59653820 ± 24%  cpuidle.C3-SNB.time
        40 ± 43%    +105.5%         83 ± 14%  sched_debug.cpu#0.cpu_load[4]
        11 ± 26%     +91.5%         22 ±  4%  sched_debug.cfs_rq[7]:/.runnable_load_avg
        39 ± 40%    +104.5%         79 ± 13%  sched_debug.cpu#0.cpu_load[3]
       531 ± 10%     +75.1%        930 ± 44%  sched_debug.cpu#26.ttwu_local
        36 ± 34%     +91.1%         69 ± 12%  sched_debug.cpu#0.cpu_load[2]
     95262 ± 11%     +60.9%     153293 ± 18%  sched_debug.cfs_rq[27]:/.spread0
       120 ± 19%     -53.7%         55 ± 42%  sched_debug.cfs_rq[17]:/.tg_load_contrib
    278957 ± 26%     +57.1%     438311 ± 17%  cpuidle.C1E-SNB.usage
        29 ± 30%     +62.7%         48 ± 18%  sched_debug.cfs_rq[0]:/.load
        33 ± 27%     +66.7%         56 ± 10%  sched_debug.cpu#0.cpu_load[1]
        68 ± 23%     -32.2%         46 ± 18%  sched_debug.cpu#16.load
       295 ±  9%     +46.9%        434 ± 28%  sched_debug.cpu#17.ttwu_local
        16 ± 41%     +95.3%         31 ± 36%  sched_debug.cpu#7.load
        42 ± 20%     -32.2%         29 ± 16%  sched_debug.cpu#21.cpu_load[0]
     50555 ± 17%     -30.4%      35165 ±  3%  sched_debug.cpu#26.sched_count
        19 ± 25%     -24.7%         14 ± 14%  sched_debug.cpu#29.cpu_load[1]
     24874 ± 18%     -30.9%      17181 ±  5%  sched_debug.cpu#26.sched_goidle
     50298 ± 17%     -30.3%      35047 ±  3%  sched_debug.cpu#26.nr_switches
  34788152 ± 26%     +49.5%   52019925 ± 15%  cpuidle.C1E-SNB.time
         8 ± 37%     +87.5%         15 ± 12%  sched_debug.cpu#8.cpu_load[2]
     93498 ±  4%     +11.4%     104199 ±  7%  softirqs.RCU
        28 ± 24%     +44.2%         40 ± 12%  sched_debug.cfs_rq[0]:/.runnable_load_avg
      3508 ±  5%     +21.1%       4247 ± 11%  numa-vmstat.node1.nr_anon_pages
     14073 ±  6%     +20.8%      16993 ± 11%  numa-meminfo.node1.AnonPages
         5 ± 15%     +45.5%          8 ±  8%  sched_debug.cpu#8.cpu_load[4]
      1651 ± 16%     +54.6%       2554 ± 29%  sched_debug.cpu#1.ttwu_local
        35 ± 28%     +36.9%         48 ± 17%  sched_debug.cpu#0.cpu_load[0]
       173 ± 12%     -17.7%        142 ±  4%  sched_debug.cfs_rq[14]:/.tg_runnable_contrib
     25918 ± 19%     -26.8%      18974 ±  2%  sched_debug.cpu#26.ttwu_count
      8010 ± 12%     -17.8%       6582 ±  4%  sched_debug.cfs_rq[14]:/.avg->runnable_avg_sum
         6 ± 25%     +65.4%         10 ± 12%  sched_debug.cpu#8.cpu_load[3]
     15670 ± 10%     +14.3%      17912 ±  9%  numa-vmstat.node1.numa_other
    297389 ±  3%     +22.0%     362854 ± 11%  sched_debug.cfs_rq[23]:/.spread0
    297771 ±  3%     +18.8%     353825 ±  8%  sched_debug.cfs_rq[19]:/.spread0
      6713 ±  3%     +10.3%       7405 ±  4%  sched_debug.cfs_rq[11]:/.avg->runnable_avg_sum
       145 ±  3%     +10.1%        160 ±  4%  sched_debug.cfs_rq[11]:/.tg_runnable_contrib
      2566 ±  7%      -9.6%       2319 ±  5%  sched_debug.cpu#21.curr->pid
      4694 ± 10%     +14.4%       5368 ±  6%  sched_debug.cpu#0.ttwu_local
        37 ±  8%     -19.9%         30 ± 14%  sched_debug.cpu#21.cpu_load[1]
     33072 ± 10%     -19.5%      26612 ±  9%  sched_debug.cpu#11.nr_switches
     16783 ±  8%     -20.1%      13407 ± 14%  numa-meminfo.node0.AnonPages
      4198 ±  7%     -19.9%       3365 ± 14%  numa-vmstat.node0.nr_anon_pages
      3458 ±  7%      -9.8%       3120 ±  1%  sched_debug.cfs_rq[30]:/.tg_load_avg
      3451 ±  7%      -9.4%       3126 ±  2%  sched_debug.cfs_rq[31]:/.tg_load_avg
     23550 ±  1%     -25.1%      17646 ± 19%  sched_debug.cpu#28.sched_goidle
      3468 ±  7%      -9.1%       3154 ±  1%  sched_debug.cfs_rq[29]:/.tg_load_avg
      1493 ± 11%     +22.2%       1823 ±  8%  sched_debug.cpu#2.curr->pid
     38654 ±  6%     -10.1%      34735 ±  4%  sched_debug.cpu#14.nr_load_updates
     16449 ±  8%     -15.7%      13867 ±  8%  sched_debug.cpu#11.ttwu_count
     47593 ±  1%     -23.4%      36466 ± 21%  sched_debug.cpu#28.nr_switches
      6164 ±  1%      +8.5%       6687 ±  4%  sched_debug.cfs_rq[12]:/.exec_clock

testbox/testcase/testparams: lkp-sbx04/will-it-scale/performance-malloc1

9c0415eb8cbf0c8f  ba4877b9ca51f80b5d30f304a4  
----------------  --------------------------  
      4389 ±  2%     -66.0%       1494 ±  0%  will-it-scale.time.involuntary_context_switches
     37594 ± 32%    +542.8%     241666 ±  9%  cpuidle.C3-SNB.usage
        12 ± 38%     -60.4%          4 ± 27%  sched_debug.cpu#56.load
     73932 ± 14%     -48.3%      38186 ± 43%  sched_debug.cpu#7.ttwu_count
         2 ±  0%    +175.0%          5 ± 47%  sched_debug.cpu#11.cpu_load[2]
        23 ± 43%    +206.5%         70 ± 39%  sched_debug.cfs_rq[55]:/.blocked_load_avg
      4389 ±  2%     -66.0%       1494 ±  0%  time.involuntary_context_switches
        73 ± 44%     -53.7%         34 ± 29%  sched_debug.cfs_rq[33]:/.tg_load_contrib
        14 ± 29%    +125.9%         32 ± 37%  sched_debug.cpu#45.load
 1.324e+08 ± 29%     -63.7%   48101669 ± 16%  cpuidle.C1-SNB.time
  34290260 ±  6%    +165.5%   91052161 ± 14%  cpuidle.C3-SNB.time
        12 ± 25%     +78.0%         22 ± 14%  sched_debug.cpu#0.cpu_load[4]
         2 ± 19%     -55.6%          1 ±  0%  sched_debug.cfs_rq[54]:/.nr_spread_over
        12 ±  0%    +145.8%         29 ± 46%  sched_debug.cfs_rq[45]:/.load
      5215 ± 18%     -55.2%       2334 ± 22%  numa-vmstat.node2.nr_active_anon
     20854 ± 18%     -55.3%       9329 ± 22%  numa-meminfo.node2.Active(anon)
       316 ± 17%     +68.0%        531 ± 25%  sched_debug.cpu#62.ttwu_local
       176 ± 10%     +54.4%        272 ± 21%  sched_debug.cpu#39.ttwu_local
    157060 ± 19%     -48.4%      81039 ± 39%  sched_debug.cpu#7.sched_count
    171170 ± 34%     +62.8%     278733 ± 11%  cpuidle.C1E-SNB.usage
      0.00 ± 10%     +41.8%       0.00 ± 19%  sched_debug.rt_rq[36]:/.rt_time
    243909 ± 31%     +72.6%     421059 ±  5%  sched_debug.cfs_rq[51]:/.spread0
        12 ± 25%     +27.1%         15 ± 21%  sched_debug.cpu#0.cpu_load[1]
    143112 ± 14%     -46.3%      76834 ± 44%  sched_debug.cpu#7.nr_switches
     71413 ± 14%     -46.3%      38314 ± 44%  sched_debug.cpu#7.sched_goidle
        13 ± 12%     +41.5%         18 ± 23%  sched_debug.cpu#46.cpu_load[0]
      1024 ± 27%     -27.2%        745 ± 26%  sched_debug.cpu#15.ttwu_local
      1061 ±  9%     -34.8%        692 ±  2%  sched_debug.cpu#30.curr->pid
       744 ±  8%     +43.5%       1068 ± 18%  sched_debug.cpu#20.curr->pid
      0.00 ± 24%     +76.1%       0.00 ± 14%  sched_debug.rt_rq[16]:/.rt_time
       308 ± 11%     +79.2%        552 ± 35%  sched_debug.cpu#57.ttwu_local
     28950 ± 29%     -37.0%      18242 ± 16%  sched_debug.cpu#23.sched_count
     14117 ± 17%     +55.5%      21946 ± 17%  sched_debug.cpu#13.sched_goidle
     13969 ± 16%     +59.1%      22223 ± 18%  sched_debug.cpu#13.ttwu_count
     28524 ± 16%     +54.6%      44106 ± 17%  sched_debug.cpu#13.nr_switches
      3587 ± 12%     -22.7%       2774 ± 18%  numa-vmstat.node2.nr_slab_reclaimable
     14352 ± 12%     -22.7%      11099 ± 18%  numa-meminfo.node2.SReclaimable
     29903 ±  7%     +29.5%      38737 ± 14%  numa-meminfo.node1.Active
  91841976 ± 13%     -27.9%   66180100 ± 13%  cpuidle.C1E-SNB.time
        76 ± 11%     +34.1%        102 ± 24%  sched_debug.cfs_rq[40]:/.tg_load_contrib
       745 ± 14%     +15.8%        863 ± 18%  sched_debug.cpu#31.curr->pid
     42244 ±  9%     -27.8%      30503 ±  8%  numa-meminfo.node2.Active
     28600 ±  2%     +25.5%      35889 ± 12%  numa-meminfo.node0.Active
       284 ± 17%     +30.5%        371 ±  1%  sched_debug.cpu#44.ttwu_local
    655478 ± 13%     -20.0%     524404 ±  3%  sched_debug.cfs_rq[0]:/.min_vruntime
     42280 ±  2%     -23.1%      32510 ± 14%  sched_debug.cpu#45.ttwu_count
       290 ±  7%     +25.9%        365 ± 10%  sched_debug.cpu#50.ttwu_local
     83350 ±  2%     -23.2%      64039 ± 15%  sched_debug.cpu#45.nr_switches
     41131 ±  3%     -22.4%      31900 ± 15%  sched_debug.cpu#45.sched_goidle
     83731 ±  2%     -23.1%      64394 ± 15%  sched_debug.cpu#45.sched_count
       317 ± 17%     +25.5%        398 ± 11%  sched_debug.cpu#52.ttwu_local
       264 ±  6%     +53.6%        406 ± 36%  sched_debug.cpu#46.ttwu_local
     41799 ±  7%     -13.2%      36279 ± 13%  sched_debug.cpu#51.nr_switches
     42064 ±  7%     -13.1%      36535 ± 13%  sched_debug.cpu#51.sched_count
     12557 ± 27%     +56.5%      19654 ± 27%  sched_debug.cpu#57.sched_count
     10442 ±  6%     -12.3%       9152 ±  8%  sched_debug.cfs_rq[7]:/.exec_clock
     56292 ±  7%     -15.4%      47608 ± 13%  sched_debug.cpu#7.nr_load_updates
      1174 ± 18%     +45.2%       1704 ± 11%  sched_debug.cpu#11.curr->pid
       286 ± 13%     +32.5%        379 ±  9%  sched_debug.cpu#55.ttwu_local
    288745 ± 30%     +45.7%     420730 ±  5%  sched_debug.cfs_rq[53]:/.spread0
    287389 ± 30%     +46.5%     420927 ±  5%  sched_debug.cfs_rq[52]:/.spread0
      2584 ±  3%     +11.2%       2872 ±  6%  sched_debug.cpu#45.curr->pid
    289910 ± 31%     +45.7%     422398 ±  5%  sched_debug.cfs_rq[54]:/.spread0
    293040 ± 31%     +42.7%     418044 ±  4%  sched_debug.cfs_rq[49]:/.spread0
     35054 ±  5%      -9.1%      31878 ±  7%  sched_debug.cpu#30.nr_load_updates
     37803 ± 10%     +12.3%      42455 ±  5%  sched_debug.cpu#43.sched_goidle
     99686 ±  4%      -6.0%      93667 ±  5%  sched_debug.cpu#38.nr_load_updates
     39264 ±  6%     +12.8%      44305 ±  4%  sched_debug.cpu#43.ttwu_count
      3884 ± 16%     -14.7%       3311 ±  2%  sched_debug.cfs_rq[30]:/.avg->runnable_avg_sum

testbox/testcase/testparams: xps2/pigz/performance-100%-512K

9c0415eb8cbf0c8f  ba4877b9ca51f80b5d30f304a4  
----------------  --------------------------  
     26318 ±  1%      -4.7%      25068 ±  3%  pigz.time.maximum_resident_set_size
         1 ±  0%    -100.0%          0 ±  0%  sched_debug.cfs_rq[0]:/.nr_running
      1706 ±  7%     -59.5%        691 ± 15%  sched_debug.cpu#6.sched_goidle
      1.13 ± 38%     -51.1%       0.55 ± 40%  perf-profile.cpu-cycles.copy_process.part.26.do_fork.sys_clone.stub_clone
      1.18 ± 32%     -48.9%       0.60 ± 39%  perf-profile.cpu-cycles.sys_clone.stub_clone
        11 ±  4%     -56.5%          5 ± 42%  sched_debug.cfs_rq[3]:/.nr_spread_over
      1.18 ± 32%     -48.9%       0.60 ± 39%  perf-profile.cpu-cycles.stub_clone
      1.18 ± 32%     -48.9%       0.60 ± 39%  perf-profile.cpu-cycles.do_fork.sys_clone.stub_clone
      1.63 ± 27%     -50.3%       0.81 ± 24%  perf-profile.cpu-cycles.__do_softirq.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
      0.00 ± 19%     -52.3%       0.00 ± 49%  sched_debug.rt_rq[1]:/.rt_time
      1.88 ± 15%     -32.3%       1.27 ± 17%  perf-profile.cpu-cycles.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
      5059 ± 16%     -45.2%       2773 ± 39%  sched_debug.cpu#3.sched_goidle
       138 ±  2%      -8.2%        126 ±  4%  sched_debug.cpu#2.cpu_load[1]
       126 ±  6%     -12.5%        110 ±  2%  sched_debug.cpu#7.load
        14 ±  7%     -41.1%          8 ± 34%  sched_debug.cfs_rq[4]:/.nr_spread_over
       121 ±  2%     +15.0%        139 ±  3%  sched_debug.cfs_rq[1]:/.load
       122 ±  3%     +14.5%        139 ±  3%  sched_debug.cpu#1.load
       320 ± 42%    +113.6%        683 ± 10%  sched_debug.cfs_rq[1]:/.tg_load_contrib
       351 ±  1%     +23.3%        433 ±  4%  cpuidle.C3-NHM.usage
      1.39 ±  3%     -19.6%       1.12 ±  3%  perf-profile.cpu-cycles.ret_from_fork
      1.62 ±  3%     -28.1%       1.17 ± 25%  perf-profile.cpu-cycles.__do_page_fault.do_page_fault.page_fault
      1.62 ±  3%     -26.5%       1.19 ± 27%  perf-profile.cpu-cycles.do_page_fault.page_fault
      1.77 ±  6%     -20.3%       1.41 ± 17%  perf-profile.cpu-cycles.page_fault
      1.52 ±  2%     -31.6%       1.04 ± 24%  perf-profile.cpu-cycles.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
      1.34 ±  0%     -18.5%       1.09 ±  7%  perf-profile.cpu-cycles.kthread.ret_from_fork
       126 ±  6%     -12.5%        110 ±  2%  sched_debug.cfs_rq[7]:/.load
     15.23 ±  2%     -13.7%      13.15 ±  3%  perf-profile.cpu-cycles.copy_page_to_iter.pipe_read.new_sync_read.__vfs_read.vfs_read
       126 ±  3%     +19.2%        150 ±  2%  sched_debug.cfs_rq[3]:/.load
       126 ±  3%     +19.2%        150 ±  2%  sched_debug.cpu#3.load
     14.38 ±  2%     -12.4%      12.60 ±  5%  perf-profile.cpu-cycles.copy_user_generic_string.copy_page_to_iter.pipe_read.new_sync_read.__vfs_read

xps2: Nehalem
Memory: 4G

wsm: Westmere
Memory: 6G

lkp-sb03: Sandy Bridge-EP
Memory: 64G

lkp-sbx04: Sandy Bridge-EX
Memory: 64G



                         time.involuntary_context_switches

  1300 ++-------------------------------------------------------------------+
  1200 **      *.* .*          **. *.***. *. *   *. **.    *.     *.* *. *.*|
       | +  .* :  *  * .**. *.*   *      *  * *.*  *   **. : ***.*   *  *   *
  1100 ++ **  *       *    *                              *                 |
  1000 ++                                                                   |
       |                                                                    |
   900 ++                                                                   |
   800 ++                                                                   |
   700 ++                                                                   |
       |                                                                    |
   600 ++                                                                   |
   500 ++                            O O               O                    |
       OO OO OOO OO O O OO OO OOO OO  O  OO OOO OO O O                      |
   400 ++            O                              O                       |
   300 ++-------------------------------------------------------------------+


                                  cpuidle.C3-NHM.time

  9.5e+08 ++----------------------------------------------------------------+
          |                       O                                         |
    9e+08 ++                                          O                     |
          |          O       O  O           O O                             |
          O   OO OOO  OO  O O O  O  OOO  O   O  OOO  O                      |
  8.5e+08 +O O           O              O O         O   O                   |
          |                                                                 |
    8e+08 ++                                                                |
          |                                                                 |
  7.5e+08 ++                                                                |
          |             .* .* *. *  *  .**      **                .*    *   *
          |   **.*   ***  *  *  * :+ **   *.** +  :.* *.  *.**.* *  * .* *.*|
    7e+08 *+.*    * +             *           *   *  *  **      *    *      |
          |*       *                                                        |
  6.5e+08 ++----------------------------------------------------------------+


                                  cpuidle.C6-NHM.time

   1.6e+09 ++---------------------------------------------------------------+
           |     .* *                       *                        *     *|
  1.55e+09 **.* * : :+  *.*   * *  *       * :    .***. **.*  .**    :+   ::|
           |   ::  :  **   : : ::+ ::.**   : *.***     *    :*   :  *  *  : |
           |   *   *       * : *  * *   *.*                 *    *.*    **  |
   1.5e+09 ++               *                                               *
           |                                                                |
  1.45e+09 ++                                                               |
           |            O                           O                       |
   1.4e+09 OO O O  O      O           OOO OO     O O O  O                   |
           |   O  O O OO   OO OOO O O       O  O                            |
           |                       O         O  O                           |
  1.35e+09 ++                                          O                    |
           |                                                                |
   1.3e+09 ++---------------------------------------------------------------+


                                 cpuidle.C6-NHM.usage

  1.4e+06 ++----------------------------------------------------*-----------+
          **.                             *. *    *.     *      :           |
  1.2e+06 ++ **   **. **  *.      *. **. *  * *   : ***.* :.*  : :  **.**   |
          |    *.*   *  + : * *.**  *   *      + *        *  *.* *.*     :.**
    1e+06 ++             *   *                  *                        *  |
          |                                                                 |
   800000 ++                                                                |
          |                                                                 |
   600000 ++                                                                |
          |                                                                 |
   400000 ++                                                                |
          |                                                                 |
   200000 OO OOO OOO OOO OO OOO OOO OOO OOO OOO OOO OOO O                   |
          |                                                                 |
        0 ++----------------------------------------------------------------+


                  will-it-scale.time.involuntary_context_switches

  1300 ++-------------------------------------------------------------------+
  1200 **      *.* .*          **. *.***. *. *   *. **.    *.     *.* *. *.*|
       | +  .* :  *  * .**. *.*   *      *  * *.*  *   **. : ***.*   *  *   *
  1100 ++ **  *       *    *                              *                 |
  1000 ++                                                                   |
       |                                                                    |
   900 ++                                                                   |
   800 ++                                                                   |
   700 ++                                                                   |
       |                                                                    |
   600 ++                                                                   |
   500 ++                            O O               O                    |
       OO OO OOO OO O O OO OO OOO OO  O  OO OOO OO O O                      |
   400 ++            O                              O                       |
   300 ++-------------------------------------------------------------------+


	[*] bisect-good sample
	[O] bisect-bad  sample

To reproduce:

	apt-get install ruby
	git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
	cd lkp-tests
	bin/setup-local job.yaml # the job file attached in this email
	bin/run-local   job.yaml


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Ying Huang


View attachment "job.yaml" of type "text/plain" (1629 bytes)

View attachment "reproduce" of type "text/plain" (38 bytes)

_______________________________________________
LKP mailing list
LKP@...ux.intel.com
