Message-ID: <530C0996.3070405@linaro.org>
Date:	Tue, 25 Feb 2014 11:10:14 +0800
From:	Alex Shi <alex.shi@...aro.org>
To:	Fengguang Wu <fengguang.wu@...el.com>
CC:	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [sched/balance] 7511dd0a7: +2.1e+05% context switches

On 02/25/2014 09:33 AM, Fengguang Wu wrote:
> On Thu, Feb 20, 2014 at 03:21:29PM +0800, Alex Shi wrote:
>> On 02/19/2014 09:00 PM, Fengguang Wu wrote:
>>> bc575710efe937e  7511dd0a73aaf2ca4bcd829f9
>>> ---------------  -------------------------
>>>       2029 ~ 0%    +222.9%       6551 ~17%  lkp-snb01/micro/will-it-scale/pthread_mutex2
>>>     143678 ~42%  +4.8e+05%  6.927e+08 ~ 0%  lkp-snb01/micro/will-it-scale/sched_yield
>>>     145708 ~42%  +4.8e+05%  6.927e+08 ~ 0%  TOTAL time.involuntary_context_switches
>>
>>
>> Thanks for testing, Fengguang!
>>
>> Does the context switch increase happen with the whole patchset, or
>> just with this patch?
> 
> Only this patch. Some other patches actually reduce the context switches.
> This is all the changes for branch alexshi/topdown:

Thanks a lot for the thorough testing!

Would you be willing to share the full list of benchmarks run on each
test machine? Then we would know not only which benchmarks changed on a
machine, but also which ones were unaffected.

And it would be great if you could give a URL for the benchmarks you
are using. :)
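For anyone reproducing these numbers by hand: the
time.involuntary_context_switches metric quoted above is, most likely,
the ru_nivcsw field of the rusage that time(1) collects for the
benchmark process. A minimal Python sketch of the same measurement
(assuming Linux; the loop body mirrors what will-it-scale's
sched_yield test does in C):

```python
import os
import resource

# Snapshot the per-process context-switch counters before and after a
# sched_yield loop. ru_nvcsw counts switches the task asked for;
# ru_nivcsw counts switches forced on it by the scheduler -- the latter
# is what shows up as time.involuntary_context_switches.
before = resource.getrusage(resource.RUSAGE_SELF)

for _ in range(100_000):
    os.sched_yield()  # same syscall the will-it-scale sched_yield test loops on

after = resource.getrusage(resource.RUSAGE_SELF)

voluntary = after.ru_nvcsw - before.ru_nvcsw
involuntary = after.ru_nivcsw - before.ru_nivcsw
print(voluntary, involuntary)
```

The absolute counts depend on how loaded the machine is, but the split
between the two counters is what the scheduler patch would shift.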
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>     111372 ~ 1%     +60.1%     178267 ~ 1%  lkp-ws02/micro/hackbench/1600%-threads-pipe
>     111372 ~ 1%     +60.1%     178267 ~ 1%  TOTAL hackbench.throughput
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>   11862017 ~10%    +769.1%  1.031e+08 ~ 9%  lkp-ws02/micro/hackbench/1600%-threads-pipe
>   11862017 ~10%    +769.1%  1.031e+08 ~ 9%  TOTAL numa-numastat.node0.numa_hit
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>   11862016 ~10%    +769.1%  1.031e+08 ~ 9%  lkp-ws02/micro/hackbench/1600%-threads-pipe
>   11862016 ~10%    +769.1%  1.031e+08 ~ 9%  TOTAL numa-numastat.node0.local_node
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>    4629280 ~ 9%    +714.0%   37683303 ~ 9%  lkp-ws02/micro/hackbench/1600%-threads-pipe
>    4629280 ~ 9%    +714.0%   37683303 ~ 9%  TOTAL proc-vmstat.pgalloc_dma32
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>    6476588 ~11%    +751.1%   55123483 ~ 7%  lkp-ws02/micro/hackbench/1600%-threads-pipe
>    6476588 ~11%    +751.1%   55123483 ~ 7%  TOTAL numa-vmstat.node0.numa_hit
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>    6380486 ~11%    +761.9%   54995307 ~ 7%  lkp-ws02/micro/hackbench/1600%-threads-pipe
>    6380486 ~11%    +761.9%   54995307 ~ 7%  TOTAL numa-vmstat.node0.numa_local
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>      24214 ~ 1%     -71.8%       6824 ~ 7%  lkp-a03/micro/will-it-scale/sched_yield
>      55173 ~ 0%     -76.9%      12720 ~ 0%  lkp-snb01/micro/will-it-scale/pthread_mutex2
>      55292 ~ 0%     -78.9%      11653 ~ 3%  lkp-snb01/micro/will-it-scale/sched_yield
>     134680 ~ 0%     -76.8%      31198 ~ 3%  TOTAL interrupts.RES
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>      25891 ~ 1%     +99.5%      51664 ~ 0%  lkp-a03/micro/will-it-scale/sched_yield
>      25891 ~ 1%     +99.5%      51664 ~ 0%  TOTAL softirqs.SCHED
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>    4282244 ~ 1%     -46.6%    2284599 ~ 8%  lkp-ws02/micro/hackbench/1600%-threads-pipe
>    4282244 ~ 1%     -46.6%    2284599 ~ 8%  TOTAL proc-vmstat.pgfault
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>     105276 ~ 0%     -43.3%      59659 ~ 0%  lkp-snb01/micro/will-it-scale/pthread_mutex2
>     105799 ~ 1%     -38.0%      65577 ~ 1%  lkp-snb01/micro/will-it-scale/sched_yield
>     211075 ~ 0%     -40.7%     125236 ~ 0%  TOTAL cpuidle.C7-SNB.usage
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>  4.174e+08 ~ 0%     +31.2%  5.478e+08 ~ 0%  lkp-a03/micro/will-it-scale/sched_yield
>  4.174e+08 ~ 0%     +31.2%  5.478e+08 ~ 0%  TOTAL cpuidle.C4.time
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>      20.64 ~ 0%     +23.9%      25.57 ~ 0%  lkp-snb01/micro/will-it-scale/sched_yield
>      20.64 ~ 0%     +23.9%      25.57 ~ 0%  TOTAL turbostat.%c1
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>     214235 ~ 0%     -13.4%     185581 ~ 0%  lkp-a03/micro/will-it-scale/sched_yield
>     214235 ~ 0%     -13.4%     185581 ~ 0%  TOTAL interrupts.LOC
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>     133373 ~ 0%     -10.8%     119020 ~ 0%  lkp-a03/micro/will-it-scale/sched_yield
>     133373 ~ 0%     -10.8%     119020 ~ 0%  TOTAL softirqs.TIMER
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>       1109 ~ 8%   +9292.8%     104250 ~ 1%  lkp-a03/micro/will-it-scale/sched_yield
>       4060 ~38%  +1.2e+05%    4868923 ~ 2%  lkp-snb01/micro/will-it-scale/sched_yield
>       5169 ~32%  +96094.8%    4973173 ~ 2%  TOTAL vmstat.system.cs
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>       2045 ~ 1%    +221.7%       6578 ~ 2%  lkp-snb01/micro/will-it-scale/pthread_mutex2
>       2045 ~ 1%    +221.7%       6578 ~ 2%  TOTAL time.involuntary_context_switches
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>        800 ~ 0%     -18.8%        650 ~ 0%  lkp-a03/micro/will-it-scale/sched_yield
>        800 ~ 0%     -18.8%        650 ~ 0%  TOTAL vmstat.system.in
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>      50.56 ~ 0%      -1.8%      49.64 ~ 0%  lkp-snb01/micro/will-it-scale/pthread_mutex2
>      50.55 ~ 0%      -9.4%      45.80 ~ 0%  lkp-snb01/micro/will-it-scale/sched_yield
>     101.11 ~ 0%      -5.6%      95.44 ~ 0%  TOTAL turbostat.%c0
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>       1518 ~ 0%      -4.9%       1443 ~ 0%  lkp-snb01/micro/will-it-scale/pthread_mutex2
>       1518 ~ 0%      -4.9%       1443 ~ 0%  TOTAL time.user_time
> 
>       v3.14-rc2  d2f8e017fdd913fc150160508  
> ---------------  -------------------------  
>        488 ~ 0%      -4.8%        465 ~ 0%  lkp-snb01/micro/will-it-scale/pthread_mutex2
>        488 ~ 0%      -4.8%        465 ~ 0%  TOTAL time.percent_of_cpu_this_job_got
> 
> Thanks,
> Fengguang
> 

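Unlike the per-process time.* counters, the vmstat.system.cs rows above
are system-wide: vmstat reports the per-interval delta of the kernel's
global "ctxt" counter from /proc/stat. A rough sketch of the same
measurement (assuming Linux):

```python
import time

def read_ctxt():
    """Return the cumulative system-wide context-switch count
    from the 'ctxt' line of /proc/stat."""
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt "):
                return int(line.split()[1])
    raise RuntimeError("ctxt line not found in /proc/stat")

# vmstat's "cs" column is essentially this delta over its sample interval.
a = read_ctxt()
time.sleep(1)
b = read_ctxt()
print(f"context switches/sec: {b - a}")
```

A 96000%+ jump in this counter, as seen on lkp-snb01, means the whole
machine is churning through switches, not just the benchmark task.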

-- 
Thanks
    Alex
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
