Message-ID: <528F4A78.1080505@linaro.org>
Date:	Fri, 22 Nov 2013 13:13:44 +0100
From:	Daniel Lezcano <daniel.lezcano@...aro.org>
To:	Alex Shi <alex.shi@...aro.org>, mingo@...hat.com,
	peterz@...radead.org, morten.rasmussen@....com,
	vincent.guittot@...aro.org, fweisbec@...il.com,
	linux@....linux.org.uk, tony.luck@...el.com, fenghua.yu@...el.com,
	tglx@...utronix.de, akpm@...ux-foundation.org,
	arjan@...ux.intel.com, pjt@...gle.com, fengguang.wu@...el.com
CC:	james.hogan@...tec.com, jason.low2@...com,
	gregkh@...uxfoundation.org, hanjun.guo@...aro.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/4] sched: remove cpu_load decay.

On 11/22/2013 07:37 AM, Alex Shi wrote:
> The cpu_load array decays over time according to the rq's past CPU load, while the newer sched_avg decays each task's load over time. So we now have two kinds of decay for CPU load. That is redundant, and it adds overhead in sched_tick() etc.
>
> This patchset tries to remove the cpu_load decay, and fixes a nohz_full bug along the way.
>
> There are 5 load_idx values used to index cpu_load[] in sched_domain. busy_idx and idle_idx are usually non-zero, but newidle_idx, wake_idx and forkexec_idx are all zero on every arch. The first patch uses that as a shortcut to remove the cpu_load decay; it is just a one-line change. :) (A sketch of the mechanism being removed follows the quoted message.)
>
> I have tested the patchset on my PandaBoard ES (2 ARM Cortex-A9 cores).
> hackbench thread/pipe performance increased by nearly 10% with this patchset! That did surprise me!
>
> 	latest kernel 527d1511310a89		+ this patchset   (times in seconds)
> hackbench -T -g 10 -f 40
> 	23.25"					21.7"
> 	23.16"					19.99"
> 	24.24"					21.53"
> hackbench -p -g 10 -f 40
> 	26.52"					22.48"
> 	23.89"					24.00"
> 	25.65"					23.06"
> hackbench -P -g 10 -f 40
> 	20.14"					19.37"
> 	19.96"					19.76"
> 	21.76"					21.54"
>
> The git tree for this patchset is at:
>   git@...hub.com:alexshi/power-scheduling.git no-load-idx
> Fengguang has included this tree in his kernel testing system, and I haven't received a regression report so far, so I suppose it is fine on x86 systems.
>
> But anyway, since this scheduler change affects all archs, and hackbench is the only benchmark I have found for this patchset so far, I'd like to see more testing and discussion of it.
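
For context, here is a minimal sketch of the decayed-cpu_load path the series removes, simplified from the mainline scheduler of that era (roughly v3.12, kernel/sched/proc.c and kernel/sched/fair.c). The exact code in the trees benchmarked here may differ, and helpers such as decay_load_missed() and the LB_BIAS feature check are omitted for brevity:

#define CPU_LOAD_IDX_MAX	5

/*
 * Per-tick decay: cpu_load[i] approximates the runqueue load averaged
 * over ~2^i ticks; index 0 is the instantaneous load.
 */
static void __update_cpu_load(struct rq *this_rq, unsigned long this_load)
{
	int i, scale;

	this_rq->cpu_load[0] = this_load;
	for (i = 1, scale = 2; i < CPU_LOAD_IDX_MAX; i++, scale += scale) {
		unsigned long old_load = this_rq->cpu_load[i];
		unsigned long new_load = this_load;

		/* round up, so rising load is tracked faster than falling */
		if (new_load > old_load)
			new_load += scale - 1;

		this_rq->cpu_load[i] = (old_load * (scale - 1) + new_load) >> i;
	}
}

/*
 * Load-balance consumer: 'type' is the sched_domain's *_idx.  When
 * type == 0 (as newidle_idx, wake_idx and forkexec_idx already are on
 * every arch), the decayed history is bypassed entirely; that is the
 * shortcut the first patch exploits.
 */
static unsigned long source_load(int cpu, int type)
{
	struct rq *rq = cpu_rq(cpu);
	unsigned long total = weighted_cpuload(cpu);

	if (type == 0)
		return total;

	return min(rq->cpu_load[type - 1], total);
}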

Hi Alex,

I tried your patchset on my Xeon server (2 x 4 cores) and got the
following results:

kernel a5d6e63323fe7799eb0e6  / + patchset   (times in seconds)

hackbench -T -s 4096 -l 1000 -g 10 -f 40
	  27.604     	     38.556
	  27.397	     38.694
	  26.695	     38.647
	  25.975	     38.528
	  29.586	     38.553
	  25.956	     38.331
	  27.895	     38.472
	  26.874	     38.608
	  26.836	     38.341
	  28.064	     38.626
hackbench -p -s 4096 -l 1000 -g 10 -f 40
	  34.502     	     35.489
	  34.551	     35.389
	  34.027	     35.664
	  34.343	     35.418
	  34.570	     35.423
	  34.386	     35.466
	  34.387	     35.486
	  33.869	     35.212
	  34.600	     35.465
	  34.155	     35.235
hackbench -P -s 4096 -l 1000 -g 10 -f 40
	  39.170     	     38.794
	  39.108	     38.662
	  39.056	     38.946
	  39.120	     38.668
	  38.896	     38.865
	  39.109	     38.803
	  39.020	     38.946
	  39.099	     38.844
	  38.820	     38.872
	  38.923	     39.337



-- 
  <http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs

