Message-ID: <51014E34.60309@intel.com>
Date:	Thu, 24 Jan 2013 23:07:32 +0800
From:	Alex Shi <alex.shi@...el.com>
To:	Borislav Petkov <bp@...en8.de>, torvalds@...ux-foundation.org,
	mingo@...hat.com, peterz@...radead.org, tglx@...utronix.de,
	akpm@...ux-foundation.org, arjan@...ux.intel.com, pjt@...gle.com,
	namhyung@...nel.org, efault@....de, vincent.guittot@...aro.org,
	gregkh@...uxfoundation.org, preeti@...ux.vnet.ibm.com,
	viresh.kumar@...aro.org, linux-kernel@...r.kernel.org
Subject: Re: [patch v4 0/18] sched: simplified fork, release load avg and
 power awareness scheduling

On 01/24/2013 05:44 PM, Borislav Petkov wrote:
> On Thu, Jan 24, 2013 at 11:06:42AM +0800, Alex Shi wrote:
>> Since the runnable info needs 345ms to accumulate, balancing
>> doesn't do well for bursts of many waking tasks. After talking with Mike
>> Galbraith, we agreed to just use the runnable avg in power-friendly
>> scheduling and keep the current instant load in performance scheduling
>> for low latency.
>>
>> So the biggest change in this version is removing runnable load avg in
>> balance and just using runnable data in power balance.
>>
>> The patchset is based on Linus' tree and includes 3 parts.
>> ** 1, bug fix and fork/wake balancing clean up. patch 1~5,
>> ----------------------
>> The first patch removes one domain level. Patches 2~5 simplify fork/wake
>> balancing, which can increase hackbench performance by 10+% on our
>> 4-socket SNB EP machine.
> 
> Ok, I see some benchmarking results here and there in the commit
> messages but since this is touching the scheduler, you probably would
> need to make sure it doesn't introduce performance regressions vs
> mainline with a comprehensive set of benchmarks.
> 

Thanks a lot for your comments, Borislav! :)
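(Background for the 345ms figure quoted above: the per-entity load
tracking code accumulates runnable time in 1024us periods and decays
older history by y per period, where y^32 = 0.5 (LOAD_AVG_PERIOD = 32
in kernel/sched/fair.c). A minimal userspace sketch of the ramp-up,
assuming those constants -- illustration only, not kernel code:

#include <math.h>
#include <stdio.h>

int main(void)
{
	double y = pow(0.5, 1.0 / 32.0);   /* decay factor: y^32 == 0.5 */
	double max = 1024.0 / (1.0 - y);   /* geometric-series limit    */
	double sum = 0.0;
	int n;

	/* A task that stays runnable for 345 consecutive 1024us periods. */
	for (n = 0; n < 345; n++)
		sum = sum * y + 1024.0;

	/* Prints ~99.9%: the average only fully ramps up after ~345ms. */
	printf("after %d periods (~%dms): %.2f%% of max\n", n, n,
	       100.0 * sum / max);
	return 0;
}

So a burst of freshly woken tasks looks "light" to any balancer that
relies on the runnable avg until roughly that much time has passed.)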

For this patchset, the code just checks the current policy; if it is
performance, the code path falls back to the original performance code at
once. So there should be no performance change under the performance policy.
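Roughly, the check looks like the sketch below (hypothetical names, as an
illustration only -- the real flag and balance entry points in the
patchset may differ):

enum sched_balance_policy { SCHED_POLICY_PERFORMANCE, SCHED_POLICY_POWERSAVING };

static enum sched_balance_policy balance_policy = SCHED_POLICY_PERFORMANCE;

static int classic_load_balance(void) { return 0; }  /* original instant-load path  */
static int power_aware_balance(void)  { return 0; }  /* new runnable-avg based path */

static int do_load_balance(void)
{
	/* Under the performance policy, bail out to the unmodified
	 * balance path immediately; no runnable-avg logic is reached. */
	if (balance_policy == SCHED_POLICY_PERFORMANCE)
		return classic_load_balance();

	return power_aware_balance();
}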

I previously tested the balance policy performance with the
kbuild/hackbench/aim9/dbench/tbench benchmarks on version 2; only
hackbench dropped a bit, ~3%. The others showed no clear change.

> And, AFAICR, mainline does by default the 'performance' scheme by
> spreading out tasks to idle cores, so have you tried comparing vanilla
> mainline to your patchset in the 'performance' setting so that you can
> make sure there are no problems there? And not only hackbench or a
> microbenchmark but aim9 (I saw that in a commit message somewhere) and
> whatever else multithreaded benchmark you can get your hands on.
> 
> Also, you might want to run it on other machines too, not only SNB :-)

Anyway, I will redo the performance testing on this version on all
machines, but I don't expect anything to change. :)

> And what about ARM, maybe someone there can run your patchset too?
> 
> So, it would be cool to see comprehensive results from all those runs
> and see what the numbers say.
> 
> Thanks.
> 


-- 
Thanks
    Alex
