Message-ID: <50E92DC3.4050906@intel.com>
Date:	Sun, 06 Jan 2013 15:54:43 +0800
From:	Alex Shi <alex.shi@...el.com>
To:	Alex Shi <alex.shi@...el.com>, pjt@...gle.com, mingo@...hat.com
CC:	peterz@...radead.org, tglx@...utronix.de,
	akpm@...ux-foundation.org, arjan@...ux.intel.com, bp@...en8.de,
	namhyung@...nel.org, efault@....de, vincent.guittot@...aro.org,
	gregkh@...uxfoundation.org, preeti@...ux.vnet.ibm.com,
	linux-kernel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v3 09/22] sched: compute runnable load avg in cpu_load
 and cpu_avg_load_per_task


>>  static unsigned long weighted_cpuload(const int cpu)
>>  {
>> -	return cpu_rq(cpu)->load.weight;
>> +	return (unsigned long)cpu_rq(cpu)->cfs.runnable_load_avg;
> 
> The above line change causes the aim9 multitask benchmark to drop about
> 10% in performance on many x86 machines. Profiling just shows that
> cpuidle enter is called more often.
> The testing command:
> 
> #( echo $hostname ; echo test ; echo 1 ; echo 2000 ; echo 2 ; echo 2000
> ; echo 100 ) | ./multitask -nl
> 
> The oprofile output here:
> with this patch set
> 101978 total                                      0.0134
>  54406 cpuidle_wrap_enter                       499.1376
>   2098 __do_page_fault                            2.0349
>   1976 rwsem_wake                                29.0588
>   1824 finish_task_switch                        12.4932
>   1560 copy_user_generic_string                  24.3750
>   1346 clear_page_c                              84.1250
>   1249 unmap_single_vma                           0.6885
>   1141 copy_page_rep                             71.3125
>   1093 anon_vma_interval_tree_insert              8.1567
> 
> 3.8-rc2
>  68982 total                                      0.0090
>  22166 cpuidle_wrap_enter                       203.3578
>   2188 rwsem_wake                                32.1765
>   2136 __do_page_fault                            2.0718
>   1920 finish_task_switch                        13.1507
>   1724 poll_idle                                 15.2566
>   1433 copy_user_generic_string                  22.3906
>   1237 clear_page_c                              77.3125
>   1222 unmap_single_vma                           0.6736
>   1053 anon_vma_interval_tree_insert              7.8582
> 
> Without load avg, in periodic balancing each cpu is weighted with the
> load of all its tasks.
> 
> With the new load tracking, we only update the cfs_rq load avg for each
> task at enqueue/dequeue time, and only update the current task in
> scheduler_tick. I am wondering whether that sampling is a bit too sparse.
> 
> What's your opinion of this, Paul?
> 
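
To make the quoted point concrete before getting into the aim9 details,
here is a rough, stand-alone sketch of why the averaged reading can look
much smaller than the old instantaneous one right after a burst of
wakeups. This is not kernel code, and the per-task contribution value is
made up purely to show the scale difference:

#include <stdio.h>

#define NICE_0_LOAD 1024UL	/* weight of a nice-0 task */

int main(void)
{
	/* say 100 tasks just woke up on one cpu after a long sleep */
	unsigned long nr_running = 100;
	/* assumed tiny per-task load avg right after such a wakeup */
	unsigned long contrib_per_task = 20;

	/* old reading: rq->load.weight jumps to full weight at enqueue */
	unsigned long old_load = nr_running * NICE_0_LOAD;
	/* new reading: cfs.runnable_load_avg only sums the decayed avgs */
	unsigned long new_load = nr_running * contrib_per_task;

	printf("load.weight view:       ~%lu\n", old_load);
	printf("runnable_load_avg view: ~%lu\n", new_load);
	return 0;
}

Right after the burst the balancer therefore sees almost no load anywhere
and leaves idle cpus asleep, which fits the extra cpuidle entries in the
profiles above.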

Ingo & Paul:

I just looked into the aim9 benchmark. In this case it forks 2000 tasks;
once all tasks are ready, aim9 gives a signal and all tasks wake up in a
burst and run until they have all finished.
Since each of the tasks finishes very quickly, an unbalanced, empty cpu
may go back to sleep until regular balancing gives it some new tasks.
That causes the performance drop and the extra idle entries.
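
For reference, the shape of that workload is roughly the following. This
is only a crude stand-in for the burst-wake pattern, not the actual aim9
multitask source; the child count matches the test above but the amount
of per-task work is arbitrary.

#include <sys/wait.h>
#include <unistd.h>

#define NTASKS 2000

int main(void)
{
	int go[2];
	char c;

	if (pipe(go) < 0)
		return 1;

	for (int i = 0; i < NTASKS; i++) {
		if (fork() == 0) {
			close(go[1]);
			/* block until the parent closes the pipe */
			read(go[0], &c, 1);
			/* short burst of work, then exit */
			for (volatile long n = 0; n < 100000; n++)
				;
			_exit(0);
		}
	}

	close(go[0]);
	close(go[1]);		/* EOF wakes all children at once */

	while (wait(NULL) > 0)
		;
	return 0;
}

Each child runs for well under a tick, so by the time periodic balancing
looks at the queues most of them are already gone.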

By design, the load avg needs time to accumulate its load weight, so
it's hard to find a way to resolve this problem.
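
Just to put rough numbers on that, assuming the usual per-entity tracking
constants (1024us periods, a decay factor y with y^32 = 0.5, and a
saturated sum around LOAD_AVG_MAX): a task that wakes after sleeping long
enough for its sum to decay away carries only a small fraction of its
weight for quite a while. A quick stand-alone estimate (not kernel code):

#include <math.h>
#include <stdio.h>

int main(void)
{
	/* per-1024us decay factor, defined so that y^32 == 0.5 */
	const double y = pow(0.5, 1.0 / 32.0);
	/* saturated geometric sum, close to the kernel's LOAD_AVG_MAX (47742) */
	const double max_sum = 1024.0 / (1.0 - y);

	/* sum starts near 0 after a long sleep, then the task runs flat out */
	for (int ms = 1; ms <= 64; ms *= 2) {
		double sum = 0.0;
		for (int n = 0; n < ms; n++)
			sum = sum * y + 1024.0;
		printf("runnable for %2d ms -> ~%4.1f%% of full weight\n",
		       ms, 100.0 * sum / max_sum);
	}
	return 0;
}

That works out to roughly 2% after 1ms, 50% after 32ms and 75% after
64ms, so tasks that finish "very quickly" barely register as load at all.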

As for other scheduler-related benchmarks, like kbuild, specjbb2005,
hackbench, sysbench etc., I didn't find a clear improvement or regression
from load avg balancing.

Any comments on this problem?

-- 
Thanks Alex
