Message-ID: <1374174399.1792.42.camel@j-VirtualBox>
Date:	Thu, 18 Jul 2013 12:06:39 -0700
From:	Jason Low <jason.low2@...com>
To:	Rik van Riel <riel@...hat.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Mike Galbraith <efault@....de>,
	Thomas Gleixner <tglx@...utronix.de>,
	Paul Turner <pjt@...gle.com>, Alex Shi <alex.shi@...el.com>,
	Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Namhyung Kim <namhyung@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Kees Cook <keescook@...omium.org>,
	Mel Gorman <mgorman@...e.de>, aswin@...com,
	scott.norton@...com, chegu_vinod@...com
Subject: Re: [RFC] sched: Limit idle_balance() when it is being used too
 frequently

On Thu, 2013-07-18 at 07:59 -0400, Rik van Riel wrote:
> On 07/18/2013 05:32 AM, Peter Zijlstra wrote:
> > On Wed, Jul 17, 2013 at 09:02:24PM -0700, Jason Low wrote:
> >
> >> I ran a few AIM7 workloads for the 8 socket HT enabled case and I needed
> >> to set N to more than 20 in order to get the big performance gains.
> >>
> >> One thing that I thought of was to have N be based on how often idle
> >> balance attempts do not pull task(s).
> >>
> >> For example, N can be calculated based on the number of idle balance
> >> attempts for the CPU since the last "successful" idle balance attempt.
> >> So if the previous 30 idle balance attempts resulted in no tasks moved,
> >> then N = 30 / 5 = 6. This way, idle balance gets less time to run as
> >> the number of unneeded idle balance attempts increases, and N will not
> >> be set too high in situations where idle balancing is "successful"
> >> more often.
> >> Any comments on this idea?
> >
> > It would be good to get a solid explanation for why we need such high N.
> > But yes that might work.
> 
> I have some idea, though no proof :)
> 
> I suspect a lot of the idle balancing time is spent waiting for
> and acquiring the runqueue locks of remote CPUs.
> 
> If we spend half our idle time causing contention to remote
> runqueue locks, we could be a big factor in keeping those other
> CPUs from getting work done.

I collected some perf samples while running the AIM7 fserver workload with
N=1 and with N=60.

N = 1
-----
 19.21%  reaim    [k] __read_lock_failed
 14.79%  reaim    [k] mspin_lock
 12.19%  reaim    [k] __write_lock_failed
  7.87%  reaim    [k] _raw_spin_lock
  2.03%  reaim    [k] start_this_handle
  1.98%  reaim    [k] update_sd_lb_stats
  1.92%  reaim    [k] mutex_spin_on_owner
  1.86%  reaim    [k] update_cfs_rq_blocked_load
  1.14%  swapper  [k] intel_idle
  1.10%  reaim    [.] add_long
  1.09%  reaim    [.] add_int
  1.08%  reaim    [k] load_balance

N = 60
------
  7.70%  reaim    [k] _raw_spin_lock
  7.25%  reaim    [k] mspin_lock
  6.30%  reaim    [.] add_long
  6.26%  reaim    [.] add_int
  4.05%  reaim    [.] strncat
  3.81%  reaim    [.] string_rtns_1
  3.66%  reaim    [.] div_long
  3.44%  reaim    [k] mutex_spin_on_owner
  2.91%  reaim    [.] add_short
  2.73%  swapper  [k] intel_idle
  2.65%  reaim    [k] __read_lock_failed

With idle_balance() running more often (N=1), we get more contention in
kernel functions such as update_sd_lb_stats(), load_balance(), and
_raw_spin_lock() on the rq lock. It also substantially increases the time
spent in the mutex's mspin_lock() and in __read_lock_failed() and
__write_lock_failed().

N needs to be large because avg_idle is still much higher than the average
time spent in each load_balance() call per sched domain. Despite that high
ratio of avg_idle to time spent in load_balance(), load_balance() still
adds a fair amount of time spent in the kernel, probably because of how
frequently it is called.
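
To make the adaptive N idea above a bit more concrete, here is a rough
userspace mock-up of the heuristic (the counter and helper names are made
up purely for illustration; this is only a sketch of the idea, not the
actual patch):

#include <stdio.h>

/*
 * Mock-up: N grows with the number of consecutive idle balance attempts
 * that pulled no tasks, so the avg_idle cutoff gets harder to pass the
 * longer balancing keeps coming up empty; a successful pull would reset
 * the counter.
 */
struct cpu_stats {
	unsigned int idle_balance_failed;	/* consecutive empty attempts */
	unsigned long long avg_idle;		/* ns, like rq->avg_idle */
};

static unsigned int adaptive_n(const struct cpu_stats *cs)
{
	unsigned int n = cs->idle_balance_failed / 5;

	return n ? n : 1;	/* e.g. 30 failed attempts -> N = 6 */
}

/* Scaled version of the avg_idle vs. migration cost check. */
static int should_idle_balance(const struct cpu_stats *cs,
			       unsigned long long migration_cost)
{
	return cs->avg_idle >=
	       (unsigned long long)adaptive_n(cs) * migration_cost;
}

int main(void)
{
	/* 30 empty attempts in a row, 2 ms of expected idle time */
	struct cpu_stats cs = {
		.idle_balance_failed = 30,
		.avg_idle = 2000000ULL,
	};

	/* 500000 ns is the default sysctl_sched_migration_cost */
	printf("N = %u, do idle balance? %d\n",
	       adaptive_n(&cs), should_idle_balance(&cs, 500000ULL));
	return 0;
}

The point is just that a long run of attempts that pull nothing drives N
up and throttles idle balancing harder, while a successful pull lets it
run freely again.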

Jason


