Date:	Mon, 9 Sep 2013 13:44:53 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Jason Low <jason.low2@...com>
Cc:	mingo@...hat.com, linux-kernel@...r.kernel.org, efault@....de,
	pjt@...gle.com, preeti@...ux.vnet.ibm.com,
	akpm@...ux-foundation.org, mgorman@...e.de, riel@...hat.com,
	aswin@...com, scott.norton@...com, srikar@...ux.vnet.ibm.com
Subject: Re: [RFC][PATCH v4 3/3] sched: Periodically decay max cost of idle
 balance

On Tue, Sep 03, 2013 at 11:02:59PM -0700, Jason Low wrote:
> On Fri, 2013-08-30 at 12:29 +0200, Peter Zijlstra wrote:
> >  	rcu_read_lock();
> >  	for_each_domain(cpu, sd) {
> > +		/*
> > +		 * Decay the newidle max times here because this is a regular
> > +		 * visit to all the domains. Decay ~0.5% per second.
> > +		 */
> > +		if (time_after(jiffies, sd->next_decay_max_lb_cost)) {
> > +			sd->max_newidle_lb_cost =
> > +				(sd->max_newidle_lb_cost * 254) / 256;
> 
> I initially picked 0.5%, but after trying it out, it appears to decay very
> slowly when the max is at a high value. Should we increase the decay a
> little bit more? Maybe something like:
> 
> sd->max_newidle_lb_cost = (sd->max_newidle_lb_cost * 63) / 64;

So the half-life in either case is given by:

  n = ln(1/2) / ln(x)

which gives 88 seconds for x := 254/256 or 44 seconds for x := 63/64.
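
A quick userspace sanity check of that formula (plain C, not kernel
code; one decay step per second is assumed, per the "per second"
comment in the patch):

#include <math.h>
#include <stdio.h>

int main(void)
{
	double factors[] = { 254.0 / 256.0, 63.0 / 64.0 };
	int i;

	/* n = ln(1/2) / ln(x): decay steps until the cost halves */
	for (i = 0; i < 2; i++)
		printf("x = %.6f -> half-life ~%.0f steps\n",
		       factors[i], log(0.5) / log(factors[i]));
	return 0;
}

which prints ~88 and ~44.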

I don't really care too much, but obviously something like:

 256*exp(ln(.5)/60) ~= 253

is attractive ;-)
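
Going the other way, i.e. deriving the numerator for a target
half-life (same caveats as above; keeping the denominator at 256
presumably keeps the kernel-side divide a cheap shift):

#include <math.h>
#include <stdio.h>

int main(void)
{
	double halflife = 60.0;	/* target: cost halves every ~60 steps */

	/* m = 256 * exp(ln(0.5) / halflife), i.e. ~253.1 for 60 steps */
	printf("numerator ~ %.1f\n", 256.0 * exp(log(0.5) / halflife));
	return 0;
}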

> > +		/*
> > +		 * Stop the load balance at this level. There is another
> > +		 * CPU in our sched group which is doing load balancing more
> > +		 * actively.
> > +		 */
> > +		if (!continue_balancing) {
> 
> Is "continue_balancing" named "balance" in older kernels?

Yeah, this patch crossed paths with a series remodeling the
load-balancer a bit; that should all be pushed out to tip/master.

In particular see commit: 
  23f0d20 sched: Factor out code to should_we_balance()
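
To illustrate the renamed flag, a self-contained toy in plain C (not
the actual kernel code): within a sched group only the first idle CPU,
or the group's first CPU if none are idle, gets to balance; the rest
clear continue_balancing and stop walking up the domain tree.

#include <stdio.h>

#define NR_CPUS 4

static const int cpu_idle[NR_CPUS] = { 0, 1, 0, 1 };	/* toy idle states */

/*
 * Toy stand-in for should_we_balance(): the first idle CPU in the
 * group wins, falling back to the group's first CPU.
 */
static int should_we_balance(int this_cpu)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (cpu_idle[cpu])
			return cpu == this_cpu;
	return this_cpu == 0;
}

int main(void)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		int continue_balancing = should_we_balance(cpu);

		printf("cpu%d: %s\n", cpu, continue_balancing ?
		       "balances this domain" :
		       "bails out; another CPU in the group balances");
	}
	return 0;
}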

> Here are the AIM7 results with the other 2 patches + this patch with the
> slightly higher decay value.

Just to clarify, 'this patch' is the one I sent?