Message-ID: <1376338799.2697.18.camel@j-VirtualBox>
Date:	Mon, 12 Aug 2013 13:19:59 -0700
From:	Jason Low <jason.low2@...com>
To:	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Cc:	Ingo Molnar <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Mike Galbraith <efault@....de>,
	Thomas Gleixner <tglx@...utronix.de>,
	Paul Turner <pjt@...gle.com>, Alex Shi <alex.shi@...el.com>,
	Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Namhyung Kim <namhyung@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Kees Cook <keescook@...omium.org>,
	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
	aswin@...com, scott.norton@...com, chegu_vinod@...com,
	"Bui, Tuan" <tuan.d.bui@...com>, Waiman Long <Waiman.Long@...com>,
	"Makphaibulchoke, Thavatchai" <thavatchai.makpahibulchoke@...com>,
	"Bueso, Davidlohr" <davidlohr.bueso@...com>
Subject: Re: [PATCH] sched: Give idle_balance() a break when it does not
 move tasks.

On Mon, 2013-08-12 at 16:30 +0530, Srikar Dronamraju wrote:
> >  	/*
> > @@ -5298,6 +5300,8 @@ void idle_balance(int this_cpu, struct rq *this_rq)
> >  			continue;
> > 
> >  		if (sd->flags & SD_BALANCE_NEWIDLE) {
> > +			load_balance_attempted = true;
> > +
> >  			/* If we've pulled tasks over stop searching: */
> >  			pulled_task = load_balance(this_cpu, this_rq,
> >  						   sd, CPU_NEWLY_IDLE, &balance);
> > @@ -5322,6 +5326,10 @@ void idle_balance(int this_cpu, struct rq *this_rq)
> >  		 */
> >  		this_rq->next_balance = next_balance;
> >  	}
> > +
> > +	/* Give idle balance on this CPU a break when it isn't moving tasks */
> > +	if (load_balance_attempted && !pulled_task)
> > +		this_rq->next_newidle_balance = jiffies + (HZ / 100);
> >  }
> 
> Looks reasonable. However should we do this per sd and not per rq. i.e
> move the next_newidle_balance to sched_domain. So if we find a
> load_balance in newly_idle context that wasn't successful, we skip
> load_balance for that sd in the next newly idle balance.

I wonder: if we skip newidle balance for a whole domain after a newidle
balance attempt on one CPU did not move tasks, would that potentially
cause some "unfairness" for the other CPUs within the domain?

Perhaps we could reduce the duration that idle balance is blocked from
10 ms to something much smaller if we were to block on a per-domain basis.

Peter, any thoughts on which method is preferable?

Thanks for the suggestion,
Jason


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/