Message-ID: <20130722070144.GC5138@linux.vnet.ibm.com>
Date:	Mon, 22 Jul 2013 12:31:44 +0530
From:	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
To:	Jason Low <jason.low2@...com>
Cc:	Ingo Molnar <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Mike Galbraith <efault@....de>,
	Thomas Gleixner <tglx@...utronix.de>,
	Paul Turner <pjt@...gle.com>, Alex Shi <alex.shi@...el.com>,
	Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Namhyung Kim <namhyung@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Kees Cook <keescook@...omium.org>,
	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
	aswin@...com, scott.norton@...com, chegu_vinod@...com
Subject: Re: [RFC PATCH v2] sched: Limit idle_balance()

> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index e8b3350..da2cb3e 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1348,6 +1348,8 @@ ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
>  		else
>  			update_avg(&rq->avg_idle, delta);
>  		rq->idle_stamp = 0;
> +
> +		rq->idle_duration = (rq->idle_duration + delta) / 2;

Can't we just use avg_idle instead of introducing idle_duration?
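
avg_idle is already maintained as a running average of the idle
periods: update_avg() in kernel/sched/core.c does roughly
*avg += (sample - *avg) / 8, much like the (idle_duration + delta) / 2
above. A minimal sketch, assuming the rest of the patch stays as-is:

	/*
	 * In idle_balance(): reuse the existing per-rq average instead
	 * of the new field; avg_idle is clamped and updated in
	 * ttwu_do_wakeup(), so no extra bookkeeping is needed here.
	 */
	u64 idle_duration = this_rq->avg_idle;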

>  	}
>  #endif
>  }
> @@ -7027,6 +7029,7 @@ void __init sched_init(void)
>  		rq->online = 0;
>  		rq->idle_stamp = 0;
>  		rq->avg_idle = 2*sysctl_sched_migration_cost;
> +		rq->idle_duration = 0;
> 
>  		INIT_LIST_HEAD(&rq->cfs_tasks);
> 
> diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
> index 75024a6..a3f062c 100644
> --- a/kernel/sched/debug.c
> +++ b/kernel/sched/debug.c
> @@ -307,6 +307,7 @@ do {									\
>  	P(sched_goidle);
>  #ifdef CONFIG_SMP
>  	P64(avg_idle);
> +	P64(idle_duration);
>  #endif
> 
>  	P(ttwu_count);
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c61a614..da7ddd6 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5240,6 +5240,8 @@ void idle_balance(int this_cpu, struct rq *this_rq)
>  	struct sched_domain *sd;
>  	int pulled_task = 0;
>  	unsigned long next_balance = jiffies + HZ;
> +	u64 cost = 0;
> +	u64 idle_duration = this_rq->idle_duration;
> 
>  	this_rq->idle_stamp = this_rq->clock;
> 
> @@ -5256,14 +5258,31 @@ void idle_balance(int this_cpu, struct rq *this_rq)
>  	for_each_domain(this_cpu, sd) {
>  		unsigned long interval;
>  		int balance = 1;
> +		u64 this_domain_balance_cost = 0;
> +		u64 start_time;
> 
>  		if (!(sd->flags & SD_LOAD_BALANCE))
>  			continue;
> 
> +		/*
> +		 * If the time this_cpu remains idle is not much higher than the
> +		 * cost of attempting idle balancing within this domain, stop searching.
> +		 */
> +		if (idle_duration / 10 < (sd->avg_idle_balance_cost + cost))
> +			break;
> +
>  		if (sd->flags & SD_BALANCE_NEWIDLE) {
> +			start_time = sched_clock_cpu(smp_processor_id());
> +
>  			/* If we've pulled tasks over stop searching: */
>  			pulled_task = load_balance(this_cpu, this_rq,
>  						   sd, CPU_NEWLY_IDLE, &balance);
> +
> +			this_domain_balance_cost = sched_clock_cpu(smp_processor_id()) - start_time;

Should we take into consideration whether an idle_balance was
successful or not?

How about having a per-sched_domain counter?
After every nth consecutive unsuccessful load balance, skip the
(n+1)th idle balance and reset the counter. Also reset the counter
on every successful idle load balance.

I am not sure what a reasonable value for n would be, but maybe we
could try with n=3.
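
A minimal sketch of the idea, where newidle_fail_count is a made-up
per-sched_domain field and n is hardcoded to 3 just for illustration:

	if (sd->flags & SD_BALANCE_NEWIDLE) {
		/* After n straight failures, skip one attempt and reset */
		if (sd->newidle_fail_count >= 3) {
			sd->newidle_fail_count = 0;
			continue;
		}

		pulled_task = load_balance(this_cpu, this_rq,
					   sd, CPU_NEWLY_IDLE, &balance);

		if (pulled_task)
			sd->newidle_fail_count = 0;	/* success resets */
		else
			sd->newidle_fail_count++;	/* remember the failure */
	}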

Also, have we checked the performance after adjusting the
sched_migration_cost tunable?

I guess if we increase sched_migration_cost, we should see fewer
newly-idle balance requests.
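
For reference, newly-idle balancing is already gated on that tunable
near the top of idle_balance(); roughly, from kernel/sched/fair.c of
this era:

	/*
	 * Bail out early when the average idle period is shorter than
	 * the migration cost: raising sysctl_sched_migration_cost makes
	 * this check trip more often, suppressing more newly-idle
	 * balance attempts.
	 */
	if (this_rq->avg_idle < sysctl_sched_migration_cost)
		return;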

-- 
Thanks and Regards
Srikar Dronamraju
