Date:	Wed, 03 Apr 2013 15:18:51 +0800
From:	Michael Wang <wangyun@...ux.vnet.ibm.com>
To:	Alex Shi <alex.shi@...el.com>
CC:	mingo@...hat.com, peterz@...radead.org, tglx@...utronix.de,
	akpm@...ux-foundation.org, arjan@...ux.intel.com, bp@...en8.de,
	pjt@...gle.com, namhyung@...nel.org, efault@....de,
	morten.rasmussen@....com, vincent.guittot@...aro.org,
	gregkh@...uxfoundation.org, preeti@...ux.vnet.ibm.com,
	viresh.kumar@...aro.org, linux-kernel@...r.kernel.org,
	len.brown@...el.com, rafael.j.wysocki@...el.com, jkosina@...e.cz,
	clark.williams@...il.com, tony.luck@...el.com,
	keescook@...omium.org, mgorman@...e.de, riel@...hat.com
Subject: Re: [patch v3 0/8] sched: use runnable avg in load balance

On 04/03/2013 02:53 PM, Alex Shi wrote:
> On 04/03/2013 02:22 PM, Michael Wang wrote:
>>>>
>>>> If many tasks sleep for a long time, their runnable load is zero. And
>>>> if they are woken up in a burst, the overly light runnable load causes
>>>> a big imbalance among CPUs. So such benchmarks, like aim9, drop 5~7%.
>>>>
>>>> With this patch the loss is recovered, and results are even slightly better.
>> A quick test shows the improvement disappearing and the regression
>> coming back... after applying this one as the 8th patch, it doesn't work.
> 
> It is always good for one benchmark and bad for another. :)

That's right :)

> 
> the following patch includes the renamed knob, and you can tune it under 
> /proc/sys/kernel/... to see the degree of impact in detail.
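
(For reference: given the sysctl entry below, the knob should appear as
/proc/sys/kernel/sched_burst_threshold_ns and take a value in nanoseconds,
e.g. `echo 1000000 > /proc/sys/kernel/sched_burst_threshold_ns`; the value
here is only illustrative.)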

Could I conclude that the improvement on pgbench was caused by the new
weighted_cpuload()?

The burst branch seems to just adopt the load from the old world; if
reducing the rate at which we enter that branch regains the benefit,
then I can confirm my supposition.

> 
> +	if (cpu_rq(this_cpu)->avg_idle < sysctl_sched_migration_cost ||
> +		cpu_rq(prev_cpu)->avg_idle < sysctl_sched_migration_cost)

It should be 'sysctl_sched_burst_threshold' here, shouldn't it? Anyway, I
will try with different rates.

Regards,
Michael Wang


> +		burst = 1;
> +
> +	/* use instant load for bursty waking up */
> +	if (!burst) {
> +		load = source_load(prev_cpu, idx);
> +		this_load = target_load(this_cpu, idx);
> +	} else {
> +		load = cpu_rq(prev_cpu)->load.weight;
> +		this_load = cpu_rq(this_cpu)->load.weight;
> +	}
> 
>  	/*
>  	 * If sync wakeup then subtract the (maximum possible)
> diff --git a/kernel/sysctl.c b/kernel/sysctl.c
> index afc1dc6..1f23457 100644
> --- a/kernel/sysctl.c
> +++ b/kernel/sysctl.c
> @@ -327,6 +327,13 @@ static struct ctl_table kern_table[] = {
>  		.proc_handler	= proc_dointvec,
>  	},
>  	{
> +		.procname	= "sched_burst_threshold_ns",
> +		.data		= &sysctl_sched_burst_threshold,
> +		.maxlen		= sizeof(unsigned int),
> +		.mode		= 0644,
> +		.proc_handler	= proc_dointvec,
> +	},
> +	{
>  		.procname	= "sched_nr_migrate",
>  		.data		= &sysctl_sched_nr_migrate,
>  		.maxlen		= sizeof(unsigned int),
> 
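
For reference, a minimal userspace sketch of the idea under discussion;
the decay constant, values, and helper names here are illustrative
assumptions, not taken from the patch:

/*
 * Simulates why a task that slept for a long time has a near-zero
 * runnable average at wakeup, and how a burst threshold could fall
 * back to the instant load (rq->load.weight in the patch above).
 * Build with: gcc -o burst burst.c -lm
 */
#include <stdio.h>
#include <math.h>

/* PELT-style geometric decay, assuming the contribution roughly
 * halves every 32ms (illustrative constant). */
static double decay(double load, int ms)
{
	return load * pow(0.5, ms / 32.0);
}

int main(void)
{
	double instant_load = 1024.0;		/* task weight while runnable */
	double runnable_avg = instant_load;	/* fully built-up average */
	long long avg_idle_ns = 200000;		/* CPU was busy recently (example) */
	long long burst_threshold_ns = 1000000;	/* sysctl knob (example value) */

	/* The task sleeps 500ms: its runnable average decays toward zero,
	 * while the instant load it will contribute at wakeup does not. */
	runnable_avg = decay(runnable_avg, 500);
	printf("runnable avg after 500ms sleep: %.2f (instant: %.0f)\n",
	       runnable_avg, instant_load);

	/* Burst heuristic from the quoted patch: if the CPU has had
	 * little recent idle time, trust the instant load instead. */
	double load = (avg_idle_ns < burst_threshold_ns)
			? instant_load	/* bursty: rq->load.weight */
			: runnable_avg;	/* steady: runnable average */
	printf("load used for wake balance: %.2f\n", load);
	return 0;
}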

