Date:	Tue, 04 May 2010 15:14:24 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Dominik Brodowski <linux@...inikbrodowski.net>
Cc:	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...e.hu>,
	Arjan van de Ven <arjan@...ux.intel.com>,
	linux-kernel@...r.kernel.org, David Miller <davem@...emloft.net>,
	"suresh.b.siddha" <suresh.b.siddha@...el.com>
Subject: Re: [RFC PATCH v2] nohz/sched: disable ilb on !mc_capable()

On Mon, 2010-04-26 at 22:31 +0200, Dominik Brodowski wrote:
> From: Dominik Brodowski <linux@...inikbrodowski.net>
> Date: Thu, 8 Apr 2010 21:51:18 +0200
> Subject: [PATCH] nohz/sched: disable ilb on !mc_capable()
> 
> On my dual-core, !mc_capable() CPU, the idle load balancer (ilb) is one
> of the main reasons ticks are not stopped: under moderate load (~98% idle),
> up to half of the calls to tick_nohz_stop_sched_tick() are aborted due
> to calls to select_nohz_load_balancer(1).
> 
> I suspect this is caused by the following phenomenon:
> 
>     CPU0				CPU1
>     <active>				<active>
>     tick_nohz_stop_sched_tick(1)
>     select_nohz_load_balancer(1)
>      => CPU0 becomes ilb owner,		<CPU1 becomes idle a bit later>
>         tick is not stopped,		tick_nohz_stop_sched_tick(1)
>         CPU0 goes to sleep for		 => CPU1 isn't the ilb owner,
>         exactly 1 tick.			    tick is stopped.
>     <short sleep>			<long sleep>
>     ---> scheduler_tick()
>     tick_nohz_stop_sched_tick(0)
>     tick_nohz_stop_sched_tick(1)
>      => is ilb owner, all CPUs are
>         idle, CPU0 may go to sleep.
> 
> If all CPU cores have hardly anything to do, letting the active CPU do
> idle load balancing allows us to enter deep sleep states earlier, and for
> longer periods of time. Furthermore, on !mc_capable() systems, it seems that
> the ilb algorithm isn't needed at all. Let's show this for a 2-core system:
> 
> - if both cores are active, ilb is deactivated
> - if no core is active, ilb is deactivated
> - if only one core is active, it already attempts to balance its load off to
>   other CPUs on each tick; the ilb wouldn't act any quicker.
> 
> This patch decreases the number of wakeups on my completely idle notebook by
> about two thirds.
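
For reference, a rough sketch of the abort path described at the top of the
quoted description: in kernel/time/tick-sched.c, tick_nohz_stop_sched_tick()
asks select_nohz_load_balancer(1) whether this CPU should take over idle load
balancing, and if so it keeps the tick running. The excerpt below is abridged
and approximate (the names follow the kernels of that era, but the real
function does considerably more):

	if (!ts->tick_stopped) {
		if (select_nohz_load_balancer(1)) {
			/*
			 * This CPU just became the ilb owner: keep the
			 * tick running and take it back out of
			 * nohz_cpu_mask, so it wakes up every tick.
			 */
			cpumask_clear_cpu(cpu, nohz_cpu_mask);
			goto out;
		}
		/* otherwise the tick is stopped and long sleeps are possible */
	}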

Right, so I think the !mc_capable() check is buggy, at the very least on
sparc64, which is 'creative' with its sched_domain maps.

I'm also not sure what a single-socket AMD Magny-Cours will do.

On a single-socket Nehalem we will have a non-trivial sched_domain because
the SMT threads are included as well.

I think we can only do your optimization for machines that end up having
a single sched_domain that covers the entire machine.
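
A rough way to express that condition, as an untested sketch rather than a
proposed patch (the helper name is made up; it assumes it lives next to
select_nohz_load_balancer() in kernel/sched_fair.c so that for_each_domain()
is visible, and that the caller holds the usual RCU protection for the
domain tree):

/*
 * Hypothetical helper: true when the topmost sched_domain of @cpu already
 * spans every online CPU, i.e. there is no higher level left for an idle
 * load balancer to be useful on.
 */
static bool sched_domain_covers_machine(int cpu)
{
	struct sched_domain *sd, *top = NULL;

	for_each_domain(cpu, sd)	/* walk from lowest to highest level */
		top = sd;

	return !top || cpumask_equal(sched_domain_span(top), cpu_online_mask);
}

That ties the bail-out to the topology that was actually built, rather than
to what mc_capable() happens to report on a given architecture.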

> Signed-off-by: Dominik Brodowski <linux@...inikbrodowski.net>
> 
> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index 5a5ea2c..8ad8a03 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -3290,6 +3290,9 @@ int select_nohz_load_balancer(int stop_tick)
>  	if (stop_tick) {
>  		cpu_rq(cpu)->in_nohz_recently = 1;
>  
> +		if (!mc_capable())
> +			return 0;
> +
>  		if (!cpu_active(cpu)) {
>  			if (atomic_read(&nohz.load_balancer) != cpu)
>  				return 0;
> @@ -3339,6 +3342,9 @@ int select_nohz_load_balancer(int stop_tick)
>  		if (!cpumask_test_cpu(cpu, nohz.cpu_mask))
>  			return 0;
>  
> +		if (!mc_capable())
> +			return 0;
> +
>  		cpumask_clear_cpu(cpu, nohz.cpu_mask);
>  
>  		if (atomic_read(&nohz.load_balancer) == cpu)

