Message-ID: <1361201142.23152.152.camel@gandalf.local.home>
Date:	Mon, 18 Feb 2013 10:25:42 -0500
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ingo Molnar <mingo@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Paul Turner <pjt@...gle.com>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mike Galbraith <efault@....de>,
	Arnaldo Carvalho de Melo <acme@...radead.org>,
	Clark Williams <clark@...hat.com>,
	Andrew Theurer <habanero@...ibm.com>
Subject: Re: [RFC] sched: The removal of idle_balance()

On Mon, 2013-02-18 at 13:43 +0530, Srikar Dronamraju wrote:
> > The cache misses dropped by ~23% and migrations dropped by ~28%. I
> > really believe that idle_balance() hurts performance, and not just
> > for something like hackbench: the aggressive migration that
> > idle_balance() causes takes a large toll on a process's cache.
> > 
> > Thinking about it some more, just because we go idle isn't enough reason
> > to pull a runnable task over. CPUs go idle all the time, and tasks are
> > woken up all the time. There's no reason we can't just wait for the sched
> > tick to decide it's time to do a bit of balancing. Sure, it would be nice
> > if the idle CPU did the work, but I think that frame of mind was an
> > incorrect notion from back in the early 2000s and does not apply to
> > today's hardware, or perhaps it doesn't apply to the (relatively) new
> > CFS scheduler. If you want aggressive scheduling, make the task RT, and
> > it will do aggressive scheduling.
> > 
> 
> How is it that the normal tick-based load balancing gets it right while
> idle_balance() gets it wrong?  Can it be because of the different
> cpu_idle_type?
> 

Currently it looks to be a fluke on my box, as this performance increase
can't be duplicated elsewhere (yet). But from looking at my traces, it
seems that my box does the idle balance at just the wrong time, which
causes these issues.
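
For reference, the cpu_idle_type distinction being asked about and the
idle_balance() call site that the RFC removes look roughly like this in
kernels of that era (a simplified sketch, not the exact 3.8 source):

	/* include/linux/sched.h: the idle type handed to the load balancer.
	 * idle_balance() balances as CPU_NEWLY_IDLE, while the periodic
	 * tick path balances as CPU_IDLE or CPU_NOT_IDLE, so the two paths
	 * can make different decisions. */
	enum cpu_idle_type {
		CPU_IDLE,
		CPU_NOT_IDLE,
		CPU_NEWLY_IDLE,
		CPU_MAX_IDLE_TYPES
	};

	/* kernel/sched/core.c, __schedule(): when the runqueue goes empty,
	 * the CPU immediately tries to pull work rather than waiting for
	 * the next sched tick.  This is the call the RFC removes. */
	if (unlikely(!rq->nr_running))
		idle_balance(cpu, rq);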

-- Steve


