Date:	Tue, 19 Feb 2013 15:29:07 +0800
From:	Michael Wang <wangyun@...ux.vnet.ibm.com>
To:	Rakib Mullick <rakib.mullick@...il.com>
CC:	Steven Rostedt <rostedt@...dmis.org>,
	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ingo Molnar <mingo@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Paul Turner <pjt@...gle.com>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mike Galbraith <efault@....de>,
	Arnaldo Carvalho de Melo <acme@...radead.org>,
	Clark Williams <clark@...hat.com>,
	Andrew Theurer <habanero@...ibm.com>
Subject: Re: [RFC] sched: The removal of idle_balance()

On 02/19/2013 12:13 PM, Rakib Mullick wrote:
> On Mon, Feb 18, 2013 at 9:25 PM, Steven Rostedt <rostedt@...dmis.org> wrote:
>> On Mon, 2013-02-18 at 13:43 +0530, Srikar Dronamraju wrote:
>>>> The cache misses dropped by ~23% and migrations dropped by ~28%. I
>>>> really believe that idle_balance() hurts performance, and not just
>>>> for something like hackbench: the aggressive migration that
>>>> idle_balance() causes takes a large hit on a process's cache.
>>>>
>>>> Think about it some more: just because we go idle isn't enough reason to
>>>> pull a runnable task over. CPUs go idle all the time, and tasks are woken
>>>> up all the time. There's no reason we can't just wait for the sched
>>>> tick to decide it's time to do a bit of balancing. Sure, it would be nice
>>>> if the idle CPU did the work. But I think that frame of mind was an
>>>> incorrect notion from back in the early 2000s and does not apply to
>>>> today's hardware, or perhaps it doesn't apply to the (relatively) new
>>>> CFS scheduler. If you want aggressive scheduling, make the task rt, and
>>>> it will do aggressive scheduling.
>>>>
>>>
>>> How is it that the normal tick-based load balancing gets it right while
>>> idle_balance() gets it wrong?  Could it be because of the different
>>> cpu_idle_type?
>>>
>>
>> Currently it looks to be a fluke on my box, as this performance increase
>> can't be duplicated elsewhere (yet). But from looking at my traces, it
>> seems that my box does the idle balance at just the wrong time, which
>> causes these issues.
>>
> A default hackbench run creates 400 tasks (10 * 40). On an i7 system (4
> cores, HT), idle_balance() shouldn't come into play, because on an 8-cpu
> system we're assigning 400 tasks. If idle_balance() kicks in, that
> means we've done something wrong while distributing tasks among the
> CPUs - doesn't that indicate a problem during fork/exec/wake balancing?

Hmm... I think that unless we have a guarantee that all those threads, at
any moment, behave the same way, then even if each cpu has the same load,
there is still a chance that some cpu will get through its work faster
when it owns more 'sleepy' tasks at some moment.

So if idle_balance() happens, I would say that the workload is not heavy
enough to keep all the cpus busy all the time, but I wouldn't say it's
imbalanced.
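
To make that concrete, here is a toy user-space model (nothing like the
scheduler's real accounting; the task counts, work amounts and sleep
patterns below are invented purely for illustration). Both "cpus" get the
same number of tasks and the same total amount of work, but the tasks on
the second cpu sleep more, so its runqueue is momentarily empty far more
often, and each of those empty moments is exactly where idle_balance()
would be invoked, even though nothing was wrong with how the tasks were
distributed:

/* toy_idle.c - a made-up model, not the kernel's logic */
#include <stdio.h>

#define NTASKS  4       /* tasks per cpu (same on both)     */
#define WORK    100     /* run slices each task needs       */
#define SLICES  1000    /* length of the simulated window   */

struct task {
        int left;       /* remaining run slices             */
        int sleep_mod;  /* task sleeps on every Nth slice   */
};

/*
 * Count the slices in which the cpu still has unfinished tasks but none
 * of them is runnable (they are all sleeping).  Those are the moments
 * where idle_balance() would be invoked.
 */
static int empty_rq_slices(struct task *ts, int n)
{
        int idle = 0;

        for (int t = 0; t < SLICES; t++) {
                int pending = 0, ran = 0;

                for (int i = 0; i < n; i++) {
                        if (ts[i].left <= 0)
                                continue;       /* task finished      */
                        pending = 1;
                        if (t % ts[i].sleep_mod == 0)
                                continue;       /* task sleeping now  */
                        ts[i].left--;           /* run it this slice  */
                        ran = 1;
                        break;                  /* one task per slice */
                }
                if (pending && !ran)
                        idle++;
        }
        return idle;
}

int main(void)
{
        struct task cpu0[NTASKS], cpu1[NTASKS];

        for (int i = 0; i < NTASKS; i++) {
                /* same task count and same total work on both cpus,
                 * but cpu1's tasks spend much more time sleeping */
                cpu0[i] = (struct task){ .left = WORK, .sleep_mod = 10 };
                cpu1[i] = (struct task){ .left = WORK, .sleep_mod = 2 };
        }

        printf("cpu0 empty-runqueue slices: %d\n", empty_rq_slices(cpu0, NTASKS));
        printf("cpu1 empty-runqueue slices: %d\n", empty_rq_slices(cpu1, NTASKS));
        return 0;
}

With these made-up numbers cpu1 ends up with roughly nine times as many
empty-runqueue slices as cpu0 (400 vs. 45 here), although both were handed
the same 4 tasks and the same 400 slices of work. That is all I mean by
"not heavy enough" rather than "imbalanced".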

Regards,
Michael Wang

> 
> Thanks,
> Rakib.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
