Date:	Wed, 27 Mar 2013 16:47:17 +0800
From:	Alex Shi <alex.shi@...el.com>
To:	Vincent Guittot <vincent.guittot@...aro.org>
CC:	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
	linaro-kernel@...ts.linaro.org, mingo@...nel.org,
	linux@....linux.org.uk, pjt@...gle.com, santosh.shilimkar@...com,
	morten.rasmussen@....com, chander.kashyap@...aro.org,
	cmetcalf@...era.com, tony.luck@...el.com,
	preeti@...ux.vnet.ibm.com, paulmck@...ux.vnet.ibm.com,
	tglx@...utronix.de, len.brown@...el.com, arjan@...ux.intel.com,
	amit.kucheria@...aro.org, corbet@....net
Subject: Re: [RFC PATCH v3 5/6] sched: pack the idle load balance


>>>>> So supposing the current ILB is 6, we'll only check 4, not 0-3, even
>>>>> though there might be a perfectly idle cpu in there.
>>> We will check 4,5,7 at MC level in order to pack in the group of A15
>>> (because they are not sharing the same power domain). If none of them
>>> are idle, we will look at CPU level and will check CPUs 0-3.
>>
>> So you just widen the search by a fixed step here.
> 
> I have modified the find_new_ilb function to look for the best idle
> CPU instead of just picking the first CPU of idle_cpus_mask.

That's better.
But using a fixed buddy is still not flexible, and it adds more checking
in this time-critical balancing path.
Consider that on most SMP systems the CPUs are all equal, so any other
CPU could play the buddy role in your design. That means having no
dedicated buddy CPU is better, as in my version of packing.
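
For context, the stock find_new_ilb() in kernel/sched/fair.c at this point
simply took cpumask_first(nohz.idle_cpus_mask). A buddy-aware "best idle
CPU" variant of the kind discussed above might look roughly like the
following. This is only a sketch, not Vincent's actual patch; the per-CPU
buddy_cpu variable and the preference order are assumptions:

static inline int find_new_ilb(int call_cpu)
{
	int buddy = per_cpu(buddy_cpu, call_cpu);	/* assumed per-cpu buddy */
	int cpu;

	/* Prefer the buddy if it is nohz-idle and really idle right now. */
	if (buddy >= 0 && cpumask_test_cpu(buddy, nohz.idle_cpus_mask) &&
	    idle_cpu(buddy))
		return buddy;

	/* Otherwise pick the first CPU in the mask that is genuinely idle,
	 * instead of blindly taking cpumask_first(). */
	for_each_cpu(cpu, nohz.idle_cpus_mask) {
		if (idle_cpu(cpu))
			return cpu;
	}

	return nr_cpu_ids;
}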

> 
>>>
>>>>>
>>>>> Also, your scheme fails to pack when cpus 0,4 are filled, even when
>>>>> there are idle cores around.
>>> The primary target is to pack tasks only when the system is not busy,
>>> so you get a power improvement without a performance decrease. The
>>> is_light_task function returns false and the is_buddy_busy function
>>> returns true before the buddy is fully loaded, and the scheduler then
>>> falls back to the default behavior, which spreads tasks and races to
>>> idle.
>>>
>>> We can extend the buddy CPU and the packing mechanism to fill one CPU
>>> before filling another buddy, but that is not always the best choice for
>>> performance and/or power, and it would therefore require a knob to
>>> select this full-packing mode.
>>
>> Using just one buddy to pack tasks for a whole level of CPUs definitely
>> has a scalability problem. That is not good for power saving in most
>> scenarios.
>>
> 
> This patch doesn't try to pack all kinds of tasks in every scenario, but
> only the small tasks that run for less than 10ms, and only when the CPU
> is not already too busy with other tasks, so you don't have to cope with
> long wake-up latency or a performance regression, and only one CPU is
> powered up for these background activities. Nevertheless, I can extend
> the small-task packing to pack all tasks in any scenario onto as few
> CPUs as possible. That would mean, for example, choosing a new buddy CPU
> during ILB selection when the previous one is full, and adding a knob to
> select this mode, since it will change the performance of the system.
> But the primary target is to have no knob and to avoid reducing
> performance in most scenarios.
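
As described in the quoted paragraphs above, packing is gated by
is_light_task() and is_buddy_busy(), so only small background tasks go to a
not-yet-busy buddy and everything else falls back to the default
spread-and-race-to-idle behavior. A rough sketch of that gate is below; the
helper bodies, field names and thresholds are illustrative assumptions, not
the actual patch code:

/* Sketch only: bodies and thresholds are assumptions. */
static bool is_buddy_busy(int cpu)
{
	struct rq *rq = cpu_rq(cpu);

	/* Treat the buddy as busy once its runnable average gets close to
	 * its capacity, scaled down when it already runs several tasks. */
	return rq->avg.runnable_avg_sum >
	       rq->avg.runnable_avg_period / (rq->nr_running + 2);
}

static bool is_light_task(struct task_struct *p)
{
	/* "Light" here means the task is runnable only a small fraction
	 * of the time. */
	return p->se.avg.runnable_avg_sum * 5 < p->se.avg.runnable_avg_period;
}

static bool check_pack_buddy(int buddy, struct task_struct *p)
{
	/* Pack only when there is a buddy, it still has room, and the task
	 * is small; otherwise keep the default spreading behavior. */
	return buddy >= 0 && !is_buddy_busy(buddy) && is_light_task(p);
}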

Arguing about the performance/power balance doesn't make much sense
without a detailed scenario; we just want a flexible compromise. But a
fixed buddy CPU is not flexible, and it may miss many power-saving
opportunities on x86 systems. For example, if 2 SMT CPUs can handle all
the tasks, we don't need to wake another core; and if 2 cores in one
socket can handle the tasks, we don't need to wake up another socket,
as in the sketch below.
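
A minimal illustration of that idea, purely hypothetical and not code from
either patch set: walk up the topology and only wake a CPU in a wider span
when the current level is out of capacity. level_can_handle_load() is an
assumed helper comparing the runnable load against the capacity of the CPUs
already awake at that level:

static int select_packing_cpu(int this_cpu)
{
	struct sched_domain *sd;
	int cpu;

	/* Walk from SMT siblings up through cores to the socket. */
	for_each_domain(this_cpu, sd) {
		/* If the CPUs already awake at this level can absorb the
		 * load, stay here and wake nothing wider. */
		if (level_can_handle_load(sd))
			return this_cpu;

		/* This level is saturated: wake an idle CPU inside this
		 * span before even considering a wider one. */
		for_each_cpu(cpu, sched_domain_span(sd))
			if (idle_cpu(cpu))
				return cpu;
	}

	return this_cpu;
}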
> 
> Regards,
> Vincent
> 
>>
>> --
>> Thanks Alex


-- 
Thanks Alex
