Message-ID: <CAKfTPtDK6vSDhGa9=ReaHt_oO8Xz2agZZ4x3q5Wvv6gXjhFevg@mail.gmail.com>
Date:	Wed, 27 Mar 2013 09:05:18 +0100
From:	Vincent Guittot <vincent.guittot@...aro.org>
To:	Alex Shi <alex.shi@...el.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
	linaro-kernel@...ts.linaro.org, mingo@...nel.org,
	linux@....linux.org.uk, pjt@...gle.com, santosh.shilimkar@...com,
	morten.rasmussen@....com, chander.kashyap@...aro.org,
	cmetcalf@...era.com, tony.luck@...el.com,
	preeti@...ux.vnet.ibm.com, paulmck@...ux.vnet.ibm.com,
	tglx@...utronix.de, len.brown@...el.com, arjan@...ux.intel.com,
	amit.kucheria@...aro.org, corbet@....net
Subject: Re: [RFC PATCH v3 5/6] sched: pack the idle load balance

On 27 March 2013 05:56, Alex Shi <alex.shi@...el.com> wrote:
> On 03/26/2013 11:55 PM, Vincent Guittot wrote:
>>> > So extrapolating that to a 4+4 big-little you'd get something like:
>>> >
>>> >       |   little  A9  ||   big A15     |
>>> >       | 0 | 1 | 2 | 3 || 4 | 5 | 6 | 7 |
>>> > ------+---+---+---+---++---+---+---+---+
>>> > buddy | 0 | 0 | 0 | 0 || 0 | 4 | 4 | 4 |
>>> >
>>> > Right?
>> yes
>>
>>> >
>>> > So supposing the current ILB is 6, we'll only check 4, not 0-3, even
>>> > though there might be a perfectly idle cpu in there.
>> We will check CPUs 4, 5 and 7 at the MC level in order to pack in the
>> group of A15s (because they do not share the same power domain). If
>> none of them is idle, we will look at the CPU level and check CPUs 0-3.
>
> So you increase by a fixed step here.

I have modified the find_new_ilb function to look for the best idle
CPU instead of just picking the first CPU of idle_cpus_mask.
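
To make the idea concrete, here is a small user-space model of that
selection. This is only a sketch, not the patch itself: the buddy[]
table mirrors the 4+4 example above, and pick_ilb_cpu() is a made-up
name standing in for the modified find_new_ilb(); the fallback scan
stands in for cpumask_first(nohz.idle_cpus_mask).

/*
 * Illustrative user-space model only; the real change lives in
 * find_new_ilb() in kernel/sched/fair.c.
 */
#include <stdio.h>
#include <stdbool.h>

#define NR_CPUS 8

/* buddy CPU of each CPU, as in the 4+4 big.LITTLE table above */
static const int buddy[NR_CPUS] = { 0, 0, 0, 0, 0, 4, 4, 4 };

/*
 * Pick the idle load-balance target for @cpu: prefer its buddy if the
 * buddy is idle, otherwise fall back to the first idle CPU.
 */
static int pick_ilb_cpu(int cpu, const bool idle[NR_CPUS])
{
	int b = buddy[cpu];

	if (idle[b])
		return b;

	for (int i = 0; i < NR_CPUS; i++)
		if (idle[i])
			return i;

	return -1;	/* no idle CPU available */
}

int main(void)
{
	bool idle[NR_CPUS] = { [0] = true, [4] = true, [7] = true };

	/* CPU 6 triggers nohz balancing: its buddy (CPU 4) is idle, so
	 * the ILB is packed there instead of on the first idle CPU. */
	printf("ILB for CPU 6 -> CPU %d\n", pick_ilb_cpu(6, idle));
	return 0;
}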

>>
>>> >
>>> > Also, your scheme fails to pack when cpus 0,4 are filled, even when
>>> > there are idle cores around.
>> The primary target is to pack the tasks only when the system is not
>> busy, so you get a power improvement without a performance decrease.
>> The is_light_task function returns false and the is_buddy_busy
>> function returns true before the buddy is fully loaded, and the
>> scheduler then falls back to the default behavior, which spreads
>> tasks and races to idle.
>>
>> We can extend the buddy CPU and the packing mechanism to fill one CPU
>> before filling another buddy, but that is not always the best choice
>> for performance and/or power, and it would therefore imply adding a
>> knob to select this full-packing mode.
>
> Using just one buddy to pack tasks for a whole level of CPUs
> definitely has a scalability problem. That is not good for power
> saving in most scenarios.
>

This patch doesn't aim to pack all kinds of tasks in all scenarios,
but only small tasks that run for less than 10ms, and only when the
CPU is not already too busy with other tasks, so you don't have to
cope with long wake-up latency or a performance regression, and only
one CPU will be powered up for these background activities.
Nevertheless, I can extend the small-task packing to pack all tasks in
any scenario into as few CPUs as possible. This would imply, for
example, choosing a new buddy CPU during the ILB selection when the
previous one is full, and adding a knob to select this mode, which
will modify the performance of the system. But the primary target is
to avoid needing a knob and to avoid reducing performance in most
scenarios.
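
As a rough illustration of that gating, here is a compileable sketch.
Only the 10ms figure comes from this mail; the thresholds, struct
fields and the prefer_buddy() helper below are invented for the
sketch, while the real is_light_task/is_buddy_busy checks use the
scheduler's per-entity load tracking.

#include <stdbool.h>
#include <stdio.h>

struct task_stats {
	unsigned int runtime_ms;	/* recent average runtime of the task */
};

struct cpu_stats {
	unsigned int load_pct;		/* recent CPU load, 0..100 */
	unsigned int nr_running;	/* runnable tasks on the CPU */
};

/* "Small" task: runs for less than 10ms (figure taken from the mail). */
static bool is_light_task(const struct task_stats *p)
{
	return p->runtime_ms < 10;
}

/* Buddy considered busy when already loaded or crowded with tasks;
 * the 50% / 2-task thresholds are placeholders for this sketch. */
static bool is_buddy_busy(const struct cpu_stats *buddy)
{
	return buddy->load_pct > 50 || buddy->nr_running > 2;
}

/* Pack on the buddy only for light tasks on a non-busy buddy;
 * otherwise fall back to the default spread-and-race-to-idle path. */
static bool prefer_buddy(const struct task_stats *p,
			 const struct cpu_stats *buddy)
{
	return is_light_task(p) && !is_buddy_busy(buddy);
}

int main(void)
{
	struct task_stats bg = { .runtime_ms = 3 };
	struct cpu_stats buddy = { .load_pct = 20, .nr_running = 1 };

	printf("pack on buddy: %s\n",
	       prefer_buddy(&bg, &buddy) ? "yes" : "no");
	return 0;
}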

Regards,
Vincent

>
> --
> Thanks Alex
