Message-ID: <CAKfTPtCCX+wue+EupB=0firvaA8mZAFpGwtPOb43BUCNKOfJxg@mail.gmail.com>
Date: Fri, 26 Apr 2013 16:23:10 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel <linux-kernel@...r.kernel.org>,
LAK <linux-arm-kernel@...ts.infradead.org>,
"linaro-kernel@...ts.linaro.org" <linaro-kernel@...ts.linaro.org>,
Ingo Molnar <mingo@...nel.org>,
Russell King - ARM Linux <linux@....linux.org.uk>,
Paul Turner <pjt@...gle.com>,
Santosh <santosh.shilimkar@...com>,
Morten Rasmussen <Morten.Rasmussen@....com>,
Chander Kashyap <chander.kashyap@...aro.org>,
"cmetcalf@...era.com" <cmetcalf@...era.com>,
"tony.luck@...el.com" <tony.luck@...el.com>,
Alex Shi <alex.shi@...el.com>,
Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Len Brown <len.brown@...el.com>,
Arjan van de Ven <arjan@...ux.intel.com>,
Amit Kucheria <amit.kucheria@...aro.org>,
Jonathan Corbet <corbet@....net>,
Lukasz Majewski <l.majewski@...sung.com>
Subject: Re: [PATCH 07/14] sched: agressively pack at wake/fork/exec
On 26 April 2013 15:08, Peter Zijlstra <peterz@...radead.org> wrote:
> On Thu, Apr 25, 2013 at 07:23:23PM +0200, Vincent Guittot wrote:
>> According to the packing policy, the scheduler can pack tasks at different
>> steps:
>> -SCHED_PACKING_NONE level: we don't pack any task.
>> -SCHED_PACKING_DEFAULT: we only pack small tasks at wake up when the system
>> is not busy.
>> -SCHED_PACKING_FULL: we pack tasks at wake up until a CPU becomes full. During
>> a fork or an exec, we assume that the new task is a full running one and we
>> look for an idle CPU close to the buddy CPU.
>
> This changelog is very short on explaining how it will go about achieving these
> goals.
I could move some of the explanation from the cover letter into the commit
message:

In this case, the CPUs pack their tasks onto their buddy until it becomes
full. Unlike the previous step, we can't keep the same buddy, so we update it
during load balance. During the periodic load balance, the scheduler computes
the activity of the system thanks to the runnable_avg_sum and the cpu_power
of all CPUs, and then it defines the CPUs that will be used to handle the
current activity. The selected CPUs will be their own buddy and will
participate in the default load-balancing mechanism in order to share the
tasks in a fair way, whereas the non-selected CPUs will not, and their buddy
will be the last selected CPU.
The behavior can be summarized as: the scheduler defines how many CPUs are
required to handle the current activity, keeps the tasks on these CPUs, and
performs normal load balancing among them.
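
To make that concrete, here is a rough userspace sketch (not the kernel code
itself) of how the activity could be summed from each CPU's
runnable_avg_sum/runnable_avg_period and cpu_power and turned into a number
of required CPUs; all names, values and the cpus_needed() helper below are
illustrative only:

/*
 * Toy model: derive the number of CPUs needed for the current activity
 * from per-CPU runnable_avg_sum/runnable_avg_period and cpu_power.
 * This is a sketch under assumptions, not the scheduler implementation.
 */
#include <stdio.h>

#define SCHED_POWER_SCALE 1024

struct cpu_stat {
	unsigned long runnable_avg_sum;    /* tracked runnable time */
	unsigned long runnable_avg_period; /* tracking period */
	unsigned long cpu_power;           /* capacity, SCHED_POWER_SCALE = full */
};

/* Return how many CPUs are needed to handle the current activity. */
static int cpus_needed(const struct cpu_stat *cs, int nr_cpus)
{
	unsigned long activity = 0, capacity_per_cpu = SCHED_POWER_SCALE;
	int i;

	for (i = 0; i < nr_cpus; i++)
		activity += cs[i].runnable_avg_sum * cs[i].cpu_power /
			    cs[i].runnable_avg_period;

	/* Round up: any remaining fraction of activity needs one more CPU. */
	return (activity + capacity_per_cpu - 1) / capacity_per_cpu;
}

int main(void)
{
	struct cpu_stat cs[4] = {
		{ 40000, 47742, 1024 }, { 20000, 47742, 1024 },
		{ 10000, 47742, 1024 }, {  5000, 47742, 1024 },
	};

	printf("CPUs needed: %d\n", cpus_needed(cs, 4));
	return 0;
}

With these made-up numbers, roughly 1.5 CPUs worth of activity rounds up to
2 selected CPUs; the others would get the last selected CPU as their buddy.
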
>
>> Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
>> ---
>> kernel/sched/fair.c | 47 ++++++++++++++++++++++++++++++++++++++++++-----
>> 1 file changed, 42 insertions(+), 5 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 98166aa..874f330 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -3259,13 +3259,16 @@ static struct sched_group *
>> find_idlest_group(struct sched_domain *sd, struct task_struct *p,
>
>
A task that wakes up will be caught by the function check_pack_buddy in
order to stay on the CPUs that participate in the packing effort. We only
use find_idlest_group for fork/exec tasks, which are considered full running
tasks, so we look for the idlest CPU close to the buddy.
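
Roughly speaking, the wake-up decision looks like the toy model below
(again a sketch, not the kernel code; buddy[], cpu_load[], the thresholds
and select_wake_cpu() are all made up for the example):

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4
#define FULL_THRESHOLD 80 /* percent busy at which a CPU is "full" */

static int buddy[NR_CPUS] = { 0, 0, 0, 0 };     /* all CPUs pack onto CPU0 */
static int cpu_load[NR_CPUS] = { 60, 0, 0, 0 }; /* percent busy */

struct task { int load; bool just_forked_or_exec; };

static bool task_is_small(const struct task *p)
{
	return p->load < 25; /* illustrative "small task" threshold */
}

static bool cpu_is_full(int cpu)
{
	return cpu_load[cpu] >= FULL_THRESHOLD;
}

/* Wake-up: keep small tasks on the buddy CPU while it has room. */
static int select_wake_cpu(int prev_cpu, const struct task *p)
{
	int b = buddy[prev_cpu];

	if (!p->just_forked_or_exec && task_is_small(p) && !cpu_is_full(b))
		return b;

	/*
	 * Fork/exec (assumed full running) or a full buddy: in the real
	 * code this falls back to the idlest-CPU search near the buddy
	 * (find_idlest_group()); here we simply return prev_cpu as a
	 * stand-in.
	 */
	return prev_cpu;
}

int main(void)
{
	struct task small = { .load = 10, .just_forked_or_exec = false };
	struct task forked = { .load = 100, .just_forked_or_exec = true };

	printf("small task wakes on CPU: %d\n", select_wake_cpu(2, &small));
	printf("forked task goes to CPU: %d\n", select_wake_cpu(2, &forked));
	return 0;
}
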
> So for packing into power domains, wouldn't you typically pick the busiest
> non-full domain to fill from other non-full domains?
>
> Picking the idlest non-full one seems like it would generate a ping-pong or not
> actually pack anything.