Message-ID: <20090319163853.GL2990@dirshya.in.ibm.com>
Date: Thu, 19 Mar 2009 22:08:53 +0530
From: Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>
To: Gautham R Shenoy <ego@...ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
Suresh Siddha <suresh.b.siddha@...el.com>,
Balbir Singh <balbir@...ibm.com>
Subject: Re: [PATCH 3 3/6] sched: Add Comments at the beginning of
find_busiest_group.
* Gautham R Shenoy <ego@...ibm.com> [2009-03-18 14:52:33]:
> Currently there are no comments pertaining to power-savings balance in
> the function find_busiest_group. Add appropriate comments.
>
> Signed-off-by: Gautham R Shenoy <ego@...ibm.com>
> Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>
> Cc: Ingo Molnar <mingo@...e.hu>
> ---
>
> kernel/sched.c | 17 +++++++++++++++++
> 1 files changed, 17 insertions(+), 0 deletions(-)
>
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 407ee03..864c6ca 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -3090,6 +3090,23 @@ static int move_one_task(struct rq *this_rq, int this_cpu, struct rq *busiest,
> * find_busiest_group finds and returns the busiest CPU group within the
> * domain. It calculates and returns the amount of weighted load which
> * should be moved to restore balance via the imbalance parameter.
> + *
> + * Power-savings-balance: Through the sysfs tunables sched_mc/smt_power_savings
> + * he user can opt for aggressive task consolidation as a means to save power.
^ the
> + * When this is activated, we would have the SD_POWERSAVINGS_BALANCE flag
^^^^ When sched_{mc,smt}_powersavings is activated, then
SD_POWERSAVINGS_BALANCE...
> + * set for appropriate sched_domains,
> + *
> + * Within such sched_domains, find_busiest_group would try to identify
> + * a sched_group which can be freed-up and whose tasks can be migrated to
> + * a sibling group which has the capacity to accomodate the former's tasks.
^^^ remaining capacity
> + * If such a "can-go-idle" sched_group does exist, then the sibling group
^^^^^^^^
this group is returned
as busiest_group
> + * which can accomodate it's tasks is returned as the busiest group.
^^^^^^^^^^^^^^^^ This is the
group_leader and busiest_group is
returned if the current cpu is
a member of group leader.
> + *
> + * Furthermore, if the user opts for more aggressive power-aware load
> + * balancing through sched_smt/mc_power_savings = 2, i.e when the
> + * active_power_savings_level greater or equal to POWERSAVINGS_BALANCE_WAKEUP,
> + * find_busiest_group will also nominate the preferred CPU, on which the tasks
> + * should hence forth be woken up on, instead of bothering an idle-cpu.
> */
> static struct sched_group *
> find_busiest_group(struct sched_domain *sd, int this_cpu,
Acked-by: Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/