Message-ID: <20130110114044.GE2046@e103034-lin>
Date: Thu, 10 Jan 2013 11:40:44 +0000
From: Morten Rasmussen <Morten.Rasmussen@....com>
To: Alex Shi <alex.shi@...el.com>
Cc: "mingo@...hat.com" <mingo@...hat.com>,
"peterz@...radead.org" <peterz@...radead.org>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"arjan@...ux.intel.com" <arjan@...ux.intel.com>,
"bp@...en8.de" <bp@...en8.de>, "pjt@...gle.com" <pjt@...gle.com>,
"namhyung@...nel.org" <namhyung@...nel.org>,
"efault@....de" <efault@....de>,
"vincent.guittot@...aro.org" <vincent.guittot@...aro.org>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"preeti@...ux.vnet.ibm.com" <preeti@...ux.vnet.ibm.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 15/22] sched: log the cpu utilization at rq
On Sat, Jan 05, 2013 at 08:37:44AM +0000, Alex Shi wrote:
> The cpu's utilization measures how busy the cpu is:
> 	util = cpu_rq(cpu)->avg.runnable_avg_sum
> 		/ cpu_rq(cpu)->avg.runnable_avg_period;
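>
> For example, a cpu that has been runnable for 512 of the last 1024
> tracked (decayed) time units has util = 512 / 1024 = 0.5, i.e. 50 in
> the percentage form used below.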
>
> Since util never exceeds 1, we use its percentage value in later
> calculations, and set FULL_UTIL to 99%.
>
> In the later power-aware scheduling, we care about how busy the cpu
> is, not the weight of its load. Power consumption is related to busy
> time, not to load weight.
>
> Signed-off-by: Alex Shi <alex.shi@...el.com>
> ---
> kernel/sched/debug.c | 1 +
> kernel/sched/fair.c | 4 ++++
> kernel/sched/sched.h | 4 ++++
> 3 files changed, 9 insertions(+)
>
> diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
> index 2cd3c1b..e4035f7 100644
> --- a/kernel/sched/debug.c
> +++ b/kernel/sched/debug.c
> @@ -318,6 +318,7 @@ do { \
>
> P(ttwu_count);
> P(ttwu_local);
> + P(util);
>
> #undef P
> #undef P64
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index ee015b8..7bfbd69 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1495,8 +1495,12 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
>
> static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
> {
> + u32 period;
> __update_entity_runnable_avg(rq->clock_task, &rq->avg, runnable);
> __update_tg_runnable_avg(&rq->avg, &rq->cfs);
> +
> + period = rq->avg.runnable_avg_period ? rq->avg.runnable_avg_period : 1;
> + rq->util = rq->avg.runnable_avg_sum * 100 / period;
The existing tg->runnable_avg and cfs_rq->tg_runnable_contrib variables
both hold rq->avg.runnable_avg_sum / rq->avg.runnable_avg_period scaled
by NICE_0_LOAD (1024). Why not use one of the existing variables instead
of introducing a new one?
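Something along these lines would do it, as a rough sketch (untested
against this series; rq_util_pct() is a made-up name, and it assumes
the NICE_0_LOAD scaling described above):

	/*
	 * Derive a 0..100 utilization percentage from the existing
	 * NICE_0_LOAD-scaled contribution instead of adding rq->util.
	 * tg_runnable_contrib ~= runnable_avg_sum * NICE_0_LOAD / period
	 */
	static inline unsigned int rq_util_pct(struct rq *rq)
	{
		return rq->cfs.tg_runnable_contrib * 100 >> NICE_0_SHIFT;
	}

That avoids tracking the same ratio in two places, though
tg_runnable_contrib is only kept up to date under
CONFIG_FAIR_GROUP_SCHED.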
Morten
> }
>
> /* Add the load generated by se into cfs_rq's child load-average */
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 66b08a1..3c6e803 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -350,6 +350,9 @@ extern struct root_domain def_root_domain;
>
> #endif /* CONFIG_SMP */
>
> +/* Treat the cpu as fully loaded when its utilization percentage reaches 99 */
> +#define FULL_UTIL 99
> +
> /*
> * This is the main, per-CPU runqueue data structure.
> *
> @@ -481,6 +484,7 @@ struct rq {
> #endif
>
> struct sched_avg avg;
> + unsigned int util;
> };
>
> static inline int cpu_of(struct rq *rq)
> --
> 1.7.12
>