Date:	Mon, 1 Jul 2013 16:49:22 +0800
From:	Lei Wen <adrian.wenl@...il.com>
To:	Alex Shi <alex.shi@...el.com>
Cc:	Lei Wen <leiwen@...vell.com>, Paul Turner <pjt@...gle.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...e.hu>, mingo@...hat.com,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/2] sched: add trace event for per-entity tracking

Alex,

On Mon, Jul 1, 2013 at 4:06 PM, Alex Shi <alex.shi@...el.com> wrote:
> On 07/01/2013 03:10 PM, Lei Wen wrote:
>> Thanks for the per-entity tracking feature; with its help we can see the
>> details of each task.
>> This patch adds trace support for it, so that we can quickly see the
>> system status over a large time scale; for example, we can now get each
>> runqueue's usage ratio via:
>>
>> cfs_rq's usage ratio = cfs_rq->runnable_load_avg/cfs_rq->load.weight
>>
>
> The direct usage ratio is rq.avg.runnable_avg_sum / rq.avg.runnable_avg_period.
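
(With made-up numbers: a runnable_avg_sum of 23871 over a
runnable_avg_period of 47742 gives ~0.5, i.e. that cpu was runnable for
about half of the tracked window.)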


From the parsed data diagram, that looks nicer than my previous
calculation, which used load. :)
BTW, do you think there is any value in the calculation below?
cfs_rq->runnable_load_avg/cfs_rq->load.weight

I think that by applying this calculation to the
runnable_load_avg/blocked_load_avg trace results,
we may catch abnormal load distribution when debugging.
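
Something like the sketch below is what I have in mind (a hypothetical
helper, not in your patch; I scale by SCHED_POWER_SCALE so the result is
comparable with rq->util, and the guard against a zero load.weight is my
own assumption):

	/* sketch: scaled usage ratio of a cfs_rq, 1024 == fully loaded */
	static inline unsigned long cfs_rq_usage_ratio(struct cfs_rq *cfs_rq)
	{
		u32 weight = cfs_rq->load.weight ? cfs_rq->load.weight : 1;

		return div_u64((u64)cfs_rq->runnable_load_avg
					<< SCHED_POWER_SHIFT, weight);
	}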


>
> one patch from obsolete power-scheduling could be reference for this:
> git@...hub.com:alexshi/power-scheduling.git power-scheduling
>
> From 081cd4bcbccfaa1930b031e4dfbf9d23b8c0d5ab Mon Sep 17 00:00:00 2001
> From: Alex Shi <alex.shi@...el.com>
> Date: Fri, 7 Dec 2012 21:37:58 +0800
> Subject: [PATCH 02/23] sched: log the cpu utilization at rq
>
> The cpu's utilization measures how busy the cpu is:
>         util = cpu_rq(cpu)->avg.runnable_avg_sum * SCHED_POWER_SCALE
>                 / cpu_rq(cpu)->avg.runnable_avg_period;
>
> Since the util is no more than 1, we scale its value by 1024, the same
> as SCHED_POWER_SCALE, and set FULL_UTIL to 1024.
>
> Later power-aware scheduling is sensitive to how busy the cpu is, since
> power consumption is tightly related to cpu busy time.
>
> BTW, rq->util can be used for other purposes if needed, not only power
> scheduling.
>
> Signed-off-by: Alex Shi <alex.shi@...el.com>
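
Just to check that I read the scaling right: a cpu that was runnable for
half of the decayed period would get

	util = runnable_avg_sum * SCHED_POWER_SCALE / runnable_avg_period
	     = (period / 2) * 1024 / period
	     = 512

i.e. half of FULL_UTIL, while a fully busy cpu saturates at 1024.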


Nice patch, will it be merged? :)
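BTW, with the debug.c hunk applied, rq->util should show up per cpu in
/proc/sched_debug (assuming CONFIG_SCHED_DEBUG is set), so something like
grep '\.util' /proc/sched_debug would make it easy to eyeball.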

Thanks,
Lei
> ---
>  include/linux/sched.h | 2 +-
>  kernel/sched/debug.c  | 1 +
>  kernel/sched/fair.c   | 5 +++++
>  kernel/sched/sched.h  | 4 ++++
>  4 files changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 9539597..4e4d9ee 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -794,7 +794,7 @@ enum cpu_idle_type {
>  #define SCHED_LOAD_SCALE       (1L << SCHED_LOAD_SHIFT)
>
>  /*
> - * Increase resolution of cpu_power calculations
> + * Increase resolution of cpu_power and rq->util calculations
>   */
>  #define SCHED_POWER_SHIFT      10
>  #define SCHED_POWER_SCALE      (1L << SCHED_POWER_SHIFT)
> diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
> index 75024a6..f5db759 100644
> --- a/kernel/sched/debug.c
> +++ b/kernel/sched/debug.c
> @@ -311,6 +311,7 @@ do {                                                                        \
>
>         P(ttwu_count);
>         P(ttwu_local);
> +       P(util);
>
>  #undef P
>  #undef P64
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 2e49c3f..7124244 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1495,8 +1495,13 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
>
>  static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
>  {
> +       u32 period;
>         __update_entity_runnable_avg(rq->clock_task, &rq->avg, runnable);
>         __update_tg_runnable_avg(&rq->avg, &rq->cfs);
> +
> +       period = rq->avg.runnable_avg_period ? rq->avg.runnable_avg_period : 1;
> +       rq->util = (u64)(rq->avg.runnable_avg_sum << SCHED_POWER_SHIFT)
> +                               / period;
>  }
>
>  /* Add the load generated by se into cfs_rq's child load-average */
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 804ee41..8682110 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -351,6 +351,9 @@ extern struct root_domain def_root_domain;
>
>  #endif /* CONFIG_SMP */
>
> +/* full cpu utilization */
> +#define FULL_UTIL      SCHED_POWER_SCALE
> +
>  /*
>   * This is the main, per-CPU runqueue data structure.
>   *
> @@ -482,6 +485,7 @@ struct rq {
>  #endif
>
>         struct sched_avg avg;
> +       unsigned int util;
>  };
>
>  static inline int cpu_of(struct rq *rq)
> --
> 1.7.12
>
> --
> Thanks
>     Alex
