Message-ID: <CAKfTPtD6ShGQtNygfmEaiE4t7AWFh_avobmvnNsoX8nJ9Y91TQ@mail.gmail.com>
Date:	Wed, 17 Dec 2014 09:22:31 +0100
From:	Vincent Guittot <vincent.guittot@...aro.org>
To:	Morten Rasmussen <morten.rasmussen@....com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	"mingo@...hat.com" <mingo@...hat.com>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Paul Turner <pjt@...gle.com>,
	Benjamin Segall <bsegall@...gle.com>,
	Michael Turquette <mturquette@...aro.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>
Subject: Re: [RFC PATCH 09/10] sched: Include blocked utilization in usage tracking

On 2 December 2014 at 15:06, Morten Rasmussen <morten.rasmussen@....com> wrote:
> Add the blocked utilization contribution to group sched_entity
> utilization (se->avg.utilization_avg_contrib) and to get_cpu_usage().
> With this change, cpu usage now includes recent usage by currently
> non-runnable tasks, hence it provides a more stable view of the cpu
> usage. It does, however, also mean that the meaning of usage changes:
> a cpu may be momentarily idle while usage > 0. It can no longer be
> assumed that cpu usage > 0 implies runnable tasks on the rq.
> cfs_rq->utilization_load_avg or nr_running should be used instead to get
> the current rq status.
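
A minimal sketch of the distinction described above (the helper is purely
illustrative and not part of the patch; cpu_rq() and the cfs_rq fields are
the ones the quoted text refers to):

/*
 * With blocked utilization folded into cpu usage, usage > 0 no longer
 * implies that the rq has runnable tasks, so a caller that needs the
 * current rq state checks nr_running (or utilization_load_avg) directly.
 */
static inline bool cpu_cfs_has_runnable(int cpu)
{
	return cpu_rq(cpu)->cfs.nr_running > 0;
}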

If CONFIG_FAIR_GROUP_SCHED is not set, the blocked utilization of idle
CPUs will never be updated, so their utilization stays at whatever value
it had just before they went idle. You can end up with a CPU that became
idle a long time ago but whose utilization remains high.

You have to periodically decay and update the blocked utilization of idle CPUs.
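
A minimal sketch of what that periodic decay could look like, assuming a
hypothetical helper called from the tick or nohz idle-balance path when
CONFIG_FAIR_GROUP_SCHED is not set; decay_load() and the cfs_rq fields come
from fair.c and this series, while the helper itself and how the elapsed
periods are obtained are assumptions:

/*
 * Illustrative only: decay the root cfs_rq's blocked utilization of an
 * idle cpu so it does not stay pinned at its pre-idle value.  The caller
 * is assumed to pass the number of elapsed ~1ms decay periods.
 */
static void decay_idle_blocked_utilization(int cpu, u64 periods)
{
	struct cfs_rq *cfs_rq = &cpu_rq(cpu)->cfs;

	if (!periods)
		return;

	/* same geometric decay (y^periods) as the per-entity averages */
	cfs_rq->utilization_blocked_avg =
			decay_load(cfs_rq->utilization_blocked_avg, periods);
}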

>
> cc: Ingo Molnar <mingo@...hat.com>
> cc: Peter Zijlstra <peterz@...radead.org>
>
> Signed-off-by: Morten Rasmussen <morten.rasmussen@....com>
> ---
>  kernel/sched/fair.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index adf64df..bd950b2 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2764,7 +2764,8 @@ static long __update_entity_utilization_avg_contrib(struct sched_entity *se)
>                 __update_task_entity_utilization(se);
>         else
>                 se->avg.utilization_avg_contrib =
> -                                       group_cfs_rq(se)->utilization_load_avg;
> +                               group_cfs_rq(se)->utilization_load_avg +
> +                               group_cfs_rq(se)->utilization_blocked_avg;
>
>         return se->avg.utilization_avg_contrib - old_contrib;
>  }
> @@ -4827,11 +4828,12 @@ static int select_idle_sibling(struct task_struct *p, int target)
>  static int get_cpu_usage(int cpu)
>  {
>         unsigned long usage = cpu_rq(cpu)->cfs.utilization_load_avg;
> +       unsigned long blocked = cpu_rq(cpu)->cfs.utilization_blocked_avg;
>
> -       if (usage >= SCHED_LOAD_SCALE)
> +       if (usage + blocked >= SCHED_LOAD_SCALE)
>                 return capacity_orig_of(cpu);
>
> -       return usage;
> +       return usage + blocked;
>  }
>
>  /*
> --
> 1.9.1
>
>
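
For a concrete sense of the clamping in the new get_cpu_usage() above, here
is a standalone user-space illustration with made-up numbers; SCHED_LOAD_SCALE
== 1024 and the fixed capacity value are assumptions for the example, not
taken from the patch:

#include <stdio.h>

#define SCHED_LOAD_SCALE	1024UL
#define CAPACITY_ORIG		1024UL	/* stand-in for capacity_orig_of(cpu) */

/* mirrors the clamping logic of the patched get_cpu_usage() */
static unsigned long example_cpu_usage(unsigned long usage, unsigned long blocked)
{
	if (usage + blocked >= SCHED_LOAD_SCALE)
		return CAPACITY_ORIG;
	return usage + blocked;
}

int main(void)
{
	printf("%lu\n", example_cpu_usage(400, 300));	/* 700: below the scale */
	printf("%lu\n", example_cpu_usage(900, 400));	/* 1300 >= 1024: clamped */
	return 0;
}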
