Message-ID: <xm26oatnk55g.fsf@sword-of-the-dawn.mtv.corp.google.com>
Date:	Tue, 07 Oct 2014 13:15:39 -0700
From:	bsegall@...gle.com
To:	Vincent Guittot <vincent.guittot@...aro.org>
Cc:	peterz@...radead.org, mingo@...nel.org,
	linux-kernel@...r.kernel.org, preeti@...ux.vnet.ibm.com,
	Morten.Rasmussen@....com, kamalesh@...ux.vnet.ibm.com,
	linux@....linux.org.uk, linux-arm-kernel@...ts.infradead.org,
	riel@...hat.com, efault@....de, nicolas.pitre@...aro.org,
	linaro-kernel@...ts.linaro.org, daniel.lezcano@...aro.org,
	dietmar.eggemann@....com, pjt@...gle.com
Subject: Re: [PATCH 4/7] sched: Track group sched_entity usage contributions

Vincent Guittot <vincent.guittot@...aro.org> writes:

> From: Morten Rasmussen <morten.rasmussen@....com>
>
> Add usage contribution tracking for group entities. Unlike
> se->avg.load_avg_contrib, se->avg.utilization_avg_contrib for a group
> entity is the sum of se->avg.utilization_avg_contrib over all entities
> on the group's runqueue. It is _not_ influenced in any way by the task
> group's h_load. Hence it represents the group's actual cpu usage rather
> than its intended load contribution, which may differ significantly
> from the utilization on lightly utilized systems.


Just noting that this version also has usage disappear immediately when
a task blocks, although it does what you probably want on migration.

Also, group-ses never use their running_avg_sum, so it's kind of a
waste, but I'm not sure it's worth doing anything about.

>
> cc: Paul Turner <pjt@...gle.com>
> cc: Ben Segall <bsegall@...gle.com>
>
> Signed-off-by: Morten Rasmussen <morten.rasmussen@....com>
> ---
>  kernel/sched/debug.c | 3 +++
>  kernel/sched/fair.c  | 5 +++++
>  2 files changed, 8 insertions(+)
>
> diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
> index e0fbc0f..efb47ed 100644
> --- a/kernel/sched/debug.c
> +++ b/kernel/sched/debug.c
> @@ -94,8 +94,10 @@ static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group
>  	P(se->load.weight);
>  #ifdef CONFIG_SMP
>  	P(se->avg.runnable_avg_sum);
> +	P(se->avg.running_avg_sum);
>  	P(se->avg.avg_period);
>  	P(se->avg.load_avg_contrib);
> +	P(se->avg.utilization_avg_contrib);
>  	P(se->avg.decay_count);
>  #endif
>  #undef PN
> @@ -633,6 +635,7 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
>  	P(se.avg.running_avg_sum);
>  	P(se.avg.avg_period);
>  	P(se.avg.load_avg_contrib);
> +	P(se.avg.utilization_avg_contrib);
>  	P(se.avg.decay_count);
>  #endif
>  	P(policy);
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index d6de526..d3e9067 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2381,6 +2381,8 @@ static inline u64 __synchronize_entity_decay(struct sched_entity *se)
>  		return 0;
>  
>  	se->avg.load_avg_contrib = decay_load(se->avg.load_avg_contrib, decays);
> +	se->avg.utilization_avg_contrib =
> +			decay_load(se->avg.utilization_avg_contrib, decays);
>  	se->avg.decay_count = 0;
>  
>  	return decays;
> @@ -2525,6 +2527,9 @@ static long __update_entity_utilization_avg_contrib(struct sched_entity *se)
>  
>  	if (entity_is_task(se))
>  		__update_task_entity_utilization(se);
> +	else
> +		se->avg.utilization_avg_contrib =
> +					group_cfs_rq(se)->utilization_load_avg;
>  
>  	return se->avg.utilization_avg_contrib - old_contrib;
>  }
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/