Message-ID: <20191031094741.GB19197@e108754-lin>
Date: Thu, 31 Oct 2019 09:47:41 +0000
From: Ionela Voinescu <ionela.voinescu@....com>
To: Thara Gopinath <thara.gopinath@...aro.org>
Cc: mingo@...hat.com, peterz@...radead.org, vincent.guittot@...aro.org,
rui.zhang@...el.com, edubezval@...il.com, qperret@...gle.com,
linux-kernel@...r.kernel.org, amit.kachhap@...il.com,
javi.merino@...nel.org, daniel.lezcano@...aro.org
Subject: Re: [Patch v4 1/6] sched/pelt.c: Add support to track thermal
pressure
On Tuesday 22 Oct 2019 at 16:34:20 (-0400), Thara Gopinath wrote:
> Extrapolating on the existing framework to track rt/dl utilization using
> pelt signals, add a similar mechanism to track thermal pressure. The
> difference here from rt/dl utilization tracking is that, instead of
> tracking time spent by a cpu running an rt/dl task through util_avg,
> the average thermal pressure is tracked through load_avg. This is
> because the thermal pressure signal is a weighted "delta" capacity
> and is not binary (util_avg is binary). "delta capacity" here
> means the delta between the actual capacity of a cpu and the decreased
> capacity of a cpu due to a thermal event.
> In order to track average thermal pressure, a new sched_avg variable
> avg_thermal is introduced. Function update_thermal_avg can be called
Nit: s/update_thermal_avg/update_thermal_load_avg
> to do the periodic bookkeeping (accumulate, decay and average)
> of the thermal pressure.
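
For reference, a minimal sketch of how a caller could feed that "delta
capacity" into the new helper (using the corrected update_thermal_load_avg
name). The thermal_capped_capacity() helper below is purely hypothetical,
standing in for whatever capped capacity the thermal framework reports; it
is not part of this series:

	/*
	 * Sketch only: derive the capacity lost to thermal capping and
	 * fold it into the thermal pressure PELT signal.
	 */
	static void sketch_update_thermal_pressure(struct rq *rq, u64 now)
	{
		unsigned long max_cap = arch_scale_cpu_capacity(cpu_of(rq));
		/* hypothetical: capacity left after thermal capping */
		unsigned long capped_cap = thermal_capped_capacity(cpu_of(rq));

		/* the weighted "delta" capacity described above */
		update_thermal_load_avg(now, rq, max_cap - capped_cap);
	}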
>
> Signed-off-by: Thara Gopinath <thara.gopinath@...aro.org>
> ---
> v3->v4:
> - Renamed update_thermal_avg to update_thermal_load_avg.
> - Fixed typos as per review comments on mailing list
> - Reordered the code as per review comments
>
> kernel/sched/pelt.c | 13 +++++++++++++
> kernel/sched/pelt.h | 7 +++++++
> kernel/sched/sched.h | 1 +
> 3 files changed, 21 insertions(+)
>
> diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
> index a96db50..3821069 100644
> --- a/kernel/sched/pelt.c
> +++ b/kernel/sched/pelt.c
> @@ -353,6 +353,19 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
> return 0;
> }
>
> +int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
> +{
> + if (___update_load_sum(now, &rq->avg_thermal,
> + capacity,
> + capacity,
> + capacity)) {
> + ___update_load_avg(&rq->avg_thermal, 1, 1);
> + return 1;
> + }
> +
> + return 0;
> +}
> +
> #ifdef CONFIG_HAVE_SCHED_AVG_IRQ
> /*
> * irq:
> diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
> index afff644..c74226d 100644
> --- a/kernel/sched/pelt.h
> +++ b/kernel/sched/pelt.h
> @@ -6,6 +6,7 @@ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se
> int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
> int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
> int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
> +int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity);
>
> #ifdef CONFIG_HAVE_SCHED_AVG_IRQ
> int update_irq_load_avg(struct rq *rq, u64 running);
> @@ -159,6 +160,12 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
> }
>
> static inline int
> +update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
> +{
> + return 0;
> +}
> +
> +static inline int
> update_irq_load_avg(struct rq *rq, u64 running)
> {
> return 0;
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 0db2c1b..d5d82c8 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -944,6 +944,7 @@ struct rq {
> #ifdef CONFIG_HAVE_SCHED_AVG_IRQ
> struct sched_avg avg_irq;
> #endif
> + struct sched_avg avg_thermal;
> u64 idle_stamp;
> u64 avg_idle;
>
> --
> 2.1.4
>