Message-ID: <1678239.jONkSWAMNZ@aspire.rjw.lan>
Date: Sat, 14 Oct 2017 03:02:56 +0200
From: "Rafael J. Wysocki" <rjw@...ysocki.net>
To: Aubrey Li <aubrey.li@...el.com>
Cc: tglx@...utronix.de, peterz@...radead.org, len.brown@...el.com,
ak@...ux.intel.com, tim.c.chen@...ux.intel.com, x86@...nel.org,
linux-kernel@...r.kernel.org, Aubrey Li <aubrey.li@...ux.intel.com>
Subject: Re: [RFC PATCH v2 8/8] cpuidle: introduce run queue average idle to make idle prediction
On Saturday, September 30, 2017 9:20:34 AM CEST Aubrey Li wrote:
> Introduce the run queue's average idle time in the scheduler as a factor
> in making the idle prediction.
>
> Signed-off-by: Aubrey Li <aubrey.li@...ux.intel.com>
> ---
>  drivers/cpuidle/cpuidle.c | 12 ++++++++++++
>  include/linux/cpuidle.h   |  1 +
>  kernel/sched/idle.c       |  5 +++++
>  3 files changed, 18 insertions(+)
>
> diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
> index be56cea..9424a2d 100644
> --- a/drivers/cpuidle/cpuidle.c
> +++ b/drivers/cpuidle/cpuidle.c
> @@ -364,6 +364,18 @@ void cpuidle_predict(void)
>  		return;
>  	}
>  
> +	/*
> +	 * Ask the scheduler whether the coming idle is likely a fast idle.
> +	 */
> +	idle_interval = div_u64(sched_idle_avg(), NSEC_PER_USEC);
And one more division ...
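One way to avoid it, as an untested sketch (assuming overhead_threshold is in
microseconds, which the comparison below implies, and assuming idle_interval
has no other users), is to scale the threshold up once instead of dividing
the average on every idle entry:

	/*
	 * Untested sketch: compare in nanoseconds so the div_u64() on
	 * the idle entry path goes away.
	 */
	if (sched_idle_avg() < (u64)overhead_threshold * NSEC_PER_USEC) {
		dev->idle_stat.fast_idle = true;
		return;
	}

That keeps the comparison in the scheduler's native nanosecond resolution and
drops the local variable as well.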
> +	if (idle_interval < overhead_threshold) {
> +		dev->idle_stat.fast_idle = true;
> +		return;
> +	}
> +
> +	/*
> +	 * Ask the idle governor whether the coming idle is likely a fast idle.
> +	 */
>  	if (cpuidle_curr_governor->predict) {
>  		dev->idle_stat.predicted_us = cpuidle_curr_governor->predict();
>  		/*
> diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
> index 45b8264..387d72b 100644
> --- a/include/linux/cpuidle.h
> +++ b/include/linux/cpuidle.h
> @@ -234,6 +234,7 @@ static inline void cpuidle_use_deepest_state(bool enable)
>  /* kernel/sched/idle.c */
>  extern void sched_idle_set_state(struct cpuidle_state *idle_state);
>  extern void default_idle_call(void);
> +extern u64 sched_idle_avg(void);
>  
>  #ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
>  void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev, atomic_t *a);
> diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> index 8704f3c..d23b472 100644
> --- a/kernel/sched/idle.c
> +++ b/kernel/sched/idle.c
> @@ -30,6 +30,11 @@ void sched_idle_set_state(struct cpuidle_state *idle_state)
>  	idle_set_state(this_rq(), idle_state);
>  }
>  
> +u64 sched_idle_avg(void)
> +{
> +	return this_rq()->avg_idle;
> +}
> +
>  static int __read_mostly cpu_idle_force_poll;
>  
>  void cpu_idle_poll_ctrl(bool enable)
>
You could easily combine this patch with the previous one IMO.