Message-ID: <CAPDyKFraE=JYehuLUER+tL=fyPbUwZDvu2PB4ebnYk5f6atWeA@mail.gmail.com>
Date: Wed, 18 Mar 2020 11:49:30 +0100
From: Ulf Hansson <ulf.hansson@...aro.org>
To: Daniel Lezcano <daniel.lezcano@...aro.org>
Cc: "Rafael J. Wysocki" <rjw@...ysocki.net>,
Linux PM <linux-pm@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Kevin Hilman <khilman@...nel.org>
Subject: Re: [PATCH RFC] cpuidle: consolidate calls to time capture
On Mon, 16 Mar 2020 at 22:10, Daniel Lezcano <daniel.lezcano@...aro.org> wrote:
>
> A few years ago, we changed the cpuidle code to replace ktime_get()
> with local_clock(), to get rid of a potential seqlock in the path and
> the extra latency.
>
> Meanwhile, the code has evolved and we now also capture the time in
> other places, such as the power domain governor and the upcoming
> break-even deadline proposal.
>
> Unfortunately, as the timestamps must be compared across CPUs, we have
> no option other than using ktime_get() again. However, we can factor
> out all the calls to local_clock() and ktime_get() into a single one
> taken when the CPU enters idle, as the value can then be reused in the
> different places.
>
> We can assume the code path between the ktime_get() call in
> cpuidle_enter_state() and the other users inspecting the value is
> short enough for the timestamp to still be accurate.
>
> Signed-off-by: Daniel Lezcano <daniel.lezcano@...aro.org>
> ---
> drivers/base/power/domain_governor.c | 4 +++-
> drivers/cpuidle/cpuidle.c | 6 +++---
> include/linux/cpuidle.h | 1 +
> 3 files changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/base/power/domain_governor.c b/drivers/base/power/domain_governor.c
> index daa8c7689f7e..bee97f7b7b8d 100644
> --- a/drivers/base/power/domain_governor.c
> +++ b/drivers/base/power/domain_governor.c
> @@ -279,8 +279,10 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd)
> }
> }
>
> + dev = per_cpu(cpuidle_devices, smp_processor_id());
> +
> /* The minimum idle duration is from now - until the next wakeup. */
> - idle_duration_ns = ktime_to_ns(ktime_sub(domain_wakeup, ktime_get()));
> + idle_duration_ns = ktime_to_ns(ktime_sub(domain_wakeup, dev->idle_start));
> if (idle_duration_ns <= 0)
> return false;
>
> diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
> index c149d9e20dfd..9db14581759b 100644
> --- a/drivers/cpuidle/cpuidle.c
> +++ b/drivers/cpuidle/cpuidle.c
> @@ -206,7 +206,7 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
>
> struct cpuidle_state *target_state = &drv->states[index];
> bool broadcast = !!(target_state->flags & CPUIDLE_FLAG_TIMER_STOP);
> - ktime_t time_start, time_end;
> + ktime_t time_end;
>
> /*
> * Tell the time framework to switch to a broadcast timer because our
> @@ -228,14 +228,14 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
> sched_idle_set_state(target_state);
>
> trace_cpu_idle_rcuidle(index, dev->cpu);
> - time_start = ns_to_ktime(local_clock());
> + dev->idle_start = ktime_get();

I fully agree with Rafael; this is bad for all cases where
local_clock() is sufficient.

To avoid the ktime_get() in cpu_power_down_ok() for the genpd
governor, I think a better option could be to use "ts->idle_entrytime",
which is set in tick_nohz_start_idle().
>
> stop_critical_timings();
> entered_state = target_state->enter(dev, drv, index);
> start_critical_timings();
>
> sched_clock_idle_wakeup_event();
> - time_end = ns_to_ktime(local_clock());
> + time_end = ktime_get();
> trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, dev->cpu);
>
> /* The cpu is no longer idle or about to enter idle. */
> diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
> index ec2ef63771f0..112494658e01 100644
> --- a/include/linux/cpuidle.h
> +++ b/include/linux/cpuidle.h
> @@ -89,6 +89,7 @@ struct cpuidle_device {
> unsigned int poll_time_limit:1;
> unsigned int cpu;
> ktime_t next_hrtimer;
> + ktime_t idle_start;
>
> int last_state_idx;
> u64 last_residency_ns;
> --
> 2.17.1
>
Kind regards
Uffe