Message-ID: <CAJZ5v0iGv_1d3BT0HowLgecOfhNHNQdOwH6Kef5WE4-zeBbp2Q@mail.gmail.com>
Date: Tue, 22 Jun 2021 14:33:59 +0200
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: Lukasz Luba <lukasz.luba@....com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Linux PM <linux-pm@...r.kernel.org>,
Amit Kucheria <amitk@...nel.org>,
"Zhang, Rui" <rui.zhang@...el.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Chris Redpath <Chris.Redpath@....com>, Beata.Michalska@....com,
Viresh Kumar <viresh.kumar@...aro.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Amit Kachhap <amit.kachhap@...il.com>
Subject: Re: [RFC PATCH 2/4] cpuidle: Add Active Stats calls tracking idle entry/exit

On Tue, Jun 22, 2021 at 9:59 AM Lukasz Luba <lukasz.luba@....com> wrote:
>
> The Active Stats framework tracks and accounts the activity of the CPU
> for each performance level. It accounts the real residency,

No, it doesn't. It just measures the time between the entry and exit
and that's not the real residency (because it doesn't take the exit
latency into account, for example).
> when the CPU was not idle, at a given performance level. This patch
> adds needed calls which provide the CPU idle entry/exit events to the
> Active Stats framework.

And it adds overhead to overhead-sensitive code.

AFAICS, some users of that code will not really get the benefit, so
adding the overhead to it is questionable.

First, why is the existing instrumentation in the idle loop insufficient?

Second, why do you need to add locking to this code?
> Signed-off-by: Lukasz Luba <lukasz.luba@....com>
> ---
> drivers/cpuidle/cpuidle.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
> index ef2ea1b12cd8..24a33c6c4a62 100644
> --- a/drivers/cpuidle/cpuidle.c
> +++ b/drivers/cpuidle/cpuidle.c
> @@ -8,6 +8,7 @@
> * This code is licenced under the GPL.
> */
>
> +#include <linux/active_stats.h>
> #include <linux/clockchips.h>
> #include <linux/kernel.h>
> #include <linux/mutex.h>
> @@ -231,6 +232,8 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
> trace_cpu_idle(index, dev->cpu);
> time_start = ns_to_ktime(local_clock());
>
> + active_stats_cpu_idle_enter(time_start);
> +
> stop_critical_timings();
> if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE))
> rcu_idle_enter();
> @@ -243,6 +246,8 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
> time_end = ns_to_ktime(local_clock());
> trace_cpu_idle(PWR_EVENT_EXIT, dev->cpu);
>
> + active_stats_cpu_idle_exit(time_end);
> +
> /* The cpu is no longer idle or about to enter idle. */
> sched_idle_set_state(NULL);
>
> --
> 2.17.1
>