Date:   Tue, 26 Apr 2022 15:05:31 +0300
From:   Artem Bityutskiy <dedekind1@...il.com>
To:     Lukasz Luba <lukasz.luba@....com>, linux-kernel@...r.kernel.org
Cc:     dietmar.eggemann@....com, viresh.kumar@...aro.org,
        rafael@...nel.org, daniel.lezcano@...aro.org, amitk@...nel.org,
        rui.zhang@...el.com, amit.kachhap@...il.com,
        linux-pm@...r.kernel.org
Subject: Re: [RFC PATCH v3 2/5] cpuidle: Add Cpufreq Active Stats calls
 tracking idle entry/exit

Hi Lukasz,

On Wed, 2022-04-06 at 23:08 +0100, Lukasz Luba wrote:
> @@ -231,6 +232,8 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
>         trace_cpu_idle(index, dev->cpu);
>         time_start = ns_to_ktime(local_clock());
>  
> +       cpufreq_active_stats_cpu_idle_enter(time_start);
> +
>         stop_critical_timings();
>         if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE))
>                 rcu_idle_enter();
> @@ -243,6 +246,8 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
>         time_end = ns_to_ktime(local_clock());
>         trace_cpu_idle(PWR_EVENT_EXIT, dev->cpu);
>  
> +       cpufreq_active_stats_cpu_idle_exit(time_end);
> +

At this point the interrupts are still disabled, and they only get
re-enabled later. So the more code you add here, and the longer it takes
to execute, the longer you delay interrupt delivery. In other words, by
adding more code here you are effectively increasing IRQ latency from
idle.

How much? I do not know; it depends on how much code you need to execute.
But the amount of code in functions like this tends to grow over time.
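
For reference, here is a simplified sketch of the relevant part of
cpuidle_enter_state() with your hooks in place (paraphrased, details
elided), just to show that both calls sit inside the IRQs-off window:

	time_start = ns_to_ktime(local_clock());
	cpufreq_active_stats_cpu_idle_enter(time_start);  /* IRQs off */

	entered_state = target_state->enter(dev, drv, index);

	time_end = ns_to_ktime(local_clock());
	cpufreq_active_stats_cpu_idle_exit(time_end);     /* IRQs still off */

	...

	if (!cpuidle_state_is_coupled(drv, index))
		local_irq_enable();                       /* IRQs back on */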

So the risk is that we'll keep growing
'cpufreq_active_stats_cpu_idle_exit()' and (maybe unintentionally) keep
increasing idle interrupt latency.

This is not ideal.

We use the 'wult' tool (https://github.com/intel/wult) to measure C-state
latency and interrupt latency on Intel platforms, and for fast C-states
like Intel C1 we can see that even the current code between C-state exit
and interrupts being re-enabled adds measurable overhead.

I am worried about adding more stuff here.

Please consider collecting the stats after interrupts are re-enabled. You
may lose some "precision" because of that, but it is probably still
better than adding to idle interrupt latency.
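
For example, something along these lines (just a sketch, based on your
function names; the coupled-states case and error paths would need more
care):

	/* Snapshot the timestamp while IRQs are still off - it is cheap. */
	time_end = ns_to_ktime(local_clock());

	...

	if (!cpuidle_state_is_coupled(drv, index))
		local_irq_enable();

	/* Do the heavier stats accounting only once IRQs are back on. */
	cpufreq_active_stats_cpu_idle_exit(time_end);

This way the timestamp stays as precise as it is now, and only the
processing is moved out of the critical section.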

>         /* The cpu is no longer idle or about to enter idle. */
>         sched_idle_set_state(NULL);

