Message-ID: <jhjy2rzntbo.mognet@arm.com>
Date:   Tue, 17 Mar 2020 10:56:11 +0000
From:   Valentin Schneider <valentin.schneider@....com>
To:     Daniel Lezcano <daniel.lezcano@...aro.org>
Cc:     peterz@...radead.org, mingo@...hat.com, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com,
        linux-kernel@...r.kernel.org, qais.yousef@....com
Subject: Re: [PATCH V2] sched: fair: Use the earliest break even


Hi Daniel,

One more comment on the break-even computation itself, ignoring the rest:

On Wed, Mar 11 2020, Daniel Lezcano wrote:
> diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> index b743bf38f08f..3342e7bae072 100644
> --- a/kernel/sched/idle.c
> +++ b/kernel/sched/idle.c
> @@ -19,7 +19,13 @@ extern char __cpuidle_text_start[], __cpuidle_text_end[];
>   */
>  void sched_idle_set_state(struct cpuidle_state *idle_state)
>  {
> -	idle_set_state(this_rq(), idle_state);
> +	struct rq *rq = this_rq();
> +
> +	idle_set_state(rq, idle_state);
> +
> +	if (idle_state)
> +		idle_set_break_even(rq, ktime_get_ns() +
> +				    idle_state->exit_latency_ns);

I'm not sure I follow why we go for entry time + exit latency. If this
is based on the minimum residency, shouldn't this be something depending
on the entry latency? i.e. something like

  break_even = now + entry_latency + idling_time
                     \_________________________/
                            min-residency

or am I missing something?

>  }
>
>  static int __read_mostly cpu_idle_force_poll;
