Date: Sat, 30 Sep 2017 15:20:34 +0800
From: Aubrey Li <aubrey.li@...el.com>
To: tglx@...utronix.de, peterz@...radead.org, rjw@...ysocki.net,
	len.brown@...el.com, ak@...ux.intel.com, tim.c.chen@...ux.intel.com
Cc: x86@...nel.org, linux-kernel@...r.kernel.org,
	Aubrey Li <aubrey.li@...el.com>, Aubrey Li <aubrey.li@...ux.intel.com>
Subject: [RFC PATCH v2 8/8] cpuidle: introduce run queue average idle to
 make idle prediction

Introduce the run queue's average idle time in the scheduler as a factor
in making the idle prediction.

Signed-off-by: Aubrey Li <aubrey.li@...ux.intel.com>
---
 drivers/cpuidle/cpuidle.c | 12 ++++++++++++
 include/linux/cpuidle.h   |  1 +
 kernel/sched/idle.c       |  5 +++++
 3 files changed, 18 insertions(+)

diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index be56cea..9424a2d 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -364,6 +364,18 @@ void cpuidle_predict(void)
 		return;
 	}
 
+	/*
+	 * check scheduler if the coming idle is likely a fast idle
+	 */
+	idle_interval = div_u64(sched_idle_avg(), NSEC_PER_USEC);
+	if (idle_interval < overhead_threshold) {
+		dev->idle_stat.fast_idle = true;
+		return;
+	}
+
+	/*
+	 * check the idle governor if the coming idle is likely a fast idle
+	 */
 	if (cpuidle_curr_governor->predict) {
 		dev->idle_stat.predicted_us = cpuidle_curr_governor->predict();
 		/*
diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
index 45b8264..387d72b 100644
--- a/include/linux/cpuidle.h
+++ b/include/linux/cpuidle.h
@@ -234,6 +234,7 @@ static inline void cpuidle_use_deepest_state(bool enable)
 /* kernel/sched/idle.c */
 extern void sched_idle_set_state(struct cpuidle_state *idle_state);
 extern void default_idle_call(void);
+extern u64 sched_idle_avg(void);
 
 #ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
 void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev, atomic_t *a);
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 8704f3c..d23b472 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -30,6 +30,11 @@ void sched_idle_set_state(struct cpuidle_state *idle_state)
 	idle_set_state(this_rq(), idle_state);
 }
 
+u64 sched_idle_avg(void)
+{
+	return this_rq()->avg_idle;
+}
+
 static int __read_mostly cpu_idle_force_poll;
 
 void cpu_idle_poll_ctrl(bool enable)
-- 
2.7.4