lists.openwall.net - Open Source and information security mailing list archives
Date: Wed, 13 Jul 2022 12:04:29 +0800
From: Chengming Zhou <zhouchengming@...edance.com>
To: mingo@...hat.com, peterz@...radead.org, vincent.guittot@...aro.org,
	dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
	vschneid@...hat.com
Cc: linux-kernel@...r.kernel.org,
	Chengming Zhou <zhouchengming@...edance.com>
Subject: [PATCH v2 09/10] sched/fair: stop load tracking when task switched_from_fair()

For the same reason as the previous commit: if we don't reset the
sched_avg last_update_time to 0, then after a while, in
switched_to_fair():

  switched_to_fair
    attach_task_cfs_rq
      attach_entity_cfs_rq
        update_load_avg
          __update_load_avg_se(now, cfs_rq, se)

the delta (now - sa->last_update_time) will wrongly contribute to or
decay the sched_avg, depending on the task's running/runnable status
at that time.

This patch resets the task's sched_avg last_update_time to 0 to stop
load tracking while it is a !fair task; later, in switched_to_fair()
-> update_load_avg(), we can use its saved sched_avg.

Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
---
 kernel/sched/fair.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 50f65a2ede32..576028f5a09e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11552,6 +11552,11 @@ static void attach_task_cfs_rq(struct task_struct *p)
 static void switched_from_fair(struct rq *rq, struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
+
+#ifdef CONFIG_SMP
+	/* Stop load tracking for !fair task */
+	p->se.avg.last_update_time = 0;
+#endif
 }
 
 static void switched_to_fair(struct rq *rq, struct task_struct *p)
-- 
2.36.1