Message-Id: <20220713040430.25778-9-zhouchengming@bytedance.com>
Date:   Wed, 13 Jul 2022 12:04:28 +0800
From:   Chengming Zhou <zhouchengming@...edance.com>
To:     mingo@...hat.com, peterz@...radead.org, vincent.guittot@...aro.org,
        dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
        vschneid@...hat.com
Cc:     linux-kernel@...r.kernel.org,
        Chengming Zhou <zhouchengming@...edance.com>
Subject: [PATCH v2 08/10] sched/fair: fix load tracking for newly forked !fair task

A newly forked !fair task has its sched_avg last_update_time set to
the pelt_clock of the cfs_rq. After a while, in switched_to_fair():

switched_to_fair
  attach_task_cfs_rq
    attach_entity_cfs_rq
      update_load_avg
        __update_load_avg_se(now, cfs_rq, se)

the delta (now - sa->last_update_time) will contribute to or decay the
sched_avg, depending on the task's running/runnable status during that
time.
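
To make the effect concrete, here is a rough userspace sketch (not
kernel code; it only assumes the usual PELT half-life of 32 periods of
1024us, and the numbers are purely illustrative) of how such a stale
delta decays an average when the task was not runnable during it:

/* standalone illustration, build with: cc pelt_decay.c -lm */
#include <math.h>
#include <stdio.h>

int main(void)
{
        /* per-period decay factor y, chosen so that y^32 = 0.5 */
        double y = pow(0.5, 1.0 / 32.0);
        double util_avg = 512.0;         /* some initial utilization */
        unsigned long delta_us = 200000; /* 200ms between fork and
                                            switched_to_fair()       */
        unsigned long periods = delta_us / 1024;

        /*
         * If the task was neither running nor runnable for the whole
         * delta, the average is only decayed: after ~200ms almost
         * nothing of the initial value is left.
         */
        printf("decayed util_avg = %.2f\n",
               util_avg * pow(y, (double)periods));
        return 0;
}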

This patch doesn't set the sched_avg last_update_time of a newly forked
!fair task, leaving it at 0. So later, in update_load_avg(), we don't
need to contribute/decay the wrong delta (now - sa->last_update_time).
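
For context, this matches how update_load_avg() already treats
last_update_time == 0 (roughly paraphrased from mainline
kernel/sched/fair.c; details vary between versions): an entity that has
never been attached is not aged, it is attached instead:

static inline void update_load_avg(struct cfs_rq *cfs_rq,
                                   struct sched_entity *se, int flags)
{
        u64 now = cfs_rq_clock_pelt(cfs_rq);

        /* only age the entity when it has a valid last_update_time */
        if (se->avg.last_update_time && !(flags & SKIP_AGE_LOAD))
                __update_load_avg_se(now, cfs_rq, se);
        ...
        /* last_update_time == 0 plus DO_ATTACH: attach, don't decay */
        if (!se->avg.last_update_time && (flags & DO_ATTACH)) {
                attach_entity_load_avg(cfs_rq, se);
                ...
        }
        ...
}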

Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
---
 kernel/sched/fair.c | 18 ++----------------
 1 file changed, 2 insertions(+), 16 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 51fc20c161a3..50f65a2ede32 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -849,22 +849,8 @@ void post_init_entity_util_avg(struct task_struct *p)
 
 	sa->runnable_avg = sa->util_avg;
 
-	if (p->sched_class != &fair_sched_class) {
-		/*
-		 * For !fair tasks do:
-		 *
-		update_cfs_rq_load_avg(now, cfs_rq);
-		attach_entity_load_avg(cfs_rq, se);
-		switched_from_fair(rq, p);
-		 *
-		 * such that the next switched_to_fair() has the
-		 * expected state.
-		 */
-		se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
-		return;
-	}
-
-	attach_entity_cfs_rq(se);
+	if (p->sched_class == &fair_sched_class)
+		attach_entity_cfs_rq(se);
 }
 
 #else /* !CONFIG_SMP */
-- 
2.36.1
