Message-ID: <20221103120720.39873-2-zhangqiao22@huawei.com>
Date:   Thu, 3 Nov 2022 20:07:19 +0800
From:   Zhang Qiao <zhangqiao22@...wei.com>
To:     <mingo@...hat.com>, <peterz@...radead.org>,
        <juri.lelli@...hat.com>, <vincent.guittot@...aro.org>,
        <linux-kernel@...r.kernel.org>
CC:     <dietmar.eggemann@....com>, <rostedt@...dmis.org>,
        <bsegall@...gle.com>, <mgorman@...e.de>, <bristot@...hat.com>,
        <vschneid@...hat.com>, <brauner@...nel.org>,
        <yusongping@...wei.com>, Zhang Qiao <zhangqiao22@...wei.com>
Subject: [PATCH v2 1/2] sched: Init new task's vruntime after select cpu

When creating a new task, we initialize the new task's vruntime in
sched_cgroup_fork(). However, this runs too early and the result may
be wrong, because it uses the current CPU to initialize the vruntime,
while the new task will actually run on the CPU assigned at
wake_up_new_task().

So call task_fork() after the fork CPU has been selected, and
initialize the new task using the CPU it will actually run on.
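
For reference, a simplified sketch of the fork-time call order before
and after this change (call sites abbreviated; see the diff below for
the exact code):

/*
 * Before: task_fork() runs from sched_cgroup_fork(), on whichever
 * CPU the parent happens to be on.
 *
 *   copy_process()
 *     sched_cgroup_fork(p)
 *       __set_task_cpu(p, smp_processor_id());  // parent's CPU
 *       p->sched_class->task_fork(p);           // vruntime from wrong cfs_rq
 *   ...
 *   wake_up_new_task(p)
 *     __set_task_cpu(p, select_task_rq(...));   // CPU p will really run on
 *
 * After: task_fork() runs from wake_up_new_task(), once the target
 * CPU is known and the rq lock is held.
 *
 *   wake_up_new_task(p)
 *     __set_task_cpu(p, select_task_rq(...));
 *     rq = __task_rq_lock(p, &rf);
 *     update_rq_clock(rq);
 *     sched_task_fork(p);                       // vruntime from target cfs_rq
 */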

Signed-off-by: Zhang Qiao <zhangqiao22@...wei.com>
---
v1->v2:
	make sched_task_fork static.

 kernel/sched/core.c | 7 ++++++-
 kernel/sched/fair.c | 7 +------
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e4ce124ec701..21481bd22bdf 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4627,9 +4627,13 @@ void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
 	 * so use __set_task_cpu().
 	 */
 	__set_task_cpu(p, smp_processor_id());
+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+}
+
+static void sched_task_fork(struct task_struct *p)
+{
 	if (p->sched_class->task_fork)
 		p->sched_class->task_fork(p);
-	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
 }
 
 void sched_post_fork(struct task_struct *p)
@@ -4682,6 +4686,7 @@ void wake_up_new_task(struct task_struct *p)
 #endif
 	rq = __task_rq_lock(p, &rf);
 	update_rq_clock(rq);
+	sched_task_fork(p);
 	post_init_entity_util_avg(p);
 
 	activate_task(rq, p, ENQUEUE_NOCLOCK);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e4a0b8bd941c..34845d425180 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11603,12 +11603,8 @@ static void task_fork_fair(struct task_struct *p)
 	struct cfs_rq *cfs_rq;
 	struct sched_entity *se = &p->se, *curr;
 	struct rq *rq = this_rq();
-	struct rq_flags rf;
 
-	rq_lock(rq, &rf);
-	update_rq_clock(rq);
-
-	cfs_rq = task_cfs_rq(current);
+	cfs_rq = task_cfs_rq(p);
 	curr = cfs_rq->curr;
 	if (curr) {
 		update_curr(cfs_rq);
@@ -11626,7 +11622,6 @@ static void task_fork_fair(struct task_struct *p)
 	}
 
 	se->vruntime -= cfs_rq->min_vruntime;
-	rq_unlock(rq, &rf);
 }
 
 /*
-- 
2.17.1
