Message-Id: <20251127050353.1089724-3-shijie@os.amperecomputing.com>
Date: Thu, 27 Nov 2025 13:03:52 +0800
From: Huang Shijie <shijie@...amperecomputing.com>
To: mingo@...hat.com,
peterz@...radead.org,
juri.lelli@...hat.com,
vincent.guittot@...aro.org
Cc: patches@...erecomputing.com,
cl@...ux.com,
Shubhang@...amperecomputing.com,
dietmar.eggemann@....com,
rostedt@...dmis.org,
bsegall@...gle.com,
mgorman@...e.de,
linux-kernel@...r.kernel.org,
vschneid@...hat.com,
vineethr@...ux.ibm.com,
Huang Shijie <shijie@...amperecomputing.com>
Subject: [PATCH v2 2/3] sched: update rq->avg_idle when a task is moved to an idle CPU
In the newidle balance, rq->idle_stamp may be set to a non-zero value
if the balance cannot pull any task.
On wakeup, the code checks rq->idle_stamp, updates rq->avg_idle, and
then ends the CPU's idle status by setting rq->idle_stamp to zero.
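For reference, this wakeup-path logic amounts to an exponentially
weighted moving average with a clamp; a condensed restatement of the
hunk removed below (not new code):

	delta = rq_clock(rq) - rq->idle_stamp;      /* time spent idle */
	avg_idle += (delta - avg_idle) / 8;         /* update_avg(): keep 7/8, add 1/8 */
	avg_idle = min(avg_idle, 2 * rq->max_idle_balance_cost);

so rq->avg_idle tracks recent idle durations, bounded by twice the
max idle-balance cost.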
Apart from the wakeup path, the current code does not end the CPU's
idle status when a task is moved to an idle CPU, such as on
fork/clone, execve, or in other cases.
This patch introduces a helper, update_rq_avg_idle(), and calls it
from activate_task(), so rq->avg_idle is updated whenever a task is
moved to an idle CPU (see the sketch after this list), i.e. at:
-- wakeup
-- fork/clone
-- execve
-- idle balance
-- other cases
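For context, a simplified sketch of how newidle balance consults
rq->avg_idle (assumed shape of the check in kernel/sched/fair.c; not
part of this patch):

	for_each_domain(this_cpu, sd) {
		/*
		 * Stop once the expected idle time is shorter than
		 * the cost of balancing at this domain.
		 */
		if (this_rq->avg_idle < curr_cost + sd->max_newidle_lb_cost)
			break;
		/* ... otherwise try to pull tasks at this domain ... */
	}

A stale rq->avg_idle makes this cost check unreliable, which updating
it from activate_task() avoids.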
Signed-off-by: Huang Shijie <shijie@...amperecomputing.com>
---
kernel/sched/core.c | 28 ++++++++++++++++------------
kernel/sched/sched.h | 2 ++
2 files changed, 18 insertions(+), 12 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0c4ff93eeb78..a946f3604548 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2135,6 +2135,7 @@ void activate_task(struct rq *rq, struct task_struct *p, int flags)
sched_mm_cid_migrate_to(rq, p);
enqueue_task(rq, p, flags);
+ update_rq_avg_idle(rq);
WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
ASSERT_EXCLUSIVE_WRITER(p->on_rq);
@@ -2412,6 +2413,21 @@ static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
return cpu_online(cpu);
}
+void update_rq_avg_idle(struct rq *rq)
+{
+ if (rq->idle_stamp) {
+ u64 delta = rq_clock(rq) - rq->idle_stamp;
+ u64 max = 2*rq->max_idle_balance_cost;
+
+ update_avg(&rq->avg_idle, delta);
+
+ if (rq->avg_idle > max)
+ rq->avg_idle = max;
+
+ rq->idle_stamp = 0;
+ }
+}
+
/*
* This is how migration works:
*
@@ -3645,18 +3661,6 @@ ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
p->sched_class->task_woken(rq, p);
rq_repin_lock(rq, rf);
}
-
- if (rq->idle_stamp) {
- u64 delta = rq_clock(rq) - rq->idle_stamp;
- u64 max = 2*rq->max_idle_balance_cost;
-
- update_avg(&rq->avg_idle, delta);
-
- if (rq->avg_idle > max)
- rq->avg_idle = max;
-
- rq->idle_stamp = 0;
- }
}
/*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b419a4d98461..0e8aef1cef96 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -242,6 +242,8 @@ static inline void update_avg(u64 *avg, u64 sample)
*avg += diff / 8;
}
+extern void update_rq_avg_idle(struct rq *);
+
/*
* Shifting a value by an exponent greater *or equal* to the size of said value
* is UB; cap at size-1.
--
2.40.1