Message-ID: <20250220093257.9380-5-kprateek.nayak@amd.com>
Date: Thu, 20 Feb 2025 09:32:39 +0000
From: K Prateek Nayak <kprateek.nayak@....com>
To: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>, Vincent Guittot
<vincent.guittot@...aro.org>, Valentin Schneider <vschneid@...hat.com>, "Ben
Segall" <bsegall@...gle.com>, Thomas Gleixner <tglx@...utronix.de>, "Andy
Lutomirski" <luto@...nel.org>, <linux-kernel@...r.kernel.org>
CC: Dietmar Eggemann <dietmar.eggemann@....com>, Steven Rostedt
<rostedt@...dmis.org>, Mel Gorman <mgorman@...e.de>, "Sebastian Andrzej
Siewior" <bigeasy@...utronix.de>, Clark Williams <clrkwllms@...nel.org>,
<linux-rt-devel@...ts.linux.dev>, Tejun Heo <tj@...nel.org>, "Frederic
Weisbecker" <frederic@...nel.org>, Barret Rhoden <brho@...gle.com>, "Petr
Mladek" <pmladek@...e.com>, Josh Don <joshdon@...gle.com>, Qais Yousef
<qyousef@...alina.io>, "Paul E. McKenney" <paulmck@...nel.org>, David Vernet
<dvernet@...a.com>, K Prateek Nayak <kprateek.nayak@....com>, "Gautham R.
Shenoy" <gautham.shenoy@....com>, Swapnil Sapkal <swapnil.sapkal@....com>
Subject: [RFC PATCH 04/22] [PoC] kernel/sched: Initialize "kernel_cs_count" for new tasks

Since only architectures that select GENERIC_ENTRY can track syscall entry
and exit for userspace tasks, set "kernel_cs_count" appropriately depending
on the architecture.

For architectures that select GENERIC_ENTRY, "kernel_cs_count" is
initialized to 1 since the task starts running by exiting out of a syscall
without a matching syscall entry.

For any future fine-grained tracking, the initial count must be adjusted
appropriately.
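
For illustration, a minimal sketch of how entry/exit hooks could keep the
count balanced; the helper names below are hypothetical and not
necessarily the interfaces this series introduces:

	/* Hypothetical helpers called from the syscall entry/exit paths. */
	static inline void sched_kernel_cs_enter(struct task_struct *p)
	{
		/* Task entered a kernel critical section (e.g. syscall entry). */
		p->se.kernel_cs_count++;
	}

	static inline void sched_kernel_cs_exit(struct task_struct *p)
	{
		/* Task is about to leave the kernel critical section. */
		p->se.kernel_cs_count--;
	}

With the initial count of 1, the first return to userspace (a syscall
exit without a matching entry) brings the counter back to 0 for a
userspace task.
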
XXX: A kernel thread will always appear to be running a kernel critical
section. Is this desirable?

Signed-off-by: K Prateek Nayak <kprateek.nayak@....com>
---
init/init_task.c | 5 ++++-
kernel/sched/core.c | 6 +++++-
2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/init/init_task.c b/init/init_task.c
index e557f622bd90..90abbd248c6a 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -88,7 +88,10 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
.fn = do_no_restart_syscall,
},
.se = {
- .group_node = LIST_HEAD_INIT(init_task.se.group_node),
+ .group_node = LIST_HEAD_INIT(init_task.se.group_node),
+#ifdef CONFIG_CFS_BANDWIDTH
+ .kernel_cs_count = (IS_ENABLED(CONFIG_GENERIC_ENTRY)) ? 1 : 0,
+#endif
},
.rt = {
.run_list = LIST_HEAD_INIT(init_task.rt.run_list),
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 165c90ba64ea..0851cdad9242 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4493,7 +4493,11 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)

#ifdef CONFIG_FAIR_GROUP_SCHED
p->se.cfs_rq = NULL;
-#endif
+#ifdef CONFIG_CFS_BANDWIDTH
+ /* Only archs that select GENERIC_ENTRY can defer throttling */
+ p->se.kernel_cs_count = (IS_ENABLED(CONFIG_GENERIC_ENTRY)) ? 1 : 0;
+#endif /* CONFIG_CFS_BANDWIDTH */
+#endif /* CONFIG_FAIR_GROUP_SCHED */

#ifdef CONFIG_SCHEDSTATS
/* Even if schedstat is disabled, there should not be garbage */
--
2.43.0