Message-ID: <169501808389.27769.10252810804131112374.tip-bot2@tip-bot2>
Date: Mon, 18 Sep 2023 06:21:23 -0000
From: "tip-bot2 for Elliot Berman" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Elliot Berman <quic_eberman@...cinc.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: sched/core] sched/core: Remove ifdeffery for saved_state

The following commit has been merged into the sched/core branch of tip:

Commit-ID: fbaa6a181a4b1886cbf4214abdf9a2df68471510
Gitweb: https://git.kernel.org/tip/fbaa6a181a4b1886cbf4214abdf9a2df68471510
Author: Elliot Berman <quic_eberman@...cinc.com>
AuthorDate: Fri, 08 Sep 2023 15:49:15 -07:00
Committer: Ingo Molnar <mingo@...nel.org>
CommitterDate: Mon, 18 Sep 2023 08:13:57 +02:00

sched/core: Remove ifdeffery for saved_state

In preparation for the freezer to also use saved_state, remove the
CONFIG_PREEMPT_RT compilation guard around saved_state.
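
(Background: under PREEMPT_RT, saved_state lets a task that blocks on a
sleeping spinlock stash its real sleep state while it waits in
TASK_RTLOCK_WAIT. A simplified sketch of what
current_save_and_set_rtlock_wait_state() and
current_restore_rtlock_saved_state() do; see include/linux/sched.h for
the real, lockdep-annotated definitions:

	/* Going to block on an RT lock: stash the task's sleep state. */
	raw_spin_lock(&current->pi_lock);
	current->saved_state = current->__state;
	WRITE_ONCE(current->__state, TASK_RTLOCK_WAIT);
	raw_spin_unlock(&current->pi_lock);

	/* ... sleep until the lock is acquired ... */

	/* Lock acquired: restore the stashed state. */
	raw_spin_lock(&current->pi_lock);
	WRITE_ONCE(current->__state, current->saved_state);
	current->saved_state = TASK_RUNNING;
	raw_spin_unlock(&current->pi_lock);

With the ifdef gone, the field exists and stays consistent in every
configuration, which is what allows the freezer to reuse it.)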

On the arm64 platform I tested, which did not have CONFIG_PREEMPT_RT
enabled, applying this patch produced no statistically significant
performance deviation.

Test methodology:

  perf bench sched message -g 40 -l 40

Signed-off-by: Elliot Berman <quic_eberman@...cinc.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 include/linux/sched.h | 2 --
 kernel/sched/core.c   | 8 ++------
 2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 77f01ac..dc37ae7 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -750,10 +750,8 @@ struct task_struct {
 #endif
 	unsigned int			__state;
 
-#ifdef CONFIG_PREEMPT_RT
 	/* saved state for "spinlock sleepers" */
 	unsigned int			saved_state;
-#endif
 
 	/*
 	 * This begins the randomizable portion of task_struct. Only
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f39482d..49541e3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2232,23 +2232,20 @@ int __task_state_match(struct task_struct *p, unsigned int state)
 	if (READ_ONCE(p->__state) & state)
 		return 1;
 
-#ifdef CONFIG_PREEMPT_RT
 	if (READ_ONCE(p->saved_state) & state)
 		return -1;
-#endif
+
 	return 0;
 }
 
 static __always_inline
 int task_state_match(struct task_struct *p, unsigned int state)
 {
-#ifdef CONFIG_PREEMPT_RT
 	/*
 	 * Serialize against current_save_and_set_rtlock_wait_state() and
	 * current_restore_rtlock_saved_state().
	 */
 	guard(raw_spinlock_irq)(&p->pi_lock);
-#endif
 	return __task_state_match(p, state);
 }
 
@@ -4038,7 +4035,6 @@ bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
 
 	*success = !!(match = __task_state_match(p, state));
 
-#ifdef CONFIG_PREEMPT_RT
 	/*
 	 * Saved state preserves the task state across blocking on
 	 * an RT lock. If the state matches, set p::saved_state to
@@ -4054,7 +4050,7 @@ bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
 	 */
 	if (match < 0)
 		p->saved_state = TASK_RUNNING;
-#endif
+
 	return match > 0;
 }
 
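
Resulting convention: __task_state_match() now unconditionally returns
1 when p->__state matches, -1 when only p->saved_state matches, and 0
when neither does. A minimal sketch of a caller honoring that
convention, modeled on ttwu_state_match() above (the function name is
hypothetical, and the real ttwu_state_match() also reports *success for
a saved_state match):

	static bool state_matches_for_wakeup(struct task_struct *p,
					     unsigned int state)
	{
		int match;

		/* pi_lock serializes against the RT-lock save/restore helpers. */
		guard(raw_spinlock_irq)(&p->pi_lock);

		match = __task_state_match(p, state);

		/* Only saved_state matched: fix it up, but do not wake. */
		if (match < 0)
			p->saved_state = TASK_RUNNING;

		/* A regular wakeup needs a p->__state match. */
		return match > 0;
	}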