Message-ID: <175878730155.709179.4136465336489093001.tip-bot2@tip-bot2>
Date: Thu, 25 Sep 2025 08:01:41 -0000
From: "tip-bot2 for Menglong Dong" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Menglong Dong <dongml2@...natelecom.cn>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: sched/core] sched: Fix some typos in include/linux/preempt.h
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 45b7f780739a3145aeef24d2dfa02517a6c82ed6
Gitweb: https://git.kernel.org/tip/45b7f780739a3145aeef24d2dfa02517a6c82ed6
Author: Menglong Dong <menglong8.dong@...il.com>
AuthorDate: Wed, 17 Sep 2025 14:09:16 +08:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Thu, 25 Sep 2025 09:57:16 +02:00
sched: Fix some typos in include/linux/preempt.h
There are some typos in the migrate_disable() comments in
include/linux/preempt.h:
elegible -> eligible
it's -> its
migirate_disable -> migrate_disable
abritrary -> arbitrary
Just fix them.
Signed-off-by: Menglong Dong <dongml2@...natelecom.cn>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
include/linux/preempt.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 92237c3..1022021 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -372,7 +372,7 @@ static inline void preempt_notifier_init(struct preempt_notifier *notifier,
/*
* Migrate-Disable and why it is undesired.
*
- * When a preempted task becomes elegible to run under the ideal model (IOW it
+ * When a preempted task becomes eligible to run under the ideal model (IOW it
* becomes one of the M highest priority tasks), it might still have to wait
* for the preemptee's migrate_disable() section to complete. Thereby suffering
* a reduction in bandwidth in the exact duration of the migrate_disable()
@@ -387,7 +387,7 @@ static inline void preempt_notifier_init(struct preempt_notifier *notifier,
* - a lower priority tasks; which under preempt_disable() could've instantly
* migrated away when another CPU becomes available, is now constrained
* by the ability to push the higher priority task away, which might itself be
- * in a migrate_disable() section, reducing it's available bandwidth.
+ * in a migrate_disable() section, reducing its available bandwidth.
*
* IOW it trades latency / moves the interference term, but it stays in the
* system, and as long as it remains unbounded, the system is not fully
@@ -399,7 +399,7 @@ static inline void preempt_notifier_init(struct preempt_notifier *notifier,
* PREEMPT_RT breaks a number of assumptions traditionally held. By forcing a
* number of primitives into becoming preemptible, they would also allow
* migration. This turns out to break a bunch of per-cpu usage. To this end,
- * all these primitives employ migirate_disable() to restore this implicit
+ * all these primitives employ migrate_disable() to restore this implicit
* assumption.
*
* This is a 'temporary' work-around at best. The correct solution is getting
@@ -407,7 +407,7 @@ static inline void preempt_notifier_init(struct preempt_notifier *notifier,
* per-cpu locking or short preempt-disable regions.
*
* The end goal must be to get rid of migrate_disable(), alternatively we need
- * a schedulability theory that does not depend on abritrary migration.
+ * a schedulability theory that does not depend on arbitrary migration.
*
*
* Notes on the implementation.
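
For context, the comment block touched by this patch explains why PREEMPT_RT
relies on migrate_disable(). Below is a minimal illustrative sketch (not part
of the patch; the per-CPU variable and function names are made up) of the
usage pattern those comments describe: pinning a task to its current CPU while
it touches per-CPU data, without disabling preemption:

	#include <linux/percpu.h>
	#include <linux/preempt.h>

	static DEFINE_PER_CPU(unsigned long, my_counter);

	static void bump_local_counter(void)
	{
		unsigned long *cnt;

		/*
		 * Pin the task to this CPU so the this_cpu_ptr() below
		 * stays valid; unlike preempt_disable(), the task remains
		 * preemptible inside the section.
		 */
		migrate_disable();
		cnt = this_cpu_ptr(&my_counter);
		/*
		 * Serialization against other tasks preempting us on this
		 * same CPU would still need a lock (e.g. a local_lock).
		 */
		(*cnt)++;
		migrate_enable();	/* allow migration again */
	}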