Message-ID: <164250468946.16921.3796443718413078010.tip-bot2@tip-bot2>
Date: Tue, 18 Jan 2022 11:18:09 -0000
From: "tip-bot2 for Peter Zijlstra" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: xuhaifeng <xuhaifeng@...o.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: sched/urgent] sched: Avoid double preemption in __cond_resched_*lock*()
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: 7e406d1ff39b8ee574036418a5043c86723170cf
Gitweb: https://git.kernel.org/tip/7e406d1ff39b8ee574036418a5043c86723170cf
Author: Peter Zijlstra <peterz@...radead.org>
AuthorDate: Sat, 25 Dec 2021 01:04:57 +01:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Tue, 18 Jan 2022 12:09:59 +01:00
sched: Avoid double preemption in __cond_resched_*lock*()
For PREEMPT/PREEMPT_DYNAMIC kernels the *_unlock() will already trigger
a preemption, so there is no point in then calling
preempt_schedule_common() *again*.

Use _cond_resched() instead, since it is a NOP for the preemptible
configs while it provides a preemption point for the others.
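
To illustrate why _cond_resched() covers both cases, a rough sketch of
its config-dependent shape follows; this paraphrases the definitions in
include/linux/sched.h and kernel/sched/core.c (and ignores the
static-call indirection used by CONFIG_PREEMPT_DYNAMIC) rather than
quoting them exactly:

/*
 * Simplified sketch, not the exact kernel source: with CONFIG_PREEMPTION
 * the unlock above already preempted, so _cond_resched() degenerates to
 * a no-op returning 0 and __cond_resched_lock() falls back to cpu_relax().
 */
#ifdef CONFIG_PREEMPTION
static inline int _cond_resched(void)
{
	return 0;
}
#else
/*
 * Without CONFIG_PREEMPTION the unlock does not preempt, so
 * _cond_resched() supplies the preemption point itself when a
 * reschedule is pending.
 */
static inline int _cond_resched(void)
{
	if (should_resched(0)) {
		preempt_schedule_common();
		return 1;
	}
	return 0;
}
#endif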
Reported-by: xuhaifeng <xuhaifeng@...o.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://lkml.kernel.org/r/YcGnvDEYBwOiV0cR@hirez.programming.kicks-ass.net
---
kernel/sched/core.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0d2ab2a..56b428c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8218,9 +8218,7 @@ int __cond_resched_lock(spinlock_t *lock)
 
 	if (spin_needbreak(lock) || resched) {
 		spin_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!_cond_resched())
 			cpu_relax();
 		ret = 1;
 		spin_lock(lock);
@@ -8238,9 +8236,7 @@ int __cond_resched_rwlock_read(rwlock_t *lock)
 
 	if (rwlock_needbreak(lock) || resched) {
 		read_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!_cond_resched())
 			cpu_relax();
 		ret = 1;
 		read_lock(lock);
@@ -8258,9 +8254,7 @@ int __cond_resched_rwlock_write(rwlock_t *lock)
 
 	if (rwlock_needbreak(lock) || resched) {
 		write_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!_cond_resched())
 			cpu_relax();
 		ret = 1;
 		write_lock(lock);
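
For reference, __cond_resched_lock() after this change would read
roughly as below; the changed lines come from the first hunk above,
while the surrounding lines are assumed context from the upstream
kernel/sched/core.c of the time and may differ in detail:

int __cond_resched_lock(spinlock_t *lock)
{
	/* Assumed context; only the unlock/_cond_resched() part is from the hunk. */
	int resched = should_resched(PREEMPT_LOCK_OFFSET);
	int ret = 0;

	lockdep_assert_held(lock);

	if (spin_needbreak(lock) || resched) {
		spin_unlock(lock);
		/*
		 * On preemptible kernels the unlock preempted already and
		 * _cond_resched() returns 0; otherwise it reschedules if a
		 * resched is pending.
		 */
		if (!_cond_resched())
			cpu_relax();
		ret = 1;
		spin_lock(lock);
	}
	return ret;
}

The rwlock variants follow the same pattern with read_unlock()/read_lock()
and write_unlock()/write_lock().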