Message-ID: <20210923165357.991262778@linutronix.de>
Date: Thu, 23 Sep 2021 18:54:37 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Sebastian Siewior <bigeasy@...utronix.de>
Subject: [patch 2/8] sched: Make cond_resched_*lock() variants consistent vs.
might_sleep()

Commit 3427445afd26 ("sched: Exclude cond_resched() from nested sleep
test") removed the task state check of __might_sleep() for
cond_resched_lock() because cond_resched_lock() is not a voluntary
scheduling point which blocks. It's a preemption point which requires the
lock holder to release the spin lock.

The same rationale applies to cond_resched_rwlock_read/write(), but those
were not touched.

Make it consistent and use the non-state checking __might_resched() there
as well.
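
For illustration only, not part of the patch: a minimal sketch of the
kind of code path where the distinction matters. Every name in it
(my_rwlock, entry_matches(), MY_TABLE_SIZE, entry_ready()) is made up
for the example; assume the caller invokes entry_ready() between
prepare_to_wait(..., TASK_UNINTERRUPTIBLE) and schedule().

/*
 * Sketch: a wait condition check which scans a fixed size table under
 * a read lock while the calling task is already in TASK_UNINTERRUPTIBLE.
 */
static bool entry_ready(void)
{
	bool ready = false;
	int i;

	read_lock(&my_rwlock);
	for (i = 0; i < MY_TABLE_SIZE && !ready; i++) {
		ready = entry_matches(i);
		/*
		 * Preemption point: this may drop and reacquire
		 * my_rwlock, but the task never blocks here, so the
		 * caller's sleeping task state is legitimate.  The
		 * task state check in __might_sleep() would flag it
		 * as a nested sleep; __might_resched() does the same
		 * debug checks minus that state check, matching what
		 * cond_resched_lock() already does.
		 */
		cond_resched_rwlock_read(&my_rwlock);
	}
	read_unlock(&my_rwlock);
	return ready;
}
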
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
---
 include/linux/sched.h |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2054,14 +2054,14 @@ extern int __cond_resched_rwlock_write(r
 	__cond_resched_lock(lock);				\
 })
 
-#define cond_resched_rwlock_read(lock) ({			\
-	__might_sleep(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);	\
-	__cond_resched_rwlock_read(lock);			\
+#define cond_resched_rwlock_read(lock) ({				\
+	__might_resched(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);	\
+	__cond_resched_rwlock_read(lock);				\
 })
 
-#define cond_resched_rwlock_write(lock) ({			\
-	__might_sleep(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);	\
-	__cond_resched_rwlock_write(lock);			\
+#define cond_resched_rwlock_write(lock) ({				\
+	__might_resched(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);	\
+	__cond_resched_rwlock_write(lock);				\
 })
 
 static inline void cond_resched_rcu(void)