Date:   Tue, 21 Dec 2021 11:09:00 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     xuhaifeng <xuhaifeng@...o.com>
Cc:     mingo@...hat.com, juri.lelli@...hat.com, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        bristot@...hat.com, linux-kernel@...r.kernel.org,
        Frederic Weisbecker <fweisbec@...il.com>
Subject: Re: [PATCH] sched: optimize __cond_resched_lock()

On Tue, Dec 21, 2021 at 09:52:28AM +0100, Peter Zijlstra wrote:
> On Tue, Dec 21, 2021 at 03:23:16PM +0800, xuhaifeng wrote:
> > If the kernel is preemptible (CONFIG_PREEMPTION=y), schedule() may be
> > called twice: once via spin_unlock, once via preempt_schedule_common.
> > 
> > We can add one conditional, checking the TIF_NEED_RESCHED flag again,
> > to avoid this.
> 
> You can also make it more similar to __cond_resched() instead of making
> it more different.
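
(To make the double call concrete: below is the pre-patch lock-break
path of __cond_resched_lock(), re-assembled from the removed lines of
the diff further down; the numbered comments are added here for the
discussion. On CONFIG_PREEMPTION=y both marked points can end up in
schedule(), because resched was sampled before the unlock and stays
true even after point (1) has already rescheduled.)

	if (spin_needbreak(lock) || resched) {
		spin_unlock(lock);	/* (1) preempt_enable() inside the
					 * unlock may call preempt_schedule()
					 */
		if (resched)
			preempt_schedule_common();	/* (2) schedules
							 * again on a stale
							 * resched value */
		else
			cpu_relax();
		ret = 1;
		spin_lock(lock);
	}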

Bah, sorry, had to wake up first :/

cond_resched_lock still needs to exist for PREEMPT because locks won't
magically release themselves.
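
(Purely illustrative, not part of the patch: the usage pattern
cond_resched_lock() exists to serve; have_work() and do_unit_of_work()
are made-up placeholders.)

	spin_lock(&lock);
	while (have_work()) {
		do_unit_of_work();
		/*
		 * Preemption cannot kick in while the lock is held
		 * (preempt_count is raised), so explicitly drop the
		 * lock, give the scheduler a chance, and re-take it.
		 */
		cond_resched_lock(&lock);
	}
	spin_unlock(&lock);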

Still don't much like the patch though, how's this work for you?

That's arguably the right thing to do for PREEMPT_DYNAMIC too.

---
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 83872f95a1ea..79d3d5e15c4c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8192,6 +8192,11 @@ int __sched __cond_resched(void)
 	return 0;
 }
 EXPORT_SYMBOL(__cond_resched);
+#else
+static inline int __cond_resched(void)
+{
+	return 0;
+}
 #endif
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
@@ -8219,9 +8224,7 @@ int __cond_resched_lock(spinlock_t *lock)
 
 	if (spin_needbreak(lock) || resched) {
 		spin_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!__cond_resched())
 			cpu_relax();
 		ret = 1;
 		spin_lock(lock);
@@ -8239,9 +8242,7 @@ int __cond_resched_rwlock_read(rwlock_t *lock)
 
 	if (rwlock_needbreak(lock) || resched) {
 		read_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!__cond_resched())
 			cpu_relax();
 		ret = 1;
 		read_lock(lock);
@@ -8259,9 +8260,7 @@ int __cond_resched_rwlock_write(rwlock_t *lock)
 
 	if (rwlock_needbreak(lock) || resched) {
 		write_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!__cond_resched())
 			cpu_relax();
 		ret = 1;
 		write_lock(lock);
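
For readability, this is __cond_resched_lock() with the above applied,
re-expanded from the hunk plus its surrounding context (the rwlock
variants are identical apart from the lock type):

int __cond_resched_lock(spinlock_t *lock)
{
	int resched = should_resched(PREEMPT_LOCK_OFFSET);
	int ret = 0;

	lockdep_assert_held(lock);

	if (spin_needbreak(lock) || resched) {
		spin_unlock(lock);
		/*
		 * !CONFIG_PREEMPTION: __cond_resched() reschedules and
		 * returns 1 when TIF_NEED_RESCHED is set, just like the
		 * old preempt_schedule_common() branch did.
		 * CONFIG_PREEMPTION: the new stub returns 0, and the
		 * preempt_enable() inside spin_unlock() has already
		 * rescheduled if needed, so we only cpu_relax() and
		 * avoid the second schedule().
		 */
		if (!__cond_resched())
			cpu_relax();
		ret = 1;
		spin_lock(lock);
	}
	return ret;
}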
