Date:   Thu, 22 Sep 2016 18:24:19 +0200
From:   dpervushin@...il.com
To:     linux-kernel@...r.kernel.org
Cc:     Nicolas Pitre <nico@...xnic.net>, Colin Cross <ccross@...roid.com>,
        Kevin Hilman <khilman@...libre.com>,
        Thomas Gleixner <tglx@...utronics.de>,
        dmitry pervushin <dpervushin@...il.com>
Subject: [PATCH] kernel/cpu_pm: replace rwlock with raw_spinlock

From: dmitry pervushin <dpervushin@...il.com>

I recently hit a BUG in the cpuidle driver: "scheduling while atomic".

The reason for this BUG is that the RT patches change rwlock
behavior: cpu_pm_enter() is an atomic function, designed to run with
interrupts disabled, so it must not sleep. With the RT patches
applied, however, rwlocks become sleeping locks, and this atomic
function can end up sleeping.
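
For illustration, here is a minimal sketch of the failing call path;
the driver and callback names are hypothetical, but any cpuidle
->enter hook that calls cpu_pm_enter() from atomic context will do:

    #include <linux/cpuidle.h>
    #include <linux/cpu_pm.h>

    /* Sketch only: a generic cpuidle ->enter callback. */
    static int foo_idle_enter(struct cpuidle_device *dev,
                              struct cpuidle_driver *drv, int index)
    {
            /* Interrupts are disabled here: atomic context, no sleeping. */
            cpu_pm_enter();     /* takes read_lock(&cpu_pm_notifier_lock) */

            /* ... enter the low-power state ... */

            cpu_pm_exit();
            return index;
    }

With the RT patches, read_lock() is backed by a sleeping rt_mutex, so
cpu_pm_enter() can schedule inside this callback, and the kernel
reports "BUG: scheduling while atomic".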

The reasoning behind converting rwlocks from spinning locks to
sleeping mutexes is described by Steven Rostedt in [1]:

   rwlocks are problematic, because the writer has to wait for all
   readers and readers can be added while the writer is waiting.
   Now at least it's a FIFO, but this creates deadlocks because
   before the readers wouldn't block so you could have lock inversion
   without problems. lockdep doesn't (yet) detect this scenario.
   rwlocks should be replaced by RCU (where possible).

That explains the "evilness" of rwlocks and suggests replacing them
with RCU where possible, but in the case of cpu_pm it is better to
replace the rwlock with a raw spinlock (at the very least, the patch
is shorter and cleaner): there is no single pointer that we can
read/copy/update; what needs protecting is the whole notifier chain.
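
For comparison, the canonical RCU conversion assumes a single pointer
that readers dereference and writers replace; a hypothetical sketch:

    /* Readers: no locking, never block the writer. */
    rcu_read_lock();
    cfg = rcu_dereference(global_cfg);
    /* ... use cfg ... */
    rcu_read_unlock();

    /* Writer (serialized by some writer-side mutex): publish a
     * new copy, wait for readers, then free the old one. */
    old = global_cfg;
    rcu_assign_pointer(global_cfg, new_cfg);
    synchronize_rcu();
    kfree(old);

cpu_pm_notify() instead walks a raw notifier chain whose entries are
registered and unregistered at run time, so there is no single
pointer to swap, and a raw spinlock around the whole walk is the
simpler conversion.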

[1] https://mindlinux.wordpress.com/2012/11/06/elc-e-2012-the-preempt_rt-patch-steven-rostedt/
[2] http://elinux.org/Realtime_Preemption#Conversion_of_Spinlocks_to_Mutexes

Signed-off-by: dmitry pervushin <dpervushin@...il.com>
---
 kernel/cpu_pm.c | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/kernel/cpu_pm.c b/kernel/cpu_pm.c
index 009cc9a..8a57118 100644
--- a/kernel/cpu_pm.c
+++ b/kernel/cpu_pm.c
@@ -22,7 +22,7 @@
 #include <linux/spinlock.h>
 #include <linux/syscore_ops.h>
 
-static DEFINE_RWLOCK(cpu_pm_notifier_lock);
+static DEFINE_RAW_SPINLOCK(cpu_pm_notifier_lock);
 static RAW_NOTIFIER_HEAD(cpu_pm_notifier_chain);
 
 static int cpu_pm_notify(enum cpu_pm_event event, int nr_to_call, int *nr_calls)
@@ -50,9 +50,9 @@ int cpu_pm_register_notifier(struct notifier_block *nb)
 	unsigned long flags;
 	int ret;
 
-	write_lock_irqsave(&cpu_pm_notifier_lock, flags);
+	raw_spin_lock_irqsave(&cpu_pm_notifier_lock, flags);
 	ret = raw_notifier_chain_register(&cpu_pm_notifier_chain, nb);
-	write_unlock_irqrestore(&cpu_pm_notifier_lock, flags);
+	raw_spin_unlock_irqrestore(&cpu_pm_notifier_lock, flags);
 
 	return ret;
 }
@@ -72,9 +72,9 @@ int cpu_pm_unregister_notifier(struct notifier_block *nb)
 	unsigned long flags;
 	int ret;
 
-	write_lock_irqsave(&cpu_pm_notifier_lock, flags);
+	raw_spin_lock_irqsave(&cpu_pm_notifier_lock, flags);
 	ret = raw_notifier_chain_unregister(&cpu_pm_notifier_chain, nb);
-	write_unlock_irqrestore(&cpu_pm_notifier_lock, flags);
+	raw_spin_unlock_irqrestore(&cpu_pm_notifier_lock, flags);
 
 	return ret;
 }
@@ -99,8 +99,9 @@ int cpu_pm_enter(void)
 {
 	int nr_calls;
 	int ret = 0;
+	unsigned long flags;
 
-	read_lock(&cpu_pm_notifier_lock);
+	raw_spin_lock_irqsave(&cpu_pm_notifier_lock, flags);
 	ret = cpu_pm_notify(CPU_PM_ENTER, -1, &nr_calls);
 	if (ret)
 		/*
@@ -108,7 +109,7 @@ int cpu_pm_enter(void)
 		 * PM entry who are notified earlier to prepare for it.
 		 */
 		cpu_pm_notify(CPU_PM_ENTER_FAILED, nr_calls - 1, NULL);
-	read_unlock(&cpu_pm_notifier_lock);
+	raw_spin_unlock_irqrestore(&cpu_pm_notifier_lock, flags);
 
 	return ret;
 }
@@ -129,10 +130,11 @@ EXPORT_SYMBOL_GPL(cpu_pm_enter);
 int cpu_pm_exit(void)
 {
 	int ret;
+	unsigned long flags;
 
-	read_lock(&cpu_pm_notifier_lock);
+	raw_spin_lock_irqsave(&cpu_pm_notifier_lock, flags);
 	ret = cpu_pm_notify(CPU_PM_EXIT, -1, NULL);
-	read_unlock(&cpu_pm_notifier_lock);
+	raw_spin_unlock_irqrestore(&cpu_pm_notifier_lock, flags);
 
 	return ret;
 }
@@ -158,8 +160,9 @@ int cpu_cluster_pm_enter(void)
 {
 	int nr_calls;
 	int ret = 0;
+	unsigned long flags;
 
-	read_lock(&cpu_pm_notifier_lock);
+	raw_spin_lock_irqsave(&cpu_pm_notifier_lock, flags);
 	ret = cpu_pm_notify(CPU_CLUSTER_PM_ENTER, -1, &nr_calls);
 	if (ret)
 		/*
@@ -167,7 +170,7 @@ int cpu_cluster_pm_enter(void)
 		 * PM entry who are notified earlier to prepare for it.
 		 */
 		cpu_pm_notify(CPU_CLUSTER_PM_ENTER_FAILED, nr_calls - 1, NULL);
-	read_unlock(&cpu_pm_notifier_lock);
+	raw_spin_unlock_irqrestore(&cpu_pm_notifier_lock, flags);
 
 	return ret;
 }
@@ -191,10 +194,11 @@ EXPORT_SYMBOL_GPL(cpu_cluster_pm_enter);
 int cpu_cluster_pm_exit(void)
 {
 	int ret;
+	unsigned long flags;
 
-	read_lock(&cpu_pm_notifier_lock);
+	raw_spin_lock_irqsave(&cpu_pm_notifier_lock, flags);
 	ret = cpu_pm_notify(CPU_CLUSTER_PM_EXIT, -1, NULL);
-	read_unlock(&cpu_pm_notifier_lock);
+	raw_spin_unlock_irqrestore(&cpu_pm_notifier_lock, flags);
 
 	return ret;
 }
-- 
2.7.4
