Date:   Thu, 5 Jul 2018 17:50:34 +0200
From:   Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To:     Joe Korty <joe.korty@...current-rt.com>
Cc:     Julia Cartwright <julia@...com>, tglx@...utronix.de,
        rostedt@...dmis.org, linux-rt-users@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: [PATCH RT] sched/migrate_disable: fall back to preempt_disable()
 instead of barrier()

migrate_disable() does nothing in the !SMP && !RT case. This is bad for two reasons:
- The futex code relies on the fact that migrate_disable() is part of spin_lock().
  There is a workaround for the !in_atomic() case in migrate_disable() which
  works around the different ordering (non-atomic lock and atomic unlock).

- We have a few instances where preempt_disable() is replaced with
  migrate_disable().

In both cases it is bad if migrate_disable() ends up as barrier() instead of
preempt_disable(). Let migrate_disable() fall back to preempt_disable().

Cc: stable-rt@...r.kernel.org
Reported-by: joe.korty@...current-rt.com
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
---
 include/linux/preempt.h | 4 ++--
 kernel/sched/core.c     | 2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 043e431a7e8e..d46688d521e6 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -241,8 +241,8 @@ static inline int __migrate_disabled(struct task_struct *p)
 }
 
 #else
-#define migrate_disable()		barrier()
-#define migrate_enable()		barrier()
+#define migrate_disable()		preempt_disable()
+#define migrate_enable()		preempt_enable()
 static inline int __migrate_disabled(struct task_struct *p)
 {
 	return 0;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ac3fb8495bd5..626a62218518 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7326,6 +7326,7 @@ void migrate_disable(void)
 #endif
 
 	p->migrate_disable++;
+	preempt_disable();
 }
 EXPORT_SYMBOL(migrate_disable);
 
@@ -7349,6 +7350,7 @@ void migrate_enable(void)
 
 	WARN_ON_ONCE(p->migrate_disable <= 0);
 	p->migrate_disable--;
+	preempt_enable();
 }
 EXPORT_SYMBOL(migrate_enable);
 #endif
-- 
2.18.0
