Message-ID: <20080307140635.13543.98126.stgit@novell1.haskins.net>
Date:	Fri, 07 Mar 2008 09:06:35 -0500
From:	Gregory Haskins <ghaskins@...ell.com>
To:	mingo@...e.hu, rostedt@...dmis.org, tglx@...utronix.de,
	linux-rt-users@...r.kernel.org
Cc:	ghaskins@...ell.com, linux-kernel@...r.kernel.org
Subject: [PATCH] RT: fix spinlock preemption feature when PREEMPT_RT is enabled

kernel/spinlock.c implements two versions of spinlock wrappers around
the arch-specific implementations:

1) A simple passthrough, which keeps preemption disabled while spinning

2) A "preemptible waiter" version which uses trylock.

Currently, PREEMPT && SMP turns on the preemptible feature, and either
lockdep or PREEMPT_RT disables it.  Disabling the feature for lockdep
makes perfect sense, but disabling it for PREEMPT_RT is
counter-intuitive.  My guess is that this was inadvertent, so this patch
re-enables the feature for PREEMPT_RT.

(Since PREEMPT is set for PREEMPT_RT, we can simply drop the extra
condition.)

I have tested the PREEMPT_RT kernel with this patch and all seems well.
So if there *is* a reason these spinlocks must not run preemptibly
under PREEMPT_RT, it is not immediately apparent.

Signed-off-by: Gregory Haskins <ghaskins@...ell.com>
---

 kernel/spinlock.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/kernel/spinlock.c b/kernel/spinlock.c
index c9bcf1b..b0e7f02 100644
--- a/kernel/spinlock.c
+++ b/kernel/spinlock.c
@@ -117,7 +117,7 @@ EXPORT_SYMBOL(__write_trylock_irqsave);
  * not re-enabled during lock-acquire (which the preempt-spin-ops do):
  */
 #if !defined(CONFIG_PREEMPT) || !defined(CONFIG_SMP) || \
-	defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_PREEMPT_RT)
+	defined(CONFIG_DEBUG_LOCK_ALLOC)
 
 void __lockfunc __read_lock(raw_rwlock_t *lock)
 {

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
