Date:   Wed, 09 Aug 2017 14:04:48 +0200
From:   Mike Galbraith <efault@....de>
To:     Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        LKML <linux-kernel@...r.kernel.org>,
        linux-rt-users <linux-rt-users@...r.kernel.org>,
        Steven Rostedt <rostedt@...dmis.org>
Subject: [patch-rt] locking, rwlock-rt: do not save state multiple times in
 __write_rt_lock()


Save state prior to entering the acquisition loop; otherwise we may
initially see readers, but upon releasing ->wait_lock see none, loop
back around and, not having slept, save TASK_UNINTERRUPTIBLE as the
task's saved state.
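
Condensed, the pre-patch flow (a simplified sketch of the code removed in
the diff below, with the pi_lock and success path elided) was roughly:

	/* old flow: task state is saved on every loop iteration */
	for (;;) {
		raw_spin_lock_irqsave(&m->wait_lock, flags);
		self->saved_state = self->state;	/* 2nd pass: ->state is
							   already TASK_UNINTERRUPTIBLE */
		__set_current_state_no_track(TASK_UNINTERRUPTIBLE);
		...
		raw_spin_unlock_irqrestore(&m->wait_lock, flags);
		if (atomic_read(&lock->readers) != 0)
			schedule();	/* may be skipped, leaving ->state untouched
					   before we loop back and save it again */
	}

Hoisting the save/restore of ->saved_state out of the loop means repeated
iterations can no longer clobber the originally saved task state.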

Signed-off-by: Mike Galbraith <efault@....de>
---
 kernel/locking/rwlock-rt.c |   37 ++++++++++++++++++++-----------------
 1 file changed, 20 insertions(+), 17 deletions(-)

--- a/kernel/locking/rwlock-rt.c
+++ b/kernel/locking/rwlock-rt.c
@@ -190,30 +190,33 @@ void __sched __write_rt_lock(struct rt_r
 	/* Force readers into slow path */
 	atomic_sub(READER_BIAS, &lock->readers);
 
-	for (;;) {
-		raw_spin_lock_irqsave(&m->wait_lock, flags);
-
-		raw_spin_lock(&self->pi_lock);
-		self->saved_state = self->state;
-		__set_current_state_no_track(TASK_UNINTERRUPTIBLE);
-		raw_spin_unlock(&self->pi_lock);
+	raw_spin_lock_irqsave(&m->wait_lock, flags);
+	raw_spin_lock(&self->pi_lock);
+	self->saved_state = self->state;
+	__set_current_state_no_track(TASK_UNINTERRUPTIBLE);
+	raw_spin_unlock(&self->pi_lock);
 
+	for (;;) {
 		/* Have all readers left the critical region? */
-		if (!atomic_read(&lock->readers)) {
-			atomic_set(&lock->readers, WRITER_BIAS);
-			raw_spin_lock(&self->pi_lock);
-			__set_current_state_no_track(self->saved_state);
-			self->saved_state = TASK_RUNNING;
-			raw_spin_unlock(&self->pi_lock);
-			raw_spin_unlock_irqrestore(&m->wait_lock, flags);
-			return;
-		}
+		if (!atomic_read(&lock->readers))
+			break;
 
 		raw_spin_unlock_irqrestore(&m->wait_lock, flags);
-
 		if (atomic_read(&lock->readers) != 0)
 			schedule();
+		raw_spin_lock_irqsave(&m->wait_lock, flags);
+
+		raw_spin_lock(&self->pi_lock);
+		__set_current_state_no_track(TASK_UNINTERRUPTIBLE);
+		raw_spin_unlock(&self->pi_lock);
 	}
+
+	atomic_set(&lock->readers, WRITER_BIAS);
+	raw_spin_lock(&self->pi_lock);
+	__set_current_state_no_track(self->saved_state);
+	self->saved_state = TASK_RUNNING;
+	raw_spin_unlock(&self->pi_lock);
+	raw_spin_unlock_irqrestore(&m->wait_lock, flags);
 }
 
 int __write_rt_trylock(struct rt_rw_lock *lock)
