Message-ID: <20190424070959.GE4038@hirez.programming.kicks-ass.net>
Date:   Wed, 24 Apr 2019 09:09:59 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Waiman Long <longman@...hat.com>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Ingo Molnar <mingo@...hat.com>,
        Will Deacon <will.deacon@....com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
        the arch/x86 maintainers <x86@...nel.org>,
        Davidlohr Bueso <dave@...olabs.net>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        huang ying <huang.ying.caritas@...il.com>
Subject: Re: [PATCH v4 14/16] locking/rwsem: Guard against making count
 negative

On Tue, Apr 23, 2019 at 03:12:16PM -0400, Waiman Long wrote:
> That is true in general, but doing preempt_disable/enable across a
> function boundary is ugly and prone to further problems down the road.

We do worse things in this code, and the thing Linus proposes is
actually quite simple, something like so:

---
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -912,7 +904,7 @@ rwsem_down_read_slowpath(struct rw_semap
 			raw_spin_unlock_irq(&sem->wait_lock);
 			break;
 		}
-		schedule();
+		schedule_preempt_disabled();
 		lockevent_inc(rwsem_sleep_reader);
 	}
 
@@ -1121,6 +1113,7 @@ static struct rw_semaphore *rwsem_downgr
  */
 inline void __down_read(struct rw_semaphore *sem)
 {
+	preempt_disable();
 	if (unlikely(atomic_long_fetch_add_acquire(RWSEM_READER_BIAS,
 			&sem->count) & RWSEM_READ_FAILED_MASK)) {
 		rwsem_down_read_slowpath(sem, TASK_UNINTERRUPTIBLE);
@@ -1129,10 +1122,12 @@ inline void __down_read(struct rw_semaph
 	} else {
 		rwsem_set_reader_owned(sem);
 	}
+	preempt_enable();
 }
 
 static inline int __down_read_killable(struct rw_semaphore *sem)
 {
+	preempt_disable();
 	if (unlikely(atomic_long_fetch_add_acquire(RWSEM_READER_BIAS,
 			&sem->count) & RWSEM_READ_FAILED_MASK)) {
 		if (IS_ERR(rwsem_down_read_slowpath(sem, TASK_KILLABLE)))
@@ -1142,6 +1137,7 @@ static inline int __down_read_killable(s
 	} else {
 		rwsem_set_reader_owned(sem);
 	}
+	preempt_enable();
 	return 0;
 }
 

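For reference, the schedule() -> schedule_preempt_disabled() change in
the first hunk is what keeps the slowpath correct once it is entered
with preemption disabled: schedule_preempt_disabled() re-enables
preemption just around the actual context switch. Its definition in
kernel/sched/core.c is, roughly:

	void __sched schedule_preempt_disabled(void)
	{
		/* Re-enable preemption without an immediate resched check. */
		sched_preempt_enable_no_resched();
		schedule();
		/* Return to the caller with preemption disabled again. */
		preempt_disable();
	}

The idea being that a reader can no longer be preempted while its
transient RWSEM_READER_BIAS is outstanding, which bounds the number of
such biases by the number of CPUs.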