Message-Id: <20191113102855.868390100@infradead.org>
Date: Wed, 13 Nov 2019 11:21:19 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: peterz@...radead.org, mingo@...nel.org, will@...nel.org
Cc: oleg@...hat.com, tglx@...utronix.de, linux-kernel@...r.kernel.org,
bigeasy@...utronix.de, juri.lelli@...hat.com, williams@...hat.com,
bristot@...hat.com, longman@...hat.com, dave@...olabs.net,
jack@...e.com
Subject: [PATCH 4/5] locking/percpu-rwsem: Extract __percpu_down_read_trylock()
In preparation for removing the embedded rwsem and building a custom
lock, extract the read-trylock primitive.
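
For illustration only, the ordering the trylock relies on can be modelled
with C11 atomics in userspace. Everything below is an analogue invented
for this sketch (the toy_* names, a single shared counter standing in for
the per-CPU read_count), not kernel code:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy analogue of the reader fast path. The kernel uses a per-CPU
 * read_count and relies on preemption being disabled so the failure
 * path decrements the same CPU's counter; a single atomic_int has to
 * stand in for that here. */
struct toy_percpu_rwsem {
	atomic_int read_count;      /* stands in for *sem->read_count */
	atomic_bool writer_pending; /* stands in for sem->readers_block */
};

static bool toy_down_read_trylock(struct toy_percpu_rwsem *sem)
{
	/* Publish ourselves as a reader first... */
	atomic_fetch_add_explicit(&sem->read_count, 1, memory_order_relaxed);

	/* ...full barrier, mirroring the smp_mb() the kernel function
	 * issues between the increment and the readers_block check. */
	atomic_thread_fence(memory_order_seq_cst);

	/* No pending writer: the read-side critical section starts here,
	 * matching the acquire load in the patch below. */
	if (!atomic_load_explicit(&sem->writer_pending, memory_order_acquire))
		return true;

	/* Writer pending: undo the increment and fail. The kernel also
	 * wakes the writer so it re-evaluates readers_active_check(). */
	atomic_fetch_sub_explicit(&sem->read_count, 1, memory_order_relaxed);
	return false;
}

int main(void)
{
	struct toy_percpu_rwsem sem = { 0 };

	printf("no writer:      trylock=%d\n", toy_down_read_trylock(&sem));
	atomic_store(&sem.writer_pending, true);
	printf("writer pending: trylock=%d\n", toy_down_read_trylock(&sem));
	return 0;
}
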
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 kernel/locking/percpu-rwsem.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -45,7 +45,7 @@ void percpu_free_rwsem(struct percpu_rw_
 }
 EXPORT_SYMBOL_GPL(percpu_free_rwsem);
 
-bool __percpu_down_read(struct percpu_rw_semaphore *sem, bool try)
+static bool __percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
 {
 	__this_cpu_inc(*sem->read_count);
 
@@ -70,14 +70,21 @@ bool __percpu_down_read(struct percpu_rw
 	 * If !readers_block the critical section starts here, matched by the
 	 * release in percpu_up_write().
 	 */
-	if (likely(!smp_load_acquire(&sem->readers_block)))
+	if (likely(!atomic_read_acquire(&sem->readers_block)))
 		return true;
 
-	/*
-	 * Per the above comment; we still have preemption disabled and
-	 * will thus decrement on the same CPU as we incremented.
-	 */
-	__percpu_up_read(sem);
+	__this_cpu_dec(*sem->read_count);
+
+	/* Prod writer to re-evaluate readers_active_check() */
+	rcuwait_wake_up(&sem->writer);
+
+	return false;
+}
+
+bool __percpu_down_read(struct percpu_rw_semaphore *sem, bool try)
+{
+	if (__percpu_down_read_trylock(sem))
+		return true;
 
 	if (try)
 		return false;
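
For context on the failure path above: rcuwait_wake_up() pairs with the
writer sleeping until readers_active_check() holds (roughly,
rcuwait_wait_event(&sem->writer, readers_active_check(sem)) in
percpu_down_write()). Continuing the userspace sketch from the changelog,
the writer side would look something like this (illustrative only; the
kernel sleeps via rcuwait instead of spinning):

static void toy_down_write(struct toy_percpu_rwsem *sem)
{
	/* Block new readers (the kernel sets sem->readers_block)... */
	atomic_store_explicit(&sem->writer_pending, true,
			      memory_order_relaxed);

	/* ...full barrier, mirroring the writer-side smp_mb(), then wait
	 * for the active readers to drain. A reader that loses the race
	 * in toy_down_read_trylock() decrements read_count again, which
	 * is why the kernel reader must wake the writer to re-check. */
	atomic_thread_fence(memory_order_seq_cst);
	while (atomic_load_explicit(&sem->read_count, memory_order_acquire))
		; /* kernel: sleep on sem->writer, woken by the reader */
}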