Message-ID: <158142530316.411.8935420422825778955.tip-bot2@tip-bot2>
Date: Tue, 11 Feb 2020 12:48:23 -0000
From: "tip-bot2 for Peter Zijlstra" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: "Peter Zijlstra (Intel)" <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Davidlohr Bueso <dbueso@...e.de>,
Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>, x86 <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: [tip: locking/core] locking/percpu-rwsem: Extract
__percpu_down_read_trylock()

The following commit has been merged into the locking/core branch of tip:

Commit-ID: 75ff64572e497578e238fefbdff221c96f29067a
Gitweb: https://git.kernel.org/tip/75ff64572e497578e238fefbdff221c96f29067a
Author: Peter Zijlstra <peterz@...radead.org>
AuthorDate: Thu, 31 Oct 2019 12:34:23 +01:00
Committer: Ingo Molnar <mingo@...nel.org>
CommitterDate: Tue, 11 Feb 2020 13:10:55 +01:00

locking/percpu-rwsem: Extract __percpu_down_read_trylock()

In preparation for removing the embedded rwsem and building a custom
lock, extract the read-trylock primitive.

Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Reviewed-by: Davidlohr Bueso <dbueso@...e.de>
Acked-by: Will Deacon <will@...nel.org>
Acked-by: Waiman Long <longman@...hat.com>
Tested-by: Juri Lelli <juri.lelli@...hat.com>
Link: https://lkml.kernel.org/r/20200131151540.098485539@infradead.org
---
 kernel/locking/percpu-rwsem.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
index becf925..b155e8e 100644
--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -45,7 +45,7 @@ void percpu_free_rwsem(struct percpu_rw_semaphore *sem)
 }
 EXPORT_SYMBOL_GPL(percpu_free_rwsem);
 
-bool __percpu_down_read(struct percpu_rw_semaphore *sem, bool try)
+static bool __percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
 {
 	__this_cpu_inc(*sem->read_count);
 
@@ -73,11 +73,18 @@ bool __percpu_down_read(struct percpu_rw_semaphore *sem, bool try)
 	if (likely(!smp_load_acquire(&sem->readers_block)))
 		return true;
 
-	/*
-	 * Per the above comment; we still have preemption disabled and
-	 * will thus decrement on the same CPU as we incremented.
-	 */
-	__percpu_up_read(sem);
+	__this_cpu_dec(*sem->read_count);
+
+	/* Prod writer to re-evaluate readers_active_check() */
+	rcuwait_wake_up(&sem->writer);
+
+	return false;
+}
+
+bool __percpu_down_read(struct percpu_rw_semaphore *sem, bool try)
+{
+	if (__percpu_down_read_trylock(sem))
+		return true;
 
 	if (try)
 		return false;
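
For readers following along, below is a minimal userspace C11 sketch (not
part of the patch, and not kernel code) of the shape this gives the reader
path: a trylock that increments the reader count, checks the writer flag
with acquire ordering, and undoes the increment on failure, with the slow
path layered on top. All identifiers here (my_percpu_rwsem,
my_down_read_trylock, my_down_read) are hypothetical; the real code uses
per-CPU counters, disabled preemption and rcuwait rather than plain
atomics.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct my_percpu_rwsem {
	atomic_int  read_count;     /* stand-in for the per-CPU read_count */
	atomic_bool readers_block;  /* stand-in for sem->readers_block */
};

static bool my_down_read_trylock(struct my_percpu_rwsem *sem)
{
	/* Optimistically account the reader first ... */
	atomic_fetch_add_explicit(&sem->read_count, 1, memory_order_relaxed);

	/* ... then check whether a writer has blocked new readers. */
	if (!atomic_load_explicit(&sem->readers_block, memory_order_acquire))
		return true;

	/* Lost the race: undo the increment.  The kernel version also
	 * wakes the writer so it re-evaluates readers_active_check(). */
	atomic_fetch_sub_explicit(&sem->read_count, 1, memory_order_relaxed);
	return false;
}

static bool my_down_read(struct my_percpu_rwsem *sem, bool try)
{
	if (my_down_read_trylock(sem))
		return true;

	if (try)
		return false;

	/* Placeholder slow path: the real one sleeps until the writer
	 * releases the lock instead of spinning. */
	while (!my_down_read_trylock(sem))
		;
	return true;
}

int main(void)
{
	struct my_percpu_rwsem sem = { 0 };

	printf("trylock, no writer:  %d\n", my_down_read_trylock(&sem)); /* 1 */
	atomic_store(&sem.readers_block, true);
	printf("trylock, writer set: %d\n", my_down_read(&sem, true));   /* 0 */
	return 0;
}

The point of the extraction is visible in the sketch: both
__percpu_down_read() today and the custom slow path this series builds
toward can share the single trylock fast path.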