Message-ID: <158923077669.390.10389843741359424479.tip-bot2@tip-bot2>
Date: Mon, 11 May 2020 20:59:36 -0000
From: "tip-bot2 for Paul E. McKenney" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
"Paul E. McKenney" <paulmck@...nel.org>, x86 <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: [tip: core/rcu] rcutorture: Add test of holding scheduler locks
across rcu_read_unlock()
The following commit has been merged into the core/rcu branch of tip:
Commit-ID: 52b1fc3f798d02a3a9d1cf7a84e98a795223410a
Gitweb: https://git.kernel.org/tip/52b1fc3f798d02a3a9d1cf7a84e98a795223410a
Author: Paul E. McKenney <paulmck@...nel.org>
AuthorDate: Sat, 28 Mar 2020 18:53:25 -07:00
Committer: Paul E. McKenney <paulmck@...nel.org>
CommitterDate: Mon, 27 Apr 2020 11:03:50 -07:00
rcutorture: Add test of holding scheduler locks across rcu_read_unlock()
Now that it should be safe to hold scheduler locks across
rcu_read_unlock(), even in cases where the corresponding RCU read-side
critical section might have been preempted and boosted, this commit adds
a test of that capability to rcutorture. This has been tested on current
mainline (which can deadlock in this situation), and lockdep duly reported
the expected deadlock. On -rcu, lockdep has remained silent, thus far anyway.
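For readers skimming the patch, the usage pattern being validated is an
RCU read-side critical section whose final rcu_read_unlock() runs with a
scheduler lock held. A minimal sketch of that pattern (illustrative only,
assuming a preemptible-RCU kernel; the actual rcutorture change is the
hunk below):

	unsigned long flags;

	rcu_read_lock();
	/* Read-side accesses; this section may be preempted and boosted. */
	raw_spin_lock_irqsave(&current->pi_lock, flags);
	rcu_read_unlock();  /* Must not deadlock despite pi_lock being held. */
	raw_spin_unlock_irqrestore(&current->pi_lock, flags);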
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Juri Lelli <juri.lelli@...hat.com>
Cc: Vincent Guittot <vincent.guittot@...aro.org>
Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
---
kernel/rcu/rcutorture.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index 5453bd5..b348cf8 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -1147,6 +1147,7 @@ static void rcutorture_one_extend(int *readstate, int newstate,
struct torture_random_state *trsp,
struct rt_read_seg *rtrsp)
{
+ unsigned long flags;
int idxnew = -1;
int idxold = *readstate;
int statesnew = ~*readstate & newstate;
@@ -1181,8 +1182,15 @@ static void rcutorture_one_extend(int *readstate, int newstate,
rcu_read_unlock_bh();
if (statesold & RCUTORTURE_RDR_SCHED)
rcu_read_unlock_sched();
- if (statesold & RCUTORTURE_RDR_RCU)
+ if (statesold & RCUTORTURE_RDR_RCU) {
+ bool lockit = !statesnew && !(torture_random(trsp) & 0xffff);
+
+ if (lockit)
+ raw_spin_lock_irqsave(&current->pi_lock, flags);
cur_ops->readunlock(idxold >> RCUTORTURE_RDR_SHIFT);
+ if (lockit)
+ raw_spin_unlock_irqrestore(&current->pi_lock, flags);
+ }
/* Delay if neither beginning nor end and there was a change. */
if ((statesnew || statesold) && *readstate && newstate)
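For reference, the "lockit" path above fires only when no new read-side
state is being entered (!statesnew) and the low 16 bits of the
torture-random value are all zero, so current->pi_lock is held across
readunlock() only about once per 65536 eligible unlocks rather than on
every pass.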