Message-ID: <20230203200138.3872873-4-jstultz@google.com>
Date: Fri, 3 Feb 2023 20:01:38 +0000
From: John Stultz <jstultz@...gle.com>
To: LKML <linux-kernel@...r.kernel.org>
Cc: John Stultz <jstultz@...gle.com>,
Davidlohr Bueso <dave@...olabs.net>,
"Paul E. McKenney" <paulmck@...nel.org>,
Josh Triplett <josh@...htriplett.org>,
Joel Fernandes <joel@...lfernandes.org>,
Juri Lelli <juri.lelli@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>
Subject: [PATCH v2 4/4] locktorture: With nested locks, occasionally skip main lock

If we're using nested locking to stress things, occasionally
skip taking the main lock, so that we can get some different
contention patterns between the writers (to hopefully get two
disjoint blocked trees).

This patch was inspired by earlier work by Connor O'Brien.

Comments or feedback would be greatly appreciated!
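
For reference, the skip decision added below works out to roughly a
1-in-100 chance per writer iteration, and only when nested locks are
configured (nlocks > 0). Here is a minimal userspace sketch of that
probability check; fake_torture_random() and the fixed nlocks value
are stand-ins for the kernel's torture_random() and module parameter,
not actual locktorture code:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-ins for torture_random() and the nlocks module parameter. */
static unsigned int fake_torture_random(void)
{
	return (unsigned int)rand();
}

static int nlocks = 4;

int main(void)
{
	int iters = 1000000, skipped = 0;

	srand(0);
	for (int i = 0; i < iters; i++) {
		/*
		 * Same shape as the check in this patch: only when nested
		 * locks are in use, and only about 1 time in 100.
		 */
		bool skip_main_lock = nlocks && !(fake_torture_random() % 100);

		if (skip_main_lock)
			skipped++;
	}
	printf("skipped main lock on %d of %d iterations (~%.2f%%)\n",
	       skipped, iters, 100.0 * skipped / iters);
	return 0;
}

Skipping only occasionally keeps the main lock contended on most
iterations while still producing iterations where the writers can only
block on different nested locks.
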
Cc: Davidlohr Bueso <dave@...olabs.net>
Cc: "Paul E. McKenney" <paulmck@...nel.org>
Cc: Josh Triplett <josh@...htriplett.org>
Cc: Joel Fernandes <joel@...lfernandes.org>
Cc: Juri Lelli <juri.lelli@...hat.com>
Cc: Valentin Schneider <vschneid@...hat.com>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>
Signed-off-by: John Stultz <jstultz@...gle.com>
---
kernel/locking/locktorture.c | 28 ++++++++++++++++++----------
1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index 5fb17a5057b5..6f56dcb8a496 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -745,6 +745,7 @@ static int lock_torture_writer(void *arg)
int tid = lwsp - cxt.lwsa;
DEFINE_TORTURE_RANDOM(rand);
u32 lockset_mask;
+ bool skip_main_lock;

VERBOSE_TOROUT_STRING("lock_torture_writer task started");
set_user_nice(current, MAX_NICE);
@@ -754,21 +755,28 @@ static int lock_torture_writer(void *arg)
schedule_timeout_uninterruptible(1);
lockset_mask = torture_random(&rand);
+ skip_main_lock = nlocks && !(torture_random(&rand) % 100);
+
cxt.cur_ops->task_boost(&rand);
if (cxt.cur_ops->nested_lock)
cxt.cur_ops->nested_lock(tid, lockset_mask);
- cxt.cur_ops->writelock(tid);
- if (WARN_ON_ONCE(lock_is_write_held))
- lwsp->n_lock_fail++;
- lock_is_write_held = true;
- if (WARN_ON_ONCE(atomic_read(&lock_is_read_held)))
- lwsp->n_lock_fail++; /* rare, but... */
- lwsp->n_lock_acquired++;
+ if (!skip_main_lock) {
+ cxt.cur_ops->writelock(tid);
+ if (WARN_ON_ONCE(lock_is_write_held))
+ lwsp->n_lock_fail++;
+ lock_is_write_held = true;
+ if (WARN_ON_ONCE(atomic_read(&lock_is_read_held)))
+ lwsp->n_lock_fail++; /* rare, but... */
+
+ lwsp->n_lock_acquired++;
+ }
cxt.cur_ops->write_delay(&rand);
- lock_is_write_held = false;
- WRITE_ONCE(last_lock_release, jiffies);
- cxt.cur_ops->writeunlock(tid);
+ if (!skip_main_lock) {
+ lock_is_write_held = false;
+ WRITE_ONCE(last_lock_release, jiffies);
+ cxt.cur_ops->writeunlock(tid);
+ }
if (cxt.cur_ops->nested_unlock)
cxt.cur_ops->nested_unlock(tid, lockset_mask);
--
2.39.1.519.gcb327c4b5f-goog