Message-ID: <20210317051605.popetodgwbr47ha2@offworld>
Date: Tue, 16 Mar 2021 22:16:05 -0700
From: Davidlohr Bueso <dave@...olabs.net>
To: Waiman Long <longman@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Boqun Feng <boqun.feng@...il.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
linux-kernel@...r.kernel.org, Juri Lelli <juri.lelli@...hat.com>
Subject: Re: [PATCH 4/4] locking/locktorture: Fix incorrect use of
ww_acquire_ctx in ww_mutex test

On Tue, 16 Mar 2021, Waiman Long wrote:
>The ww_acquire_ctx structure for ww_mutex needs to persist for a complete
>lock/unlock cycle. In the ww_mutex test in locktorture, however, both
>ww_acquire_init() and ww_acquire_fini() are called within the lock
>function only. This causes a lockdep splat of "WARNING: Nested lock
>was not taken" when lockdep is enabled in the kernel.
>
>To fix this problem, we need to move the ww_acquire_fini() after the
>ww_mutex_unlock() in torture_ww_mutex_unlock(). In other words, we need
>to pass state information from the lock function to the unlock function.

Right, and afaict this _is_ the way ww_acquire_fini() should be called:
* Releases a w/w acquire context. This must be called _after_ all acquired w/w
* mutexes have been released with ww_mutex_unlock.
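
IOW, the expected pattern is something like this (minimal sketch with
made-up lock names; -EDEADLK/backoff handling elided):

	struct ww_acquire_ctx ctx;

	ww_acquire_init(&ctx, &some_ww_class);	/* begin acquire phase */
	ww_mutex_lock(&lock_a, &ctx);		/* may return -EDEADLK; elided */
	ww_mutex_lock(&lock_b, &ctx);
	ww_acquire_done(&ctx);			/* optional: all locks taken */

	/* ... critical section ... */

	ww_mutex_unlock(&lock_b);
	ww_mutex_unlock(&lock_a);
	ww_acquire_fini(&ctx);			/* only after every unlock */
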
>Change the writelock and writeunlock function prototypes to allow that
>and change the torture_ww_mutex_lock() and torture_ww_mutex_unlock()
>accordingly.

But wouldn't just making ctx a global variable be enough instead? That way
we don't deal with memory allocation for every lock/unlock operation (yuck).
Plus the ENOMEM would need to be handled/propagated accordingly - the code
really doesn't expect any failure from ->writelock().
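
Something like the below (untested):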
diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index 0ab94e1f1276..606c0f6c1657 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -362,6 +362,8 @@ static DEFINE_WW_MUTEX(torture_ww_mutex_0, &torture_ww_class);
 static DEFINE_WW_MUTEX(torture_ww_mutex_1, &torture_ww_class);
 static DEFINE_WW_MUTEX(torture_ww_mutex_2, &torture_ww_class);
 
+static struct ww_acquire_ctx ctx;
+
 static int torture_ww_mutex_lock(void)
 __acquires(torture_ww_mutex_0)
 __acquires(torture_ww_mutex_1)
@@ -372,7 +374,6 @@ __acquires(torture_ww_mutex_2)
 		struct list_head link;
 		struct ww_mutex *lock;
 	} locks[3], *ll, *ln;
-	struct ww_acquire_ctx ctx;
 
 	locks[0].lock = &torture_ww_mutex_0;
 	list_add(&locks[0].link, &list);
@@ -403,7 +404,6 @@ __acquires(torture_ww_mutex_2)
 		list_move(&ll->link, &list);
 	}
 
-	ww_acquire_fini(&ctx);
 	return 0;
 }
 
@@ -415,6 +415,8 @@ __releases(torture_ww_mutex_2)
 	ww_mutex_unlock(&torture_ww_mutex_0);
 	ww_mutex_unlock(&torture_ww_mutex_1);
 	ww_mutex_unlock(&torture_ww_mutex_2);
+
+	ww_acquire_fini(&ctx);
 }
 
 static struct lock_torture_ops ww_mutex_lock_ops = {
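
For context, the torture writer thread basically does (paraphrasing
lock_torture_writer(), details elided):

	while (!torture_must_stop()) {
		cxt.cur_ops->writelock();	/* -> torture_ww_mutex_lock()   */
		/* ... hold the lock for a bit ... */
		cxt.cur_ops->writeunlock();	/* -> torture_ww_mutex_unlock() */
	}

so the ctx has to stay live from ->writelock() until the matching
->writeunlock() anyway.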