Message-ID: <ae20c4a1-1591-4b09-6de2-e55c30297d24@redhat.com>
Date: Wed, 17 Mar 2021 09:21:50 -0400
From: Waiman Long <longman@...hat.com>
To: Davidlohr Bueso <dave@...olabs.net>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Boqun Feng <boqun.feng@...il.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
linux-kernel@...r.kernel.org, Juri Lelli <juri.lelli@...hat.com>
Subject: Re: [PATCH 4/4] locking/locktorture: Fix incorrect use of
ww_acquire_ctx in ww_mutex test
On 3/17/21 1:16 AM, Davidlohr Bueso wrote:
> On Tue, 16 Mar 2021, Waiman Long wrote:
>
>> The ww_acquire_ctx structure for ww_mutex needs to persist for a complete
>> lock/unlock cycle. In the ww_mutex test in locktorture, however, both
>> ww_acquire_init() and ww_acquire_fini() are called within the lock
>> function only. This causes a lockdep splat of "WARNING: Nested lock
>> was not taken" when lockdep is enabled in the kernel.
>>
>> To fix this problem, we need to move the ww_acquire_fini() after the
>> ww_mutex_unlock() in torture_ww_mutex_unlock(). In other words, we need
>> to pass state information from the lock function to the unlock function.
>
> Right, and afaict this _is_ the way ww_acquire_fini() should be called:
>
> * Releases a w/w acquire context. This must be called _after_ all acquired w/w
> * mutexes have been released with ww_mutex_unlock.
>
>> Change the writelock and writeunlock function prototypes to allow that
>> and change the torture_ww_mutex_lock() and torture_ww_mutex_unlock()
>> accordingly.
>
> But wouldn't just making ctx a global variable be enough instead? That way
> we don't deal with memory allocation for every lock/unlock operation (yuck).
> Plus the ENOMEM would need to be handled/propagated accordingly - the code
> really doesn't expect any failure from ->writelock().
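
For reference, the ordering that comment documents maps onto locktorture
roughly like this (a minimal sketch of one lock/unlock cycle, using the
torture_ww_class/torture_ww_mutex_0 names; the -EDEADLK backoff path and
error handling are omitted):

	struct ww_acquire_ctx ctx;

	ww_acquire_init(&ctx, &torture_ww_class);
	ww_mutex_lock(&torture_ww_mutex_0, &ctx);  /* -EDEADLK retry omitted */
	ww_acquire_done(&ctx);                     /* all mutexes acquired */
	/* ... critical section ... */
	ww_mutex_unlock(&torture_ww_mutex_0);
	ww_acquire_fini(&ctx);                     /* only after every unlock */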
The ctx should be per-thread to track potential locking conflicts. Since
there are as many locking threads as there are cpus, we can't use one
global variable to do that. I was thinking about using per-cpu variables,
but the locktorture kthreads are not pinned to specific cpus. That led me
to the current scheme of allocating a ctx at lock time and freeing it at
unlock time.
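
Roughly, the idea is something like the following (illustrative only, not
the exact patch; the real change also adjusts the lock_torture_ops
writelock/writeunlock prototypes so the pointer can be passed through):

	static struct ww_acquire_ctx *torture_ww_mutex_lock(void)
	{
		struct ww_acquire_ctx *ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);

		if (!ctx)
			return NULL;    /* ENOMEM must be handled by the caller */
		ww_acquire_init(ctx, &torture_ww_class);
		ww_mutex_lock(&torture_ww_mutex_0, ctx);  /* -EDEADLK retry omitted */
		return ctx;             /* handed back through the framework */
	}

	static void torture_ww_mutex_unlock(struct ww_acquire_ctx *ctx)
	{
		ww_mutex_unlock(&torture_ww_mutex_0);
		ww_acquire_fini(ctx);   /* after the unlock, per the comment above */
		kfree(ctx);
	}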
Another alternative is to add per-thread init/fini methods that set up a
per-thread context which is then passed to the locking functions. By doing
that, we only need one kmalloc/kfree pair per running locktorture kthread.
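
Something along these lines (hypothetical method names, just to show the
shape of the idea):

	static void *torture_ww_mutex_init_thread(void)
	{
		/* one allocation per locktorture kthread */
		return kmalloc(sizeof(struct ww_acquire_ctx), GFP_KERNEL);
	}

	static void torture_ww_mutex_fini_thread(void *arg)
	{
		kfree(arg);     /* one free when the kthread exits */
	}

	/*
	 * ->writelock()/->writeunlock() would then receive this pointer and
	 * do ww_acquire_init()/ww_acquire_fini() on it without allocating.
	 */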
Cheers,
Longman