Message-ID: <20151020124218.GA3384@gmail.com>
Date: Tue, 20 Oct 2015 08:42:19 -0400
From: Jerome Glisse <j.glisse@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org,
Jérôme Glisse <jglisse@...hat.com>,
Ingo Molnar <mingo@...nel.org>,
Sasha Levin <sasha.levin@...cle.com>
Subject: Re: [PATCH] locking/lockdep: Fix expected depth value in
__lock_release()
On Tue, Oct 20, 2015 at 02:18:53PM +0200, Peter Zijlstra wrote:
> On Mon, Oct 19, 2015 at 03:30:28PM -0400, j.glisse@...il.com wrote:
> > From: Jérôme Glisse <jglisse@...hat.com>
> >
> > In __lock_release() we remove one entry from the stack and rebuild
> > the hash chain by re-adding the entries above the one we just
> > removed. If the removed entry sat between two entries of the same
> > class, those two entries may be coalesced into a single entry,
> > which in turn means that the lockdep_depth value is not
> > incremented, so the expected lockdep_depth value after the
> > operation is wrong, triggering an unjustified WARN_ONCE() at the
> > end of __lock_release().
>
> This is the nest_lock stuff, right? Where it checks:
>
> 	if (hlock->class_idx == class_idx && nest_lock) {
> 		...
> 		return 1;
> 	}
Yes, this code.
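
For anyone following along: __lock_release() pops the entry being
released and then replays every entry that was above it through
__lock_acquire(), roughly like this (my simplified sketch of
kernel/locking/lockdep.c, argument list trimmed):

	/* pop entry i, then re-add everything that sat above it */
	curr->lockdep_depth = i;
	for (i++; i < depth; i++) {
		hlock = curr->held_locks + i;
		if (!__lock_acquire(hlock->instance, ...))
			return 0;
	}

	if (DEBUG_LOCKS_WARN_ON(curr->lockdep_depth != depth - 1))
		return 0;

If one of the replayed entries takes the nest_lock branch above and
merges into the entry now below it (references++ instead of a new
stack slot), the rebuilt stack ends up one entry shorter than
depth - 1 and the warning fires even though nothing is actually wrong.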
>
> What code did you find that triggered this? That is, what code is taking
> nested locks with other locks in the middle? (Not wrong per se, just
> curious how that would come about).
Well, I am not able to reproduce it myself, but it happens as part of
mm_drop_all_locks(). Which lock triggers it I am unsure of, since all
the i_mmap_rwsem locks are taken one after the other, and the same
goes for the anon_vma rwsems, so those should already coalesce
properly. My guess is that the code taking all the locks also holds a
mutex, and once all the vma locks are dropped that mutex entry
coalesces with mm_all_locks_mutex.
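
For reference, the pattern in mm_take_all_locks() looks roughly like
this (paraphrased from mm/mmap.c, details and error handling elided;
mapping and anon_vma stand in for the per-vma objects), with mmap_sem
acting as the nest_lock for every rwsem taken inside the loop:

	mutex_lock(&mm_all_locks_mutex);
	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		if (vma->vm_file && vma->vm_file->f_mapping)
			down_write_nest_lock(&mapping->i_mmap_rwsem,
					     &mm->mmap_sem);
		if (vma->anon_vma)
			down_write_nest_lock(&anon_vma->root->rwsem,
					     &mm->mmap_sem);
	}

So the held-lock stack is mmap_sem, then mm_all_locks_mutex, then a
long run of i_mmap_rwsem and anon_vma entries that all share mmap_sem
as their nest_lock, and dropping an entry in the middle of that stack
is what gives __lock_release() a chance to coalesce its neighbours.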
>
> > This patch adjusts the expected depth value by decrementing it
> > whenever two entries that were previously separate inside the
> > stack are coalesced into one.
>
> Would it not make more sense to scan the entire hlock stack on
> __lock_acquire() and avoid getting collapsible entries in the first
> place?
>
> Something like so...
It would work too; it is probably more compute intensive than my
solution, but this is lockdep code so I guess that is fine. I also do
not know whether we lose any valuable information by not keeping the
stack ordered, since the stack order is what lets one check the order
in which locks are taken.
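
To convince myself of the semantics, here is a quick userspace toy
model of that scan (my own sketch, not kernel code); an unrelated
lock in the middle of the stack no longer prevents the re-acquired
class from coalescing with its earlier entry:

#include <stdio.h>

struct held { int class_idx; void *nest_lock; int references; };

static struct held stack[64];
static int depth;

/* Mimics the proposed __lock_acquire() change: scan the whole stack
 * for a same-class entry with the same nest_lock, not just the top. */
static void acquire(int class_idx, void *nest_lock)
{
	int i;

	if (depth && nest_lock) {
		for (i = depth; i; --i) {
			struct held *h = &stack[i - 1];

			if (h->class_idx == class_idx &&
			    h->nest_lock == nest_lock) {
				h->references = h->references ? h->references + 1 : 2;
				return;	/* coalesced, depth unchanged */
			}
		}
	}
	stack[depth].class_idx = class_idx;
	stack[depth].nest_lock = nest_lock;
	stack[depth].references = 0;
	depth++;
}

int main(void)
{
	int nest;	/* stands in for the nest_lock's lockdep_map */

	acquire(1, &nest);	/* first lock of class 1 */
	acquire(2, NULL);	/* unrelated lock in the middle */
	acquire(1, &nest);	/* class 1 again: merges with entry 0 */
	printf("depth = %d\n", depth);	/* prints 2, not 3 */
	return 0;
}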
>
> ---
>  kernel/locking/lockdep.c | 22 ++++++++++++++--------
>  1 file changed, 14 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 4e49cc4c9952..6fcd98b86e7b 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -3125,15 +3125,21 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
>
>  	class_idx = class - lock_classes + 1;
>
> -	if (depth) {
> -		hlock = curr->held_locks + depth - 1;
> -		if (hlock->class_idx == class_idx && nest_lock) {
> -			if (hlock->references)
> -				hlock->references++;
> -			else
> -				hlock->references = 2;
> +	if (depth && nest_lock) {
> +		int i;
>
> -			return 1;
> +		for (i = depth; i; --i) {
> +			hlock = curr->held_locks + i - 1;
> +			if (hlock->class_idx == class_idx &&
> +			    hlock->nest_lock == nest_lock) {
> +
> +				if (hlock->references)
> +					hlock->references++;
> +				else
> +					hlock->references = 2;
> +
> +				return 1;
> +			}
>  		}
>  	}
>