Message-ID: <20181002090302.GA116695@gmail.com>
Date: Tue, 2 Oct 2018 11:03:02 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Waiman Long <longman@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Will Deacon <will.deacon@....com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/5] locking/lockdep: Add a faster path in
__lock_release()

* Waiman Long <longman@...hat.com> wrote:
> When __lock_release() is called, the most likely unlock scenario is
> on the innermost lock in the chain. In this case, we can skip some of
> the checks and provide a faster path to completion.
>
> Signed-off-by: Waiman Long <longman@...hat.com>
> ---
> kernel/locking/lockdep.c | 17 ++++++++++++++---
> 1 file changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index add0468..ca002c0 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -3625,6 +3625,13 @@ static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
> curr->lockdep_depth = i;
> curr->curr_chain_key = hlock->prev_chain_key;
>
> + /*
> + * The most likely case is when the unlock is on the innermost
> + * lock. In this case, we are done!
> + */
> + if (i == depth - 1)
> + return 1;
> +
> if (reacquire_held_locks(curr, depth, i + 1))
> return 0;
>
> @@ -3632,10 +3639,14 @@ static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
> * We had N bottles of beer on the wall, we drank one, but now
> * there's not N-1 bottles of beer left on the wall...
> */
> - if (DEBUG_LOCKS_WARN_ON(curr->lockdep_depth != depth - 1))
> - return 0;
> + DEBUG_LOCKS_WARN_ON(curr->lockdep_depth != depth - 1);
>
> - return 1;
> + /*
> + * Since reacquire_held_locks() would have called check_chain_key()
> + * indirectly via __lock_acquire(), we don't need to do it again
> + * on return.
> + */
> + return 0;
Minor nit:
  s/depth - 1/depth-1/
for slightly better readability.

Thanks,
Ingo