Message-ID: <f7f4151d-6514-be7b-1915-37f19411ca96@redhat.com>
Date:   Tue, 4 Feb 2020 11:57:09 -0500
From:   Waiman Long <longman@...hat.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Ingo Molnar <mingo@...hat.com>, Will Deacon <will.deacon@....com>,
        linux-kernel@...r.kernel.org, Bart Van Assche <bvanassche@....org>
Subject: Re: [PATCH v5 6/7] locking/lockdep: Reuse freed chain_hlocks entries

On 2/4/20 11:26 AM, Waiman Long wrote:
> On 2/4/20 11:12 AM, Waiman Long wrote:
>> On 2/4/20 10:42 AM, Peter Zijlstra wrote:
>>> On Mon, Feb 03, 2020 at 11:41:46AM -0500, Waiman Long wrote:
>>>> +	/*
>>>> +	 * We require a minimum of 2 (u16) entries to encode a freelist
>>>> +	 * 'pointer'.
>>>> +	 */
>>>> +	req = max(req, 2);
>>> Would something simple like the below not avoid that whole 1 entry
>>> 'chain' nonsense?
>>>
>>> It boots and passes the selftests, so it must be perfect :-)
>>>
>>> --- a/kernel/locking/lockdep.c
>>> +++ b/kernel/locking/lockdep.c
>>> @@ -3163,7 +3163,7 @@ static int validate_chain(struct task_st
>>>  	 * (If lookup_chain_cache_add() return with 1 it acquires
>>>  	 * graph_lock for us)
>>>  	 */
>>> -	if (!hlock->trylock && hlock->check &&
>>> +	if (!chain_head && !hlock->trylock && hlock->check &&
>>>  	    lookup_chain_cache_add(curr, hlock, chain_key)) {
>>>  		/*
>>>  		 * Check whether last held lock:
>>>
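(An aside on the 2-entry minimum quoted above: a freed block has to
record the index of the next free block on the freelist, and that
index can be wider than a single u16 slot, so it gets split across two
entries. A hypothetical sketch of such an encoding -- the names and
flag layout here are illustrative, not taken from the patch:

	/* lockdep.c: static u16 chain_hlocks[MAX_LOCKDEP_CHAIN_HLOCKS]; */
	#define CHAIN_BLK_FLAG	(1U << 15)	/* entry holds freelist metadata */

	/* Encode the freelist 'pointer' (next free block index) in 2 u16s. */
	static inline void init_chain_block(int offset, int next)
	{
		chain_hlocks[offset]     = (next >> 16) | CHAIN_BLK_FLAG;
		chain_hlocks[offset + 1] = (u16)next;
	}

	/* Decode the next free block index from a freed block. */
	static inline int chain_block_next(int offset)
	{
		return ((chain_hlocks[offset] & ~CHAIN_BLK_FLAG) << 16) |
		       chain_hlocks[offset + 1];
	}

A block of size 1 cannot hold that encoding, hence the max(req, 2).)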
>> Well, I think that will eliminate the 1-entry chains for the process
>> context. However, I think we can still have a 1-entry chain in the irq
>> context, as long as there are process-context locks in front of it.
>>
>> I think this fix is still worthwhile, as it will eliminate some of the
>> 1-entry chains.
> Sorry, I think I misread the code. This patch will eliminate some
> cross-context checks. How about something like:
>
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 32406ef0d6a2..d746897b638f 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -2931,7 +2931,7 @@ static int validate_chain(struct task_struct *curr,
>          * (If lookup_chain_cache_add() return with 1 it acquires
>          * graph_lock for us)
>          */
> -       if (!hlock->trylock && hlock->check &&
> +       if ((chain_head != 1) && !hlock->trylock && hlock->check &&
>             lookup_chain_cache_add(curr, hlock, chain_key)) {
>                 /*
>                  * Check whether last held lock:
> @@ -3937,7 +3937,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsign
>         hlock->prev_chain_key = chain_key;
>         if (separate_irq_context(curr, hlock)) {
>                 chain_key = INITIAL_CHAIN_KEY;
> -               chain_head = 1;
> +               chain_head = 2; /* Head of irq context chain */
>         }
>         chain_key = iterate_chain_key(chain_key, class_idx);
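
(Spelling out the intent of chain_head = 2 above -- an illustrative
sketch of how the validate_chain() condition would then read, not the
exact patch text:

	/*
	 * chain_head == 1: the very first lock held by the task; a genuine
	 * 1-entry chain with nothing in front of it, so the chain-cache
	 * lookup can be skipped.
	 * chain_head == 2: first lock of a new irq context; process-context
	 * locks may still be held in front of it, so keep validating.
	 */
	if ((chain_head != 1) && !hlock->trylock && hlock->check &&
	    lookup_chain_cache_add(curr, hlock, chain_key)) {
		/* ... existing dependency checks ... */
	}

Only the outermost head would skip the lookup; an irq-context head
still goes through validation.)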

Wait, it is possible that we can have a deadlock like this:

  cpu 0               cpu 1
  -----               -----
  lock A              lock B
  <irq>               <irq>
  lock B              lock A
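
In code form (an illustrative sketch; the lock names and handlers are
made up):

	#include <linux/spinlock.h>
	#include <linux/interrupt.h>

	static DEFINE_SPINLOCK(A);
	static DEFINE_SPINLOCK(B);

	/* CPU 0, process context: holds A with irqs enabled. */
	static void cpu0_process(void)
	{
		spin_lock(&A);
		/* <- an interrupt can arrive here */
		spin_unlock(&A);
	}

	/* CPU 0, irq context: takes B -- a 1-entry chain {B}. */
	static irqreturn_t cpu0_irq(int irq, void *dev)
	{
		spin_lock(&B);
		spin_unlock(&B);
		return IRQ_HANDLED;
	}

	/*
	 * CPU 1 runs the mirror image: process context takes B with irqs
	 * enabled, and its irq handler takes A -- a 1-entry chain {A}.
	 */

If both interrupts fire while the process-context locks are held, each
CPU ends up spinning on the lock the other holds. Note that both
irq-side acquisitions are 1-entry chains.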
 
If we eliminate 1-entry chains, will that impact our ability to detect
this kind of deadlock?

Thanks,
Longman
