Message-ID: <1549643518.34241.101.camel@acm.org>
Date:   Fri, 08 Feb 2019 08:31:58 -0800
From:   Bart Van Assche <bvanassche@....org>
To:     Will Deacon <will.deacon@....com>
Cc:     Peter Zijlstra <peterz@...radead.org>, mingo@...hat.com,
        tj@...nel.org, longman@...hat.com, johannes.berg@...el.com,
        linux-kernel@...r.kernel.org,
        Paul McKenney <paulmck@...ux.vnet.ibm.com>
Subject: Re: [PATCH v6 00/16] locking/lockdep: Add support for dynamic keys

On Fri, 2019-02-08 at 11:43 +0000, Will Deacon wrote:
> I've also been trying to understand why it's necessary to check both of the
> pending_free entries, and I'm still struggling somewhat. It's true that the
> wakeup in get_pending_free_lock() could lead to both entries being used
> without the RCU callback running in between; however, in that scenario
> any list entries marked for freeing in the first pf will have been unhashed
> and therefore made unreachable to look_up_lock_class().
> 
> So I think the concern remains that entries are somehow remaining visible
> after being zapped.
> 
> You mentioned earlier in the thread that people actually complained about
> list corruption if you only checked the current pf:
> 
>   | The list_del_rcu() call must only happen once. I ran into complaints
>   | reporting that the list_del_rcu() call triggered list corruption. This
>   | change made these complaints disappear.
> 
> Do you have any more details about these complaints (e.g. kernel logs etc)?
> Failing that, any idea how to reproduce them?

Hi Will,

The approach I use to test this patch series is to run the following shell
code for several days:

    git clone https://github.com/osandov/blktests/
    cd blktests
    make
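    # Repeat the SRP test group until a run fails.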
    while ./check -q srp; do :; done

This test not only triggers plenty of lock and unlock calls but also
frequently causes kernel modules to be loaded and unloaded.

The oldest kernel logs in the VM I use for testing this patch series are
four weeks old. Sorry, but that means these logs do not go back far enough
to retrieve the list corruption reports I mentioned in a previous e-mail.

Regarding the concern that "entries somehow remain visible after being
zapped": a previous version of this patch series added a struct list_head to
struct lock_list. That list head was used to maintain a linked list of all
elements of the list_entries[] array that are in use, and zap_class()
iterated over that list. With that approach it was not necessary for
zap_class() to check whether a list entry was being removed, because an
entry was removed from that list before zap_class() could be called again.
I removed that list head because Peter asked me to reduce the amount of
memory required at runtime.

Using one bitmap to track list entries that are in use and two bitmaps to
track list entries that are being freed implies that code that iterates over
all list entries that are in use (zap_class()) must check all three bitmaps.
The only alternative I see when using bitmaps is that zap_class() clears the
bits in list_entries_in_use for entries that are being freed and that
alloc_list_entry() checks the two bitmaps that track list entries being
freed. I'm not sure whether one of these two approaches is really better
than the other.
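
To make that concrete, here is a minimal userspace sketch of what I mean.
This is not the lockdep code itself: the identifiers (entry_in_use,
entry_being_freed, zap_in_use_entries(), alloc_entry_checking_freed()) and
the array sizes are made up, and the bool arrays stand in for the real
bitmaps.

    #include <stdbool.h>

    #define MAX_LIST_ENTRIES 64

    /* One bitmap of entries in use (stands in for list_entries_in_use). */
    static bool entry_in_use[MAX_LIST_ENTRIES];
    /* Two bitmaps of entries queued for freeing, one per pending_free slot. */
    static bool entry_being_freed[2][MAX_LIST_ENTRIES];

    /*
     * Approach used in this series: the zap path checks all three bitmaps,
     * so list_del_rcu() is called at most once per entry even when both
     * pending_free slots fill up before the RCU callback runs.
     */
    static void zap_in_use_entries(int pf_idx)
    {
            for (int i = 0; i < MAX_LIST_ENTRIES; i++) {
                    if (!entry_in_use[i] ||
                        entry_being_freed[0][i] || entry_being_freed[1][i])
                            continue;
                    /* The real code would list_del_rcu() the entry here. */
                    entry_being_freed[pf_idx][i] = true;
            }
    }

    /*
     * The alternative mentioned above: the zap path clears the in-use bit
     * immediately, so the allocator has to skip entries whose free has not
     * completed yet.
     */
    static int alloc_entry_checking_freed(void)
    {
            for (int i = 0; i < MAX_LIST_ENTRIES; i++) {
                    if (entry_in_use[i] ||
                        entry_being_freed[0][i] || entry_being_freed[1][i])
                            continue;
                    entry_in_use[i] = true;
                    return i;
            }
            return -1; /* all entries taken */
    }

    int main(void)
    {
            int idx = alloc_entry_checking_freed();

            zap_in_use_entries(0);
            /* idx is now queued for freeing in pending_free slot 0. */
            return idx < 0;
    }

In both variants the pending-free bits are cleared once the RCU grace
period has ended, at which point the entries become available for
allocation again.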

Bart.
