Date:   Tue, 31 Oct 2017 15:52:47 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Dmitry Vyukov <dvyukov@...gle.com>
Cc:     Michal Hocko <mhocko@...nel.org>,
        Byungchul Park <byungchul.park@....com>,
        syzbot 
        <bot+e7353c7141ff7cbb718e4c888a14fa92de41ebaa@...kaller.appspotmail.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Dan Williams <dan.j.williams@...el.com>,
        Johannes Weiner <hannes@...xchg.org>, Jan Kara <jack@...e.cz>,
        jglisse@...hat.com, LKML <linux-kernel@...r.kernel.org>,
        linux-mm@...ck.org, shli@...com, syzkaller-bugs@...glegroups.com,
        Thomas Gleixner <tglx@...utronix.de>,
        Vlastimil Babka <vbabka@...e.cz>, ying.huang@...el.com,
        kernel-team@....com
Subject: Re: possible deadlock in lru_add_drain_all

On Tue, Oct 31, 2017 at 04:55:32PM +0300, Dmitry Vyukov wrote:

> I noticed that for a simple 2 lock deadlock lockdep prints only 2
> stacks.

3, one of which is useless :-)

For the AB-BA we print where we acquire A (#0), where we acquire B while
holding A (#1), and then the unwind at the point of the splat, which is
where we acquire A while holding B.

The #0 trace is useless.

> FWIW in user-space TSAN we print 4 stacks for such deadlocks,
> namely where A was locked, where B was locked under A, where B was
> locked, where A was locked under B. It makes it easier to figure out
> what happens. However, for this report it seems to be 8 stacks this
> way. So it's probably hard either way.

Right, it's a question of performance and overhead I suppose. Lockdep
typically only saves a stack trace when it finds a new link.

So only when we find the AB relation do we save a stacktrace, which
reflects the location where we acquire B. But by that time we've lost
where it was we acquired A.

If we want to save those stacks, we have to save a stacktrace on _every_
lock acquire, simply because we never know ahead of time whether there
will be a new link. Doing this is _expensive_.

Furthermore, the space into which we store stacktraces is limited;
since memory allocators use locks, we can't very well use dynamic memory
for lockdep -- that would cause recursion and robustness issues.

Also, it's usually not too hard to find the site where we took A if we
know the site of AB.
