Message-ID: <20171031145804.ulrpk245ih6t7q7h@dhcp22.suse.cz>
Date: Tue, 31 Oct 2017 15:58:04 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Dmitry Vyukov <dvyukov@...gle.com>,
Byungchul Park <byungchul.park@....com>,
syzbot
<bot+e7353c7141ff7cbb718e4c888a14fa92de41ebaa@...kaller.appspotmail.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Dan Williams <dan.j.williams@...el.com>,
Johannes Weiner <hannes@...xchg.org>, Jan Kara <jack@...e.cz>,
jglisse@...hat.com, LKML <linux-kernel@...r.kernel.org>,
linux-mm@...ck.org, shli@...com, syzkaller-bugs@...glegroups.com,
Thomas Gleixner <tglx@...utronix.de>,
Vlastimil Babka <vbabka@...e.cz>, ying.huang@...el.com,
kernel-team@....com
Subject: Re: possible deadlock in lru_add_drain_all
On Tue 31-10-17 15:52:47, Peter Zijlstra wrote:
[...]
> If we want to save those stacks; we have to save a stacktrace on _every_
> lock acquire, simply because we never know ahead of time if there will
> be a new link. Doing this is _expensive_.
>
> Furthermore, the space into which we store stacktraces is limited;
> since memory allocators use locks we can't very well use dynamic memory
> for lockdep -- that would give recursion and robustness issues.
Wouldn't stackdepot help here? Sure, the first stack unwind will be
costly, but then you amortize that cost over time. It is quite likely
that locks are acquired from the same addresses.
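
For illustration only, a minimal sketch of the deduplication idea: unwind
once on acquire, hand the trace to stackdepot, and keep just the returned
handle. This is not lockdep code, and it assumes the current
stack_depot_save()/stack_depot_fetch() interface from <linux/stackdepot.h>
(the interface at the time of this thread was depot_save_stack()); the
helper names are made up for the example.

	#include <linux/kernel.h>
	#include <linux/gfp.h>
	#include <linux/stacktrace.h>
	#include <linux/stackdepot.h>

	#define ACQUIRE_STACK_DEPTH	16

	static depot_stack_handle_t record_acquire_stack(void)
	{
		unsigned long entries[ACQUIRE_STACK_DEPTH];
		unsigned int nr;

		/* Unwind once; skip this helper's own frame. */
		nr = stack_trace_save(entries, ARRAY_SIZE(entries), 1);

		/*
		 * Identical traces hash to the same handle, so repeated
		 * acquisitions from the same call chain cost only a hash
		 * lookup after the first unwind.  GFP_NOWAIT because this
		 * may run with locks held; if the depot cannot allocate,
		 * we get a zero handle and simply lose the trace.
		 */
		return stack_depot_save(entries, nr, GFP_NOWAIT);
	}

	static void print_acquire_stack(depot_stack_handle_t handle)
	{
		unsigned long *entries;
		unsigned int nr;

		if (!handle)
			return;

		nr = stack_depot_fetch(handle, &entries);
		stack_trace_print(entries, nr, 0);
	}

The per-link storage then shrinks to a 4-byte handle, and the unwind cost
is paid only the first time a given acquisition stack is seen.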
--
Michal Hocko
SUSE Labs