Message-ID: <20201118151038.GX3121392@hirez.programming.kicks-ass.net>
Date: Wed, 18 Nov 2020 16:10:38 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
Cc: Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, Dmitry Vyukov <dvyukov@...gle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v3] lockdep: Allow tuning tracing capacity constants.
On Wed, Nov 18, 2020 at 11:30:05PM +0900, Tetsuo Handa wrote:
> The problem is that we can't know what exactly is consuming these resources.
> My question is: do you have a plan to make it possible to know what exactly
> is consuming these resources?
I'm pretty sure it's in /proc/lockdep* somewhere.
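For completeness, these all live next to each other (assuming a
lockdep-enabled kernel; lockdep_chains needs CONFIG_PROVE_LOCKING and
lock_stat needs CONFIG_LOCK_STAT):

$ ls /proc/lockdep*
/proc/lockdep  /proc/lockdep_chains  /proc/lockdep_stats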
IIRC you were suffering from "MAX_LOCKDEP_ENTRIES too low!". That message
comes from alloc_list_entry(), which is used by add_lock_to_list(), which
in turn is used by check_prev_add() to add entries to ->locks_after and
->locks_before.
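If you want to chase that chain yourself, the warning text is a handy
anchor into the source (assuming you have a kernel tree at hand):

$ grep -n "MAX_LOCKDEP_ENTRIES too low" kernel/locking/lockdep.c

and work outward from alloc_list_entry().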
/me frobs around in lockdep_proc.c and finds that l_show() walks
->locks_after; l_show() is what implements /proc/lockdep.
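The format there is one header line per lock class, followed by that
class's forward dependencies; hypothetical keys and names, but roughly:

$ head -4 /proc/lockdep
all lock classes:
ffffffff12345678 FD:   42 BD:    1 +.+.: cgroup_mutex
 -> [ffffffffabcdef00] cgroup_idr_lock
 -> [ffffffff87654321] css_set_lock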
So something like:
$ grep "^ ->" /proc/lockdep | wc -l
5064
That should be roughly half the number of list_entries used (it only
shows the forward dependencies).
And /proc/lockdep_stats:

  direct dependencies: 11886 [max: 32768]

gives the total number of list_entries in use.
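Quick sanity check on those two numbers:

$ echo $((2 * 5064))
10128

which is in the right ballpark of 11886; IIRC the difference is entries
/proc/lockdep doesn't print (l_show() only emits the distance-1 links).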
The trick then is finding where they all go, because as you can see, my
machine is nowhere near saturated, even though it's been running for a
few days.
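As a first stab at that, counting the "->" lines per class header should
show the biggest consumers. A quick sketch (assumes class names contain
no spaces, which mostly holds):

$ awk '/^ -> / { n++; next }
       / FD:/  { if (cls) print n, cls; cls = $NF; n = 0 }
       END     { if (cls) print n, cls }' /proc/lockdep | sort -rn | head

The FD:/BD: columns in the class lines give per-class (transitive)
forward/backward dependency counts too, if you want a different cut at
the same data.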
So, go forth and analyze your problem.