Message-Id: <06E43DA0-9976-4D44-AC72-5ED8A7022FA3@lca.pw>
Date: Sun, 17 May 2020 07:12:33 -0400
From: Qian Cai <cai@....pw>
To: Waiman Long <longman@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will@...nel.org>, Ingo Molnar <mingo@...hat.com>,
David Howells <dhowells@...hat.com>,
Alexander Viro <viro@...iv.linux.org.uk>,
linux-fsdevel@...r.kernel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: "BUG: MAX_LOCKDEP_ENTRIES too low" with 6979 "&type->s_umount_key"
> On May 16, 2020, at 9:16 PM, Waiman Long <longman@...hat.com> wrote:
>
> The lock_list table entries are for tracking a lock's forward and
> backward dependencies. The lockdep_chains isn't the right lockdep file
> to look at. Instead, check the lockdep files for entries with the
> maximum BD (backward dependency) + FD (forward dependency). That will
> give you a better view of which locks are consuming most of the
> lock_list entries. Also take a look at lockdep_stats for an overall
> view of how much various table entries are being consumed.
Thanks for the hint. It seems something in vfs is the culprit, because every single one of those is triggered from path_openat() (vfs_open()) or vfs_get_tree().
Right after the system boots, the number of lock_list entries is around 10000. After running the LTP syscalls and mm tests, the number is around 20000. Then it goes all the way over the max (32700) while running the LTP fs tests, most of the time from a test that reads every single file in sysfs.
I’ll decode the lockdep file to see if there are any more clues; something like the sketch below is what I have in mind.
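
Untested sketch of the decoding (it assumes the /proc/lockdep line format with CONFIG_PROVE_LOCKING, where each class line carries FD:/BD: counters, and simply ranks the classes by FD + BD as you suggested):

#!/usr/bin/env python3
# Quick-and-dirty ranking of lock classes by FD + BD, i.e. by how many
# lock_list (dependency) entries each class accounts for.  This assumes
# the /proc/lockdep line format used with CONFIG_PROVE_LOCKING, where
# each class line carries "FD:" and "BD:" counters; adjust the regex if
# the format on your kernel differs.
import re

PATTERN = re.compile(r'FD:\s*(\d+)\s+BD:\s*(\d+)\s+(.*)')

classes = []
with open('/proc/lockdep') as f:
    for line in f:
        m = PATTERN.search(line)
        if m:
            fd, bd = int(m.group(1)), int(m.group(2))
            classes.append((fd + bd, fd, bd, m.group(3).strip()))

# Show the 20 classes with the largest combined dependency counts.
for total, fd, bd, rest in sorted(classes, reverse=True)[:20]:
    print(f'{total:6d}  FD={fd:<6} BD={bd:<6} {rest}')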