Message-ID: <607c446bc8d3a0cc6e96aa9792e075913ad6b2c6.camel@redhat.com>
Date: Wed, 16 Sep 2020 17:11:59 -0400
From: Qian Cai <cai@...hat.com>
To: Boqun Feng <boqun.feng@...il.com>
Cc: linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Jonathan Corbet <corbet@....net>,
Waiman Long <longman@...hat.com>
Subject: Re: [RFC v7 11/19] lockdep: Fix recursive read lock related
safe->unsafe detection
On Thu, 2020-09-17 at 00:14 +0800, Boqun Feng wrote:
> Found a way to resolve this while still keeping the BFS. Every time we
> want to enqueue a lock_list, we basically enqueue the whole dep list of
> entries from the previous lock_list, so we can use a trick here: instead
> of enqueuing all the entries, we only enqueue the first entry and fetch
> the other sibling entries with list_next_or_null_rcu(). Patch as below;
> I also took the chance to clean up the code and add more comments. I
> could see this number (in /proc/lockdep_stats):
>
> max bfs queue depth: 201
>
> down to (after applying this patch)
>
> max bfs queue depth: 61
>
> with x86_64_defconfig along with lockdep and selftest configs.
>
> Qian, could you give it a try?
It works fine; the number went down from around 3000 to 500 on our workloads.
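
A minimal user-space C sketch of the queue-depth trick described in the
quoted message: children of a node sit on a singly linked sibling list, and
the BFS enqueues only the first child of each node, walking the remaining
siblings lazily when that child is dequeued. The real patch fetches the
siblings with list_next_or_null_rcu(); everything else here, including the
struct and function names, is made up for illustration and is not the
lockdep code itself.

/*
 * Sketch only: enqueue the first child, walk siblings at dequeue time.
 */
#include <stdio.h>

struct node {
	const char *name;
	struct node *first_child;	/* head of this node's dep list */
	struct node *next_sibling;	/* next entry on the parent's dep list */
};

#define QUEUE_SIZE 16

struct queue {
	struct node *slot[QUEUE_SIZE];
	int head, tail, max_depth;
};

static void enqueue(struct queue *q, struct node *n)
{
	int depth = q->tail - q->head;

	if (depth >= QUEUE_SIZE)
		return;			/* sketch only: ignore overflow */
	q->slot[q->tail++ % QUEUE_SIZE] = n;
	if (++depth > q->max_depth)
		q->max_depth = depth;
}

static struct node *dequeue(struct queue *q)
{
	if (q->head == q->tail)
		return NULL;
	return q->slot[q->head++ % QUEUE_SIZE];
}

/* Visit every node reachable from @root, enqueuing only first children. */
static void bfs(struct node *root, struct queue *q)
{
	struct node *n;

	enqueue(q, root);
	while ((n = dequeue(q))) {
		struct node *sib;

		/*
		 * Walk the sibling list of the dequeued entry here instead
		 * of having enqueued every sibling up front; only each
		 * sibling's own first child goes back on the queue.
		 */
		for (sib = n; sib; sib = sib->next_sibling) {
			printf("visit %s\n", sib->name);
			if (sib->first_child)
				enqueue(q, sib->first_child);
		}
	}
}

int main(void)
{
	/* A -> {B, C}, B -> {D} */
	struct node d = { "D", NULL, NULL };
	struct node c = { "C", NULL, NULL };
	struct node b = { "B", &d, &c };
	struct node a = { "A", &b, NULL };
	struct queue q = { .head = 0, .tail = 0, .max_depth = 0 };

	bfs(&a, &q);
	printf("max bfs queue depth: %d\n", q.max_depth);
	return 0;
}

Running this prints the visit order A, B, C, D with a max queue depth of 1,
whereas enqueuing every child up front would have held B and C on the queue
at the same time; the same effect is what shrinks the reported
"max bfs queue depth" above.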