Message-ID: <20200917015339.GE127490@debian-boqun.qqnc3lrjykvubdpftowmye0fmh.lx.internal.cloudapp.net>
Date: Thu, 17 Sep 2020 09:53:39 +0800
From: Boqun Feng <boqun.feng@...il.com>
To: Qian Cai <cai@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Jonathan Corbet <corbet@....net>,
Waiman Long <longman@...hat.com>
Subject: Re: [RFC v7 11/19] lockdep: Fix recursive read lock related
safe->unsafe detection
On Wed, Sep 16, 2020 at 05:11:59PM -0400, Qian Cai wrote:
> On Thu, 2020-09-17 at 00:14 +0800, Boqun Feng wrote:
> > Found a way to resolve this while still keeping the BFS. Every time we
> > want to enqueue a lock_list, we basically enqueue a whole dep list of
> > entries from the previous lock_list, so we can use a trick here: instead
> > of enqueueing all the entries, we only enqueue the first entry, and we
> > can fetch the other sibling entries with list_next_or_null_rcu(). Patch
> > as below; I also took the chance to clean up the code and add more
> > comments. I could see this number (in /proc/lockdep_stats):
> >
> > max bfs queue depth: 201
> >
> > down to (after applying this patch)
> >
> > max bfs queue depth: 61
> >
> > with x86_64_defconfig along with lockdep and selftest configs.
> >
> > Qian, could you give it a try?
>
> It works fine; the number went down from around 3000 to 500 on our workloads.
>
Thanks, let me send a proper patch. I will add a Reported-by tag from
you.
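
For illustration, here is a rough sketch of the enqueue-only-the-first-sibling
idea. It is simplified: get_dep_list(), visit_lock_entry() and the
circular_queue helpers stand in for the real lockdep internals, so this is not
the exact __bfs() code, just the shape of the trick:

	/*
	 * Sketch only: placeholder helpers, not kernel/locking/lockdep.c.
	 */
	static void bfs_expand(struct circular_queue *cq, struct lock_list *lock)
	{
		struct list_head *head = get_dep_list(lock);
		struct lock_list *entry;
		bool first = true;

		list_for_each_entry_rcu(entry, head, entry) {
			visit_lock_entry(entry, lock);	/* record entry->parent etc. */

			/*
			 * Only the first entry of the dep list goes into the
			 * queue; its siblings are reachable later via
			 * list_next_or_null_rcu(), so the queue holds at most
			 * one entry per dependency list.
			 */
			if (first) {
				first = false;
				__cq_enqueue(cq, entry);
			}
		}
	}

	static struct lock_list *bfs_next_sibling(struct lock_list *lock)
	{
		/*
		 * When @lock is dequeued, pick up its next untraversed
		 * sibling from the parent's dependency list.
		 */
		return list_next_or_null_rcu(get_dep_list(lock->parent),
					     &lock->entry, struct lock_list, entry);
	}

This way the queue holds one entry per dependency list on the BFS frontier
rather than whole lists, which should be why the max queue depth drops.
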
Regards,
Boqun