Message-ID: <20171017162124.n6epaodlx3q56xw4@gmail.com>
Date: Tue, 17 Oct 2017 18:21:24 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Byungchul Park <byungchul.park@....com>, johan@...nel.org,
arnd@...db.de, torvalds@...ux-foundation.org,
linux-kernel@...r.kernel.org, peterz@...radead.org, hpa@...or.com,
tony@...mide.com, linux-tip-commits@...r.kernel.org,
kernel-team@....com
Subject: Re: [tip:locking/urgent] locking/lockdep: Disable cross-release
features for now
* Thomas Gleixner <tglx@...utronix.de> wrote:
> On Tue, 17 Oct 2017, Ingo Molnar wrote:
> > * Thomas Gleixner <tglx@...utronix.de> wrote:
> > > On Tue, 17 Oct 2017, Ingo Molnar wrote:
> > > > No, please fix performance.
> > >
> > > You know very well that with the cross release stuff we have to take the
> > > performance hit of stack unwinding because we have no idea whether there
> > > will show up a new lock relation later or not. And there is not much you
> > > can do in that respect.
> > >
> > > OTOH, the cross release feature unearthed real deadlocks already so it is a
> > > valuable debug feature and having an explicit config switch which defaults
> > > to N is well worth it.
> >
> > I disagree, because even if that's correct, the choices are not binary. The
> > performance regression was a slowdown of around 7x: lockdep boot overhead on that
> > particular system went from +3 seconds to +21 seconds...
>
> Hmm, I might have missed something, but what I've seen in this thread is:
>
> > > > Boot time (from "Linux version" to login prompt) had in fact doubled
> > > > since 4.13 where it took 17 seconds (with my current config) compared to
> > > > the 35 seconds I now see with 4.14-rc4.
>
> So that's 2x not 7x. [...]
Yeah, so what I think you missed is that the no-lockdep boot time is 14 seconds.
So we have:
  vanilla:              14 secs
  lockdep:              17 secs  (+3 secs)
  lockdep+crossrelease: 35 secs (+21 secs)
So lockdep overhead got 7x worse on this system.
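For clarity, the arithmetic behind the 7x figure can be spelled out: the comparison is between the *added* overhead of each config relative to the vanilla (no-lockdep) boot, not the total boot times. A minimal sketch, using only the numbers quoted above:

```python
# Boot times from the thread (seconds, on the system being discussed).
vanilla = 14              # no lockdep
lockdep = 17              # lockdep enabled
crossrelease = 35         # lockdep + cross-release

# Overhead relative to vanilla, which is what matters for the comparison:
lockdep_overhead = lockdep - vanilla            # +3 secs
crossrelease_overhead = crossrelease - vanilla  # +21 secs

# Ratio of overheads: 21 / 3 = 7x, versus the 2x seen when
# (incorrectly) comparing total boot times (35 / 17).
factor = crossrelease_overhead / lockdep_overhead
print(factor)  # 7.0
```

This is why the 2x figure quoted earlier in the thread understates the regression: it divides total boot times instead of the lockdep-attributable overhead.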
Thanks,
Ingo