Message-ID: <004501d3216d$703bde80$50b39b80$@lge.com>
Date: Wed, 30 Aug 2017 17:53:23 +0900
From: "Byungchul Park" <byungchul.park@....com>
To: "'Peter Zijlstra'" <peterz@...radead.org>,
"'Sergey Senozhatsky'" <sergey.senozhatsky.work@...il.com>
Cc: "'Bart Van Assche'" <Bart.VanAssche@....com>,
<linux-kernel@...r.kernel.org>, <linux-block@...r.kernel.org>,
<martin.petersen@...cle.com>, <axboe@...nel.dk>,
<linux-scsi@...r.kernel.org>, <sfr@...b.auug.org.au>,
<linux-next@...r.kernel.org>, <kernel-team@....com>
Subject: RE: possible circular locking dependency detected [was: linux-next: Tree for Aug 22]
> -----Original Message-----
> From: Peter Zijlstra [mailto:peterz@...radead.org]
> Sent: Wednesday, August 30, 2017 5:48 PM
> To: Sergey Senozhatsky
> Cc: Byungchul Park; Bart Van Assche; linux-kernel@...r.kernel.org; linux-
> block@...r.kernel.org; martin.petersen@...cle.com; axboe@...nel.dk; linux-
> scsi@...r.kernel.org; sfr@...b.auug.org.au; linux-next@...r.kernel.org;
> kernel-team@....com
> Subject: Re: possible circular locking dependency detected [was: linux-
> next: Tree for Aug 22]
>
> On Wed, Aug 30, 2017 at 10:42:07AM +0200, Peter Zijlstra wrote:
> >
> > So the overhead looks to be spread out over all sorts, which makes it
> > harder to find and fix.
> >
> > stack unwinding is done lots and is fairly expensive, I've not yet
> > checked if crossrelease does too much of that.
>
> Aah, we do an unconditional stack unwind for every __lock_acquire() now.
> It keeps a trace in the xhlocks[].
Yeah.. I also think this is the most significant source of the overhead..
>
> Does the below cure most of that overhead?
>
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 44c8d0d17170..7b872036b72e 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -4872,7 +4872,7 @@ static void add_xhlock(struct held_lock *hlock)
> xhlock->trace.max_entries = MAX_XHLOCK_TRACE_ENTRIES;
> xhlock->trace.entries = xhlock->trace_entries;
> xhlock->trace.skip = 3;
> - save_stack_trace(&xhlock->trace);
> + /* save_stack_trace(&xhlock->trace); */
> }
>
> static inline int same_context_xhlock(struct hist_lock *xhlock)