Message-ID: <20190403151157.dcjmunsl7mna4ore@treble>
Date: Wed, 3 Apr 2019 10:11:57 -0500
From: Josh Poimboeuf <jpoimboe@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>, x86@...nel.org,
Andy Lutomirski <luto@...nel.org>
Subject: Re: [patch 15/14] x86/dumpstack/64: Speedup in_exception_stack()
On Wed, Apr 03, 2019 at 10:10:41AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 03, 2019 at 10:08:28AM +0200, Peter Zijlstra wrote:
> > On Tue, Apr 02, 2019 at 10:51:49AM -0500, Josh Poimboeuf wrote:
> > > On Tue, Apr 02, 2019 at 05:48:56PM +0200, Thomas Gleixner wrote:
> > > > > With the above "(stk <= begin || stk >= end)" check, removing the loop
> > > > > becomes not all that important since exception stack dumps are quite
> > > > > rare and not performance sensitive. With all the macros this code
> > > > > becomes a little more obtuse, so I'm not sure whether removal of the
> > > > > loop is a net positive.
> > > >
> > > > What about perf? It's NMI context and probably starts from there. Peter?
> > >
> > > I believe perf unwinds starting from the regs from the context which was
> > > interrupted by the NMI.
> >
> > Aah, indeed. So then we only see exception stacks when the NMI lands in
> > an exception, which is, as you say, quite rare.
>
> Aah, ftrace OTOH might still trigger this lots. When you do function
> tracer with stacktrace enabled it'll do unwinds _everywhere_.
Even then, ftrace stack tracing will be really slow regardless, and this
loop removal would be a tiny performance improvement for a tiny fraction
of those stack traces. Unless the improvement is measurable, I would
personally rather err on the side of code readability.
--
Josh