Message-ID: <20160708142929.lvxgapbxfv5wfbk2@treble>
Date: Fri, 8 Jul 2016 09:29:29 -0500
From: Josh Poimboeuf <jpoimboe@...hat.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: Byungchul Park <byungchul.park@....com>, peterz@...radead.org,
linux-kernel@...r.kernel.org, walken@...gle.com,
Frédéric Weisbecker <fweisbec@...il.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH 1/2] x86/dumpstack: Optimize save_stack_trace
On Fri, Jul 08, 2016 at 12:08:19PM +0200, Ingo Molnar wrote:
>
> * Byungchul Park <byungchul.park@....com> wrote:
>
> > On Mon, Jul 04, 2016 at 07:27:54PM +0900, Byungchul Park wrote:
> > > I suggested this patch on https://lkml.org/lkml/2016/6/20/22. However,
> > > I want to proceed with it separately, since the two are somewhat
> > > independent of each other. Frankly speaking, I want this patchset to be
> > > accepted first so that the crossrelease feature can use the optimized
> > > save_stack_trace_norm(), which makes crossrelease work smoothly.
> >
> > What do you think about this way to improve it?
>
> I like both of your improvements; the speedup is impressive:
>
> [ 2.327597] save_stack_trace() takes 87114 ns
> ...
> [ 2.781694] save_stack_trace() takes 20044 ns
> ...
> [ 3.103264] save_stack_trace takes 3821 (sched_lock)
>
> Could you please also measure call graph recording (perf record -g): how much
> faster does it get with your patches, and what are our remaining performance hot
> spots?
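
(Side note: I assume "call graph recording" here means the usual
framepointer-based unwind, i.e. something along the lines of:

	perf record -g -- <workload>
	perf report

run with and without the patches.)
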
>
> Could you please rebase your patches onto the latest -tip tree, because this
> commit I merged earlier today:
>
> 81c2949f7fdc x86/dumpstack: Add show_stack_regs() and use it
>
> conflicts with your patches. (I'll push this commit out later today.)
>
> Also, could you please rename the _norm names to _fast or so, to signal that this
> is a faster but less reliable method to get a stack dump? Nobody knows what
> '_norm' means, but '_fast' is pretty self-explanatory.
Hm, but is the print_context_stack_bp() variant really less reliable? From
what I can tell, its only differences vs print_context_stack() are (see the
sketch below the list):
- It doesn't scan the stack for "guesses" (which are 'unreliable' and
are ignored by the ops->address() callback anyway).
- It stops if ops->address() returns an error (which in this case means
the array is full anyway).
- It stops if the address isn't a kernel text address. I think this
shouldn't normally be possible unless there's some generated code like
bpf on the stack. Maybe it could be slightly improved for this case.
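
For reference, here is a simplified sketch of the frame-pointer walk,
written from memory against the ~v4.7 arch/x86/kernel/dumpstack.c layout.
The real code also bounds-checks the frame pointer against the task's stack
and handles the ftrace return trampoline, which I've left out here; the
function name walk_frames_bp() is just for the sketch, while stack_frame,
stacktrace_ops and __kernel_text_address() are the kernel's:

struct stack_frame {
	struct stack_frame *next_frame;		/* caller's saved %rbp */
	unsigned long return_address;
};

static unsigned long walk_frames_bp(unsigned long bp,
				    const struct stacktrace_ops *ops,
				    void *data)
{
	struct stack_frame *frame = (struct stack_frame *)bp;

	while (frame) {		/* real code: valid_stack_ptr() check here */
		unsigned long addr = frame->return_address;

		/* Stop at the first non-text address (e.g. generated
		 * code like bpf on the stack). */
		if (!__kernel_text_address(addr))
			break;

		/* Every reported address comes from an actual frame, so
		 * there are no scanned "guesses"; a non-zero return from
		 * the callback (array full) ends the walk. */
		if (ops->address(data, addr, 1))
			break;

		frame = frame->next_frame;
	}

	return (unsigned long)frame;
}
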
So instead of adding a new save_stack_trace_fast() variant, why don't we
just modify the existing save_stack_trace() to use
print_context_stack_bp()?
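
Concretely, I think the change would be on the order of this (sketching from
memory against arch/x86/kernel/stacktrace.c, so the exact context may
differ):

static const struct stacktrace_ops save_stack_ops = {
	.stack		= save_stack_stack,
	.address	= save_stack_address,
-	.walk_stack	= print_context_stack,
+	.walk_stack	= print_context_stack_bp,
};

/* save_stack_trace() itself would stay as it is today: */
void save_stack_trace(struct stack_trace *trace)
{
	dump_trace(current, NULL, NULL, 0, &save_stack_ops, trace);
	if (trace->nr_entries < trace->max_entries)
		trace->entries[trace->nr_entries++] = ULONG_MAX;
}

(and presumably the same switch for the ops used by save_stack_trace_tsk()
and save_stack_trace_regs(), if they should behave the same way).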
--
Josh