Message-ID: <87v9ern82n.fsf@nanos.tec.linutronix.de>
Date: Fri, 30 Oct 2020 14:42:56 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Steven Rostedt <rostedt@...dmis.org>,
    LKML <linux-kernel@...r.kernel.org>,
    Ingo Molnar <mingo@...nel.org>, kan.liang@...ux.intel.com,
    like.xu@...ux.intel.com
Subject: Re: [BUG] Stack overflow when running perf and function tracer

On Fri, Oct 30 2020 at 12:36, Thomas Gleixner wrote:
> On Fri, Oct 30 2020 at 11:32, Peter Zijlstra wrote:
> So the real question is what else is on that stack which blows it up
> close to 4k? Btw, it would be massively helpful for this kind of crash
> to print the actual stack depth per entry in the backtrace.
>
> Here is the partial stack trace:
> Stack usage
> ring_buffer_lock_reserve+0x12c/0x380
> trace_function+0x27/0x130
> function_trace_call+0x133/0x180
> perf_output_begin+0x4d/0x2d0                  64+
> perf_log_throttle+0x9a/0x120                 470+
> __perf_event_account_interrupt+0xa9/0x120
> __perf_event_overflow+0x2b/0xf0
> __intel_pmu_pebs_event+0x2ec/0x3e0           760+
> intel_pmu_drain_pebs_nhm+0x268/0x330         200+
> handle_pmi_common+0xc2/0x2b0
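
As an aside on the wish above to print the actual stack depth per
entry: a minimal sketch of the idea, assuming the ORC unwinder (where
struct unwind_state carries the current stack pointer in state.sp).
This is illustrative only, not the actual dumpstack code, and
show_stack_usage() is a made-up helper; the outermost frame's usage up
to the stack top is omitted for brevity:

	#include <linux/printk.h>
	#include <linux/sched.h>
	#include <asm/ptrace.h>
	#include <asm/unwind.h>

	/*
	 * Print per-entry stack consumption, computed as the delta
	 * between successive stack pointers while unwinding.
	 */
	static void show_stack_usage(struct task_struct *task,
				     struct pt_regs *regs)
	{
		struct unwind_state state;
		unsigned long prev_sp = 0, prev_ip = 0;

		for (unwind_start(&state, task, regs, NULL);
		     !unwind_done(&state); unwind_next_frame(&state)) {
			/* Attribute the delta to the inner entry */
			if (prev_ip)
				pr_info(" %pS %4lu\n", (void *)prev_ip,
					state.sp - prev_sp);
			prev_sp = state.sp;
			prev_ip = unwind_get_return_address(&state);
		}
	}
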
So Steven provided a backtrace with the actual stack depth printed. The
second column is the offset into the stack, the third the number of
bytes the entry consumes, i.e. the delta to the next entry's offset
(e.g. 0x0438 - 0x01b8 = 0x280 = 640 for perf_output_begin):
ring_buffer_lock_reserve+0x12c/0x380        0030    104
trace_function+0x27/0xf0                    0098     56
function_trace_call+0x124/0x190             00d0    224
__rcu_read_lock+0x5/0x20                    01b0      8
perf_output_begin+0x4d/0x2d0                01b8    640
perf_log_throttle+0x9a/0x120                0438    624
__perf_event_account_interrupt+0xa6/0x120   06a8     32
__perf_event_overflow+0x2b/0xf0             06c8     48
__intel_pmu_pebs_event+0x2ec/0x3e0          06f8    960
intel_pmu_drain_pebs_nhm+0x268/0x330        0ab8    256
handle_pmi_common+0xc2/0x2b0                0bb8    584
intel_pmu_handle_irq+0xc8/0x160             0e00     64
perf_event_nmi_handler+0x28/0x50            0e40     32
nmi_handle+0x80/0x190                       0e60     64
default_do_nmi+0x6b/0x170                   0ea0     40
exc_nmi+0x15d/0x1a0                         0ec8     40
end_repeat_nmi+0x16/0x55                    0ef0    272
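
For the record, the usage column sums to

    104 + 56 + 224 + 8 + 640 + 624 + 32 + 48 + 960 + 256 + 584
        + 64 + 32 + 64 + 40 + 40 + 272 = 4048

bytes, and the last frame ends at 0x0ef0 + 272 = 0x1000, so the full
4k of the NMI stack is accounted for. The function tracer itself adds
only the four innermost frames (104 + 56 + 224 + 8 = 392 bytes); the
perf side contributes the rest, with __intel_pmu_pebs_event (960),
perf_output_begin (640), perf_log_throttle (624) and handle_pmi_common
(584) alone eating 2808 bytes.
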
So I missed perf_output_begin and handle_pmi_common ...
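
Comparing the two traces: the earlier estimate had 64+ for
perf_output_begin where the measured value is 640, and no number at
all for handle_pmi_common (measured: 584). Those two differences alone
are 640 - 64 + 584 = 1160 bytes, which is where the bulk of the
"missing" stack usage went.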